
SOFABolt Project


1. Introduction

SOFABolt is a network communication framework based on Netty, developed by Ant Financial Services Group.

  • Netty was created so that Java programmers could focus on the business logic built on top of network communication, instead of wrestling with low-level NIO details and hard-to-debug network problems.
  • SOFABolt was created so that middleware developers could focus on product features, instead of reinventing the communication-framework wheel over and over again.

The name Bolt comes from the Disney animated film Bolt. It is a lightweight, easy-to-use, high-performance, and easily extensible communication framework built on Netty best practices. Over the years we have solved many network communication problems in microservices and messaging middleware, accumulated a lot of experience, and kept optimizing and improving; we distill those solutions into SOFABolt so that more network-communication scenarios can benefit from them. The framework is already used by many Ant middleware products, including microservices (SOFARPC), the message center, distributed transactions, distributed switches, and the configuration center.

2. Features

(figure: feature overview)

SOFABolt's core features include:

  • Basic communication ( remoting-core )
    • Efficient network I/O and thread model based on Netty
    • Connection management (lock-free connection establishment, scheduled disconnection, automatic reconnection)
    • Basic invocation models ( oneway, sync, future, callback )
    • Timeout control
    • Batch decoding and batch submission to processors
    • Heartbeat and IDLE event handling
  • Protocol skeleton ( protocol-skeleton )
    • Commands and command processors
    • Encoding and decoding handlers
    • Heartbeat triggers
  • Private protocol implementation - the RPC protocol ( protocol-implementation )
    • Design of the RPC communication protocol
    • Flexible control over deserialization timing
    • Fail-fast handling of timed-out requests
    • User request processors ( UserProcessor )
    • Bidirectional (duplex) communication

Usage 1

Used as a remoting framework, SOFABolt lets you skip the details of implementing a private protocol and use the built-in RPC protocol directly. You can start a client and a server, register a user request processor, and make remote calls with very little code, while built-in features such as connection management and heartbeats work out of the box. The currently supported invocation types are shown below:

(figure: supported invocation types)
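As a concrete illustration, the sketch below wires up a server, a user request processor, and a synchronous client call using SOFABolt's 1.x public API (RpcServer, RpcClient, SyncUserProcessor). It is a minimal sketch, not an official quick start: it needs the sofa-bolt dependency on the classpath, and the exact signatures should be checked against the version you use.

```java
import com.alipay.remoting.BizContext;
import com.alipay.remoting.rpc.RpcClient;
import com.alipay.remoting.rpc.RpcServer;
import com.alipay.remoting.rpc.protocol.SyncUserProcessor;

public class QuickStart {
    public static void main(String[] args) throws Exception {
        // Server: listen on a port and register a processor for String requests.
        RpcServer server = new RpcServer(8999);
        server.registerUserProcessor(new SyncUserProcessor<String>() {
            @Override
            public Object handleRequest(BizContext bizCtx, String request) {
                return "echo: " + request;
            }

            @Override
            public String interest() {
                // The request type this processor handles.
                return String.class.getName();
            }
        });
        server.start();

        // Client: a synchronous call with a 3-second timeout.
        RpcClient client = new RpcClient();
        client.init();
        String result = (String) client.invokeSync("127.0.0.1:8999", "hello", 3000);
        System.out.println(result);

        client.shutdown();
        server.stop();
    }
}
```

Connection management and heartbeats require no extra code here; they are enabled by default once the client and server are up.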

Usage 2

Used as a protocol framework, SOFABolt lets you reuse the basic communication model and the interfaces defined by the protocol skeleton, and then implement your own private protocol by defining custom Command types, Command processors, and codec handlers. The figure below shows the Command definition structure for RPC and messaging:

(figure: RPC and message Command structure)
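To make the idea concrete, here is a toy, self-contained frame codec in plain Java (ByteBuffer only). It is not Bolt's actual Command or codec API, only an illustration of what a custom protocol's codec defines: a magic byte identifying the protocol, a command type, and a length-prefixed payload. Validating the magic byte lets the decoder reject a corrupted stream instead of silently misreading it.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Toy frame layout: magic (1 byte) | commandType (1 byte) | contentLength (4 bytes) | content.
public class ToyCodec {
    static final byte MAGIC = (byte) 0xBC; // hypothetical protocol code

    public static byte[] encode(byte commandType, String content) {
        byte[] body = content.getBytes(StandardCharsets.UTF_8);
        ByteBuffer buf = ByteBuffer.allocate(1 + 1 + 4 + body.length);
        buf.put(MAGIC).put(commandType).putInt(body.length).put(body);
        return buf.array();
    }

    public static String decode(byte[] frame) {
        ByteBuffer buf = ByteBuffer.wrap(frame);
        if (buf.get() != MAGIC) {
            // Fail fast on a corrupted stream rather than keep misreading it.
            throw new IllegalStateException("bad magic, stream corrupted");
        }
        byte commandType = buf.get();
        int len = buf.getInt();
        byte[] body = new byte[len];
        buf.get(body);
        return commandType + ":" + new String(body, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        byte[] frame = ToyCodec.encode((byte) 1, "ping");
        System.out.println(ToyCodec.decode(frame)); // prints "1:ping"
    }
}
```

In SOFABolt the analogous pieces are your Command classes, the CommandEncoder/CommandDecoder pair, and the processors registered for each command type.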

4. Contributing

Contributions are welcome once the contributor agreement has been signed. For details, see the guide on how to contribute to SOFABolt.

Modifications and changes to the SOFABolt code must comply with the copyright agreement.

5. Multi-language Support

6. Users

蚂蚁集团 网商银行 恒生电子 数立信息
Paytm 天弘基金 **人保 信美相互
南京银行 民生银行 重庆农商行 中信证券
富滇银行 挖财 拍拍贷 OPPO金融
运满满 译筑科技 杭州米雅信息科技 邦道科技
申通快递 深圳大头兄弟文化 烽火科技 亚信科技
成都云智天下科技 上海溢米辅导 态赋科技 风一科技
武汉易企盈 极致医疗 京东 小象生鲜
北京云族佳 欣亿云网 山东网聪 深圳市诺安赛威
上扬软件 长沙点三 网易云音乐 虎牙直播
**移动 无纸科技 黄金钱包 独木桥网络
wueasy 北京攸乐科技 易宝支付 威马汽车
亿通国际 新华三 klilalagroup

7. Contact Us

  • WeChat

    • Official account: Financial-Grade Distributed Architecture (Antfin_SOFA): a technical exchange platform dedicated to first-class practice of distributed technology in financial scenarios, focused on the most cutting-edge, referenceable solutions and implementation paths in the fintech industry.

      (image: WeChat QR code)
  • DingTalk

    • DingTalk groups:

      • Financial-Grade Distributed Architecture SOFAStack group 1, group number: 23127468 (full)

      • Financial-Grade Distributed Architecture SOFAStack group 2, group number: 23195297 (full)

      • Financial-Grade Distributed Architecture SOFAStack group 3, group number: 23390449 (full)

      • Financial-Grade Distributed Architecture SOFAStack group 4, group number: 23372465 (full)

      • Financial-Grade Distributed Architecture SOFAStack group 5, group number: 30315793 (full)

      • Financial-Grade Distributed Architecture SOFAStack group 6, group number: 34197075

        (image: DingTalk QR code)
    • DingTalk group: SOFAStack gold-user service group. If you already use SOFAStack components in production, please let us know; we will invite you to this group for faster communication and more efficient support with production issues.


Contributors

chuailiwu, cytnju, dbl-x, dependabot[bot], easonzhang1992, evenljj, funky-eyes, glmapper, jervyshi, joecqupt, leeyazhou, lollapalooza1989, nobodyiam, orezzero, sanshengshui, seeflood, ujjboy, welkinxu, xmtsui, yangjinjue, yangl, zhaojigang


sofa-bolt's Issues

The default executor of ProcessorManager and the executors of RemotingProcessor can only be registered after RpcServer.start(); shouldn't registration happen before start()?

Your question

The default executor of ProcessorManager and the RemotingProcessor executors have to be registered after RpcServer.start(), as in the following code:

RpcServer server = new RpcServer(8888);
server.start(); // start the server
server.registerDefaultExecutor(RpcProtocol.PROTOCOL_CODE, Executors.newCachedThreadPool());
server.registerDefaultExecutor(RpcProtocolV2.PROTOCOL_CODE, Executors.newCachedThreadPool());
server.registerProcessor(RpcProtocol.PROTOCOL_CODE, RpcCommandCode.RPC_REQUEST, new RpcRequestProcessor(Executors.newCachedThreadPool()));

With this usage, registration has to happen after start(), because the registration code needs the ProcessorManager, which is only created by doInit() inside start().
If registration must happen after start(), then a request that arrives after the RpcServer has started but before registerDefaultExecutor is called will be handled by the original default thread pool, which is not what the user intended.

Your advice

Am I using it the wrong way? If this is indeed the intended usage, I suggest splitting RpcServer's init() and start(), providing something like the code below, so that thread pools can be configured after init() and start() can be called once configuration is done.

init() {
  doInit();
}
startOnly() {
  doStart();
}
start() {
  init();
  startOnly();
}

Environment

  • SOFABolt version: 1.5.1
  • JVM version (e.g. java -version):
  • OS version (e.g. uname -a):
  • Maven version:
  • IDE version:

misuse of HashMap API causes warn log to go missing

bug description

Thanks @YangLi for pointing out the problem:
(screenshot)
This bug is a misuse of the HashMap API: we should check whether the key (not the value) already exists, and then print a warn log to notify the user.

impact analysis

This bug has no harmful impact other than some warning logs going missing.
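The difference between the two checks can be shown with a minimal, self-contained sketch (class and method names are illustrative, not Bolt's actual registration code):

```java
import java.util.HashMap;
import java.util.Map;

public class DuplicateCheckDemo {

    // Buggy variant: checks the VALUE. A newly created processor instance is
    // never already present in the map, so the warning branch is unreachable.
    static boolean warnsBuggy(Map<String, Object> registry, String key, Object processor) {
        return registry.containsValue(processor);
    }

    // Fixed variant: a duplicate registration means the KEY is already mapped.
    static boolean warnsFixed(Map<String, Object> registry, String key, Object processor) {
        return registry.containsKey(key);
    }

    public static void main(String[] args) {
        Map<String, Object> registry = new HashMap<>();
        registry.put("RPC_REQUEST", new Object()); // first registration

        Object second = new Object(); // a second processor for the same command
        System.out.println(warnsBuggy(registry, "RPC_REQUEST", second)); // false: warn log missing
        System.out.println(warnsFixed(registry, "RPC_REQUEST", second)); // true: user gets warned
    }
}
```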

Questions about ProtocolCodeBasedDecoder

ProtocolCodeBasedDecoder
The code does not validate a start or end marker for each frame. If a frame is corrupted (e.g. the stream gets garbled), won't the decoder keep treating the stream on that connection as trustworthy and keep misreading it?

Support custom netty ChannelDuplexHandler

Is your feature request related to a problem? Please describe.

I want to implement the protocol with HTTP pipelining semantics, just like

https://github.com/spinscale/netty4-http-pipelining/blob/master/src/main/java/de/spinscale/netty/http/pipelining/HttpPipeliningHandler.java

But under the Bolt framework there is no way to add such a custom ChannelDuplexHandler to Netty.

Describe the solution you'd like

  1. Support adding a user-defined Netty ChannelDuplexHandler.
  2. Or provide an extendable Bolt pipeline handler that wraps the Netty handler, so that Netty is not exposed directly.


full epoll support wanted

advice

sofa-bolt is built on top of netty-4.1.25.Final, and Netty 4 supports epoll natively.
For performance, Bolt should support epoll as a user-selectable option.
So I propose adding an option, transport.use.epoll, so that users can decide whether to enable it on Linux systems.

Environment

  • SOFABOLT version: v1.4.2
  • OS version: Linux Family

Testing Bolt protocol handling on Android

Question

Can the Android side use Bolt for protocol communication?

Use case

A long-connection push service.
Implementation details: the server uses Bolt as its remoting framework and Android connects to the server over TCP. Does Android support parsing the Bolt protocol?

Environment

  • SOFABolt version: 1.4.2
  • JVM version (JDK 8):
  • OS version (CentOS7):
  • Maven version: Maven 3
  • IDE version: IDEA

change the priority of the global switch

Describe the bug
Currently, system settings for the global switch take effect before user settings.
As a result, if there are two RpcClient instances in one Java process, the configuration of the first instance, set via system properties, will change the default behavior of the second one.

@Override
public boolean isOn(int switchIndex) {
    return this.userSettings.get(switchIndex) || systemSettings.get(switchIndex);
}

Expected behavior

User settings take effect before system settings

Actual behavior

System settings take effect before user settings
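The problem and the proposed priority can be sketched with plain booleans (names are illustrative; this is not Bolt's GlobalSwitch class):

```java
public class SwitchDemo {

    // Current behavior: the system property is ORed in, so a system-level "on"
    // silently overrides an explicit user-level "off".
    static boolean isOnCurrent(boolean userOn, boolean systemOn) {
        return userOn || systemOn;
    }

    // Proposed behavior: an explicit user setting wins; the system property
    // only serves as the default when the user has not set anything.
    static boolean isOnProposed(Boolean userSetting, boolean systemOn) {
        return userSetting != null ? userSetting : systemOn;
    }

    public static void main(String[] args) {
        boolean systemOn = true; // a system property switched the feature on process-wide
        // ...but this client instance explicitly turned it off:
        System.out.println(isOnCurrent(false, systemOn));          // true: user intent ignored
        System.out.println(isOnProposed(Boolean.FALSE, systemOn)); // false: user intent honored
    }
}
```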

InvokeFuture should support generics

InvokeFuture should support generics, so that users can use Bolt as a basic communication framework without using the Bolt protocol.
(Users should not be forced to use ProtocolCode and RemotingCommand.)

The design of ConnectionFactory cannot support user-defined protocols

  1. Users cannot specify their own protocol
    • ConnectionFactory cannot use the user's Encoder and Decoder; users have to extend ConnectionFactory and implement their own
  2. The ConnectionFactory interface should not contain a registerUserProcessor method
    • ConnectionFactory should only be responsible for Connection-related logic, such as creating Connections
    • Processors are user logic and should be registered into Netty handlers

How to extend custom configuration items based on ConfigItem?

Your question

(screenshot)

  1. Suppose we want to add a custom configuration item NETTY_TCP_NODELAY. Since ConfigItem is an enum class and cannot be extended, how should we add a custom item?

  2. ConfigItem is redundant with the Configs class; I suggest removing the ConfigItem class,
    (screenshot)

and filling the second parameter of ConfigContainer with the String constants from Configs.

  • SOFABolt version: 1.5.1
  • JVM version (e.g. java -version):
  • OS version (e.g. uname -a):
  • Maven version:
  • IDE version:

refactor initRpcRemoting() when initializing RpcServer

The initRpcRemoting() method of RpcServer previously required an RpcRemoting argument, but it actually does not need one. We can simply provide a default implementation and change the visibility to protected, so that subclasses can override it to create a specific RpcRemoting instance.

The AbstractBatchDecoder batch decoder may have a bug under high concurrency

(screenshot)

I built a simple RPC test with Dubbo, replacing ByteToMessageDecoder with AbstractBatchDecoder and checking in the handler whether the message is a List or a single response. Under high-concurrency load testing, a few requests always fail with an unknown exception; under low concurrency it is stable. Does the project provide an official load-testing script? How much QPS improvement can AbstractBatchDecoder actually deliver, and is there a known response-failure issue?

enable PooledByteBufAllocator

Is your feature request related to a problem? Please describe.
Netty now uses PooledByteBufAllocator as the default allocator.

Describe the solution you'd like
Change the default config in Bolt to true:

public static final String NETTY_BUFFER_POOLED         = "bolt.netty.buffer.pooled";
public static final String NETTY_BUFFER_POOLED_DEFAULT = "false";

Import error in Eclipse

: is an invalid character in resource name 'com.alipay.sofa:bolt'.
Importing the project into Eclipse reports this error.

Questions about implementation details

  1. Making RemotingServer an interface would be better than the current abstract class (clearer logic)
  2. The code has many warnings
  3. There are some spelling mistakes

timeout fail-fast switch of UserProcessor does not work

bug description

RemotingContext
(screenshot)
This bug leads to an incorrect assignment of a member variable.

  • This issue should have been caught by PMD, but the UnusedFormalParameter rule failed to discover it; I have filed an issue in the PMD repo
  • The test case for this feature also has problems and needs to be fixed

impact analysis

This bug causes the timeout fail-fast switch not to work. If you use this feature, pay attention to our release notes and upgrade.

About logging

Running the program with IntelliJ IDEA on Windows 7, I find that only some of the logs are printed.
Files that do not print logs define their Logger as:
private static final Logger logger = LoggerFactory.getLogger(XXX.class); // where XXX is the Java class name
Files that do print logs define their Logger as:
private static final Logger logger = BoltLoggerFactory.getLogger("XXXXX"); // where XXXXX is a logger name defined in log-conf.xml

Question: how can I see all of the logs?

The handbook's description of non-warmed-up connections is wrong for the 1.5.1 branch

https://github.com/alipay/sofa-bolt/wiki/SOFA-Bolt-Handbook

2.3 Establishing multiple connections and connection warm-up
Generally, for point-to-point direct communication between a client and a server, one connection per IP is enough; it satisfies the throughput and concurrency needs of typical business communication. In some scenarios, however, such as connections that go through an LVS VIP or an F5 device rather than direct point-to-point links, multiple connections are established per URL for load balancing and fault tolerance. Multiple connections can be requested by adding parameters to the URL passed at invocation time, e.g. 127.0.0.1:12200?_CONNECTIONNUM=30&_CONNECTIONWARMUP=true, meaning that 30 connections should be established to this IP address and that they should be warmed up. The difference between warming up and not warming up is:

Warm-up: the first call (e.g. a Sync call) establishes all 30 connections
No warm-up: each call creates one connection, until all 30 have been created

The no-warm-up behavior should actually be: on the first call, one connection is created synchronously, and then an asynchronous thread pool creates the remaining connections.

Requiring a latch for Callback-style asynchronous calls is inconvenient

The callback-style asynchronous invocation must be combined with a final CountDownLatch, otherwise you get:
com.alipay.remoting.exception.ConnectionClosedException: Connection closed when invoke with callback. The address is 127.0.0.1:8999
This is inconvenient; is there any way to improve it?

return more accurate exception info to the client?

In

com.alipay.remoting.rpc.RpcResponseResolver#preProcess

Bolt only returns a custom errMsg, without the real exception:

case SERVER_SERIAL_EXCEPTION:
    msg = "Server serialize response exception! the address is " + addr + ", id="
          + responseCommand.getId() + ", serverSide=true";
    e = new SerializationException(msg, true);
    break;
case SERVER_DESERIAL_EXCEPTION:
    msg = "Server deserialize request exception! the address is " + addr + ", id="
          + responseCommand.getId() + ", serverSide=true";
    e = new DeserializationException(msg, true);
    break;

When the client receives the exception info, users need to log in to the server to determine the real exception.

server start throws exception?

com.alipay.remoting.RemotingServer

Currently RemotingServer does not throw exceptions to the outside.

/**
 * Start the server with ip and port.
 */
public boolean start(String ip) {
    this.init();
    if (started.compareAndSet(false, true)) {
        try {
            logger.warn("Server started on " + ip + ":" + port);
            return this.doStart(ip);
        } catch (Throwable t) {
            started.set(false);
            logger.error("ERROR: Failed to start the Server!", t);
            return false;
        }
    } else {
        logger.error("ERROR: The server has already started!");
        return false;
    }
}

So the RPC framework cannot get accurate information about the error; we can only tell users to look at the Bolt log, like this:

com.alipay.sofa.rpc.core.exception.SofaRpcRuntimeException: Failed to start bolt server, see more detail from bolt log.
	at com.alipay.sofa.rpc.server.bolt.BoltServer.start(BoltServer.java:112) ~[sofa-rpc-all-5.4.0.jar:5.4.0]
	at com.alipay.sofa.rpc.boot.container.ServerConfigContainer.startServers(ServerConfigContainer.java:78) ~[rpc-sofa-boot-starter-5.4.0.jar:5.4.0]
	at com.alipay.sofa.rpc.boot.context.SofaBootRpcStartListener.onApplicationEvent(SofaBootRpcStartListener.java:68) ~[rpc-sofa-boot-starter-5.4.0.jar:5.4.0]
	at com.alipay.sofa.rpc.boot.context.SofaBootRpcStartListener.onApplicationEvent(SofaBootRpcStartListener.java:38) ~[rpc-sofa-boot-starter-5.4.0.jar:5.4.0]
	at org.springframework.context.event.SimpleApplicationEventMulticaster.invokeListener(SimpleApplicationEventMulticaster.java:166) ~[spring-context-4.3.4.RELEASE.jar:4.3.4.RELEASE]
	at org.springframework.context.event.SimpleApplicationEventMulticaster.multicastEvent(SimpleApplicationEventMulticaster.java:138) ~[spring-context-4.3.4.RELEASE.jar:4.3.4.RELEASE]

AbstractRemotingServer's stop operation is expected to run only once, but can actually run twice

Describe the bug
AbstractRemotingServer#stop can be called multiple times without throwing the expected IllegalStateException.

Expected behavior

The first call to AbstractRemotingServer#stop executes the stop logic.
The second call to AbstractRemotingServer#stop throws IllegalStateException.

Actual behavior

The first call to AbstractRemotingServer#stop executes the stop logic.
The second call to AbstractRemotingServer#stop executes the stop logic again.
The third and subsequent calls to AbstractRemotingServer#stop throw IllegalStateException.

Steps to reproduce

Run the RemotingServerTest#testStopRepeatedly unit test.

Minimal yet complete reproducer code (or GitHub URL to code)

class AbstractRemotingServer
@Override
public boolean stop() {
    // The logical OR short-circuits: when the first condition is true the second
    // is not evaluated, so on the second call started.compareAndSet(true, false)
    // returns true and doStop() can be executed again.
    if (inited.compareAndSet(true, false) || started.compareAndSet(true, false)) {
        return this.doStop();
    } else {
        throw new IllegalStateException("ERROR: The server has already stopped!");
    }
}
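The short-circuit can be reproduced in isolation with the same two-flag pattern (inited, started) and AtomicBoolean; this is a stripped-down sketch, not the real AbstractRemotingServer:

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class StopTwiceDemo {
    private final AtomicBoolean inited = new AtomicBoolean(true);
    private final AtomicBoolean started = new AtomicBoolean(true);
    int doStopCalls; // stands in for doStop()

    public boolean stop() {
        // First call: inited flips true->false, || short-circuits, started stays true.
        // Second call: inited CAS fails, but started now flips true->false,
        // so the stop logic runs a second time instead of throwing.
        if (inited.compareAndSet(true, false) || started.compareAndSet(true, false)) {
            doStopCalls++;
            return true;
        }
        throw new IllegalStateException("ERROR: The server has already stopped!");
    }

    public static void main(String[] args) {
        StopTwiceDemo server = new StopTwiceDemo();
        server.stop(); // executes the stop logic
        server.stop(); // executes the stop logic AGAIN: the bug
        System.out.println(server.doStopCalls); // prints 2
        try {
            server.stop(); // only the third call throws
        } catch (IllegalStateException expected) {
            System.out.println("third call throws");
        }
    }
}
```

A fix would be to clear both flags together (e.g. require started to flip and then also reset inited), so only the first call reaches the stop logic.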

Environment

  • SOFABolt version: 1.5.0
  • JVM version (e.g. java -version):
  • OS version (e.g. uname -a):
  • Maven version:
  • IDE version:

Code style issues

Describe the bug
The code style does not follow the conventions; for example, the methods of the Scannable interface are declared public, and the methods of RemotingCommand lack necessary comments.


The demo server does not start

Running the RpcServerDemoByMain program reports:
Sofa-Middleware-Log SLF4J : Actual binding is of type [ com.alipay.remoting Log4j ]
INFO - Sofa-Middleware-Log SLF4J : Actual binding is of type [ com.alipay.remoting Log4j ]
server start failed!

support control command

@ujjboy raised the need for version negotiation; at runtime, the client and the server also need to exchange control-type information.
A unified set of control commands could be defined to support such requirements.

无钩 also proposed the need for an instruction similar to HTTP/2's GOAWAY frame.

Concurrently establishing and actively closing the same Connection may block the I/O thread

Describe the bug

  1. RunStateRecordedFutureTask#getAfterRun may block on super.get() if the Callable executed by the RunStateRecordedFutureTask is itself blocked
  2. The Callable actually executed is ConnectionPoolCall, which performs connection establishment and may block waiting for Netty's connect result
  3. So ConnectionPoolCall may be waiting for the I/O thread to finish connecting, while the I/O thread is waiting for ConnectionPoolCall's result: a deadlock

Expected behavior

Actual behavior

  • The I/O thread is blocked getting the Callable's result, and the connecting thread is blocked getting the connect result

(screenshot)

(screenshot)

Steps to reproduce

  • Concurrently connect and actively close, so that the task object obtained in channelInactive is the new task created by a subsequent connect, whose hasRun state is already true although its Callable has not yet executed

(screenshot)

Minimal yet complete reproducer code (or GitHub URL to code)

Environment

  • SOFABolt version: 1.4.x 1.5.x
  • JVM version (e.g. java -version):
  • OS version (e.g. uname -a):
  • Maven version:
  • IDE version:

Could you provide a JUnit test for a non-RPC protocol?

Our production project uses a socket protocol; we own one end (server or client) and the protocol is agreed upon by both parties. Defining a private protocol on top of this framework, including Command types, Command processors, and codec handlers, is quite difficult. Could you provide a unit-test module for a custom protocol as a reference? Thanks!

keep the behavior of the start and stop methods of RemotingServer consistent

problem

Currently, the stop method of RemotingServer throws IllegalStateException on a repeated call,
but the start method of RemotingServer does not throw IllegalStateException on a repeated call.

how to fix

Change the logic of starting RemotingServer: if the start fails, return false; but if start is called repeatedly, throw IllegalStateException to warn that the server has already been started.

The lifecycle abstractions in the project are inconsistent

Describe the bug

  • RpcClient is started with init and shut down with shutdown
  • RemotingServer is started with start and shut down with stop

Why are the lifecycle abstractions inconsistent within one project?

Expected behavior

Provide a unified lifecycle interface abstraction


Adjustments to the ByteBuf pooling switch

Reported by @hongweiyi:

Since 4.1.x, Netty enables PooledByteBufAllocator by default; see the release note "PooledByteBufAllocator as the default allocator".
Bolt's existing system switch therefore no longer has any effect. It needs to be changed into a switch that supports disabling pooling, on both the client and the server side.
From:

boolean pooledBuffer = SystemProperties.netty_buffer_pooled();
if (pooledBuffer) {
    this.bootstrap.option(ChannelOption.ALLOCATOR, PooledByteBufAllocator.DEFAULT)
        .childOption(ChannelOption.ALLOCATOR, PooledByteBufAllocator.DEFAULT);
}

To:

boolean pooledBuffer = SystemProperties.netty_buffer_pooled();
if (!pooledBuffer) {
    this.bootstrap.option(ChannelOption.ALLOCATOR, UnpooledByteBufAllocator.DEFAULT)
        .childOption(ChannelOption.ALLOCATOR, UnpooledByteBufAllocator.DEFAULT);
}

workerGroup in ConnectionFactory is not globally static

Describe the bug
The workerGroup in ConnectionFactory is not a global static, so creating multiple Bolt clients creates too many threads.

