ViveNAS's Introduction

ViveNAS

What's ViveNAS

ViveNAS is a Network Attached Storage (NAS) filesystem that currently provides NFS service.

The aim of ViveNAS is to provide NAS storage with wide media adaptability, so that long-term data can be stored at very low cost, yet be activated quickly and served with very high performance when it is occasionally accessed.

Characteristics of ViveNAS:

  • Pursue a dynamic balance between performance and cost by combining different storage media
  • Solve the problems of long-term data storage, supporting media such as tape and SMR HDD as well as erasure coding (EC)
  • Be ready for CXL memory pooling and SCM technologies; whenever they become available, ViveNAS can leverage them to provide outstanding performance
  • Solve the problem of small-file storage
  • Provide a controlled data distribution policy for enterprise storage, which helps solve problems such as scaling, rebalancing, and recovery
  • Be a green storage system that makes full use of resources such as RAM and CPU, which are over-provisioned in modern data centers

ViveNAS depends on two core technologies to provide the abilities above.

Core tech 1: the PureFlash ServerSAN system

PureFlash provides all the features related to the distributed system, including HA, fault tolerance, snapshot, and clone.

PureFlash is a distributed ServerSAN storage system whose design philosophy comes from a fully FPGA-implemented all-flash system, so PureFlash has a very simple IO stack.

Unlike other distributed storage systems, which are based on hash algorithms, data distribution in PureFlash is fully controllable. This provides the stability needed for enterprise storage, since a "human being" rather than a "machine" makes the final decision. (Refer to github.com/cocalele/PureFlash for more.)

PureFlash supports managing different media in one cluster, including NVMe SSD, HDD, and tape, and supports access through an AOF (append-only file) interface.

All the above features give solid support to ViveNAS.

Core tech 2: the LSM-tree-based ViveFS

ViveFS is a userspace filesystem based on the LSM tree. An LSM tree has two major characteristics: data is stored in multiple levels, and each level is written sequentially only.

ViveFS puts level 0 into DRAM or a CXL memory pool, while the other levels are placed on different media provided by PureFlash. Data at all levels is highly available.

The second characteristic of the LSM tree, sequential writing, makes it very suitable for SMR HDD and tape, so ViveNAS can put cold data on cheap media for long-term storage. This is one of the major aims of ViveNAS. Sequential writing is also very friendly to erasure coding (EC), which lowers cost further.
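
To make the idea concrete, the sketch below shows how file data could be stored as key-value records in the LSM K-V layer (RocksDB) that sits on PureFlash AOF volumes. The key layout, function names, and extent granularity here are purely illustrative assumptions; the actual ViveFS encoding is defined in the ViveNAS source tree.

#include <cstdint>
#include <cstdio>
#include <string>
#include <rocksdb/db.h>

// Illustrative sketch only: one file extent stored as one K-V record.
// Key   = inode number + extent index (NOT the real ViveFS key format).
// Value = extent data; RocksDB writes each level sequentially into SST
//         files, which ViveNAS places on PureFlash AOF volumes, so cold
//         levels can live on SMR HDD or tape.
static std::string extent_key(uint64_t ino, uint64_t extent_idx)
{
	char buf[40];
	int n = snprintf(buf, sizeof(buf), "%016llx:%016llx",
	                 (unsigned long long)ino, (unsigned long long)extent_idx);
	return std::string(buf, n);
}

rocksdb::Status put_extent(rocksdb::DB* db, uint64_t ino, uint64_t extent_idx,
                           const char* data, size_t len)
{
	return db->Put(rocksdb::WriteOptions(),
	               extent_key(ino, extent_idx), rocksdb::Slice(data, len));
}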

Architecture

+-------------------+
|Ganesha-NAS portal |
+-------------------+
         |
         |
+--------v----------+
|ViveNAS FSAL       |
+-------------------+
         |
         |
+--------v----------+
| LSM K-V (rocksdb) |
+-------------------+
         |
         |
+--------v----------+
| PureFlash (AOF)   |
+-------------------+
         |
         |
+--------v----------+
| Multiple Media    |
+-------------------+

Build and run

Set up the build environment from scratch

  1. Follow the guide in PureFlash/build_and_run.txt to set up a compile environment for PureFlash
  2. For Ubuntu, run the following commands to install additional dependencies:
  # apt install liburcu-dev  bison flex libgflags-dev  libblkid-dev libzstd-dev 
  # to run rocksdb db_bench, also install:
  # apt install time bc

To simplify the compiling process, some third-party libraries are prebuilt into binaries. For now only Ubuntu 20.04 is supported.

Use the container for build

It may be a bit complicated to set up the build environment from scratch, since ViveNAS/PureFlash uses many third-party libraries. I strongly suggest you use the container for building.

  1. Pull the container image

# docker pull pureflash/vivenas-dev:1.9

All the dependencies and build tools have already been deployed in this container.

  2. Clone the code

  # git clone https://github.com/cocalele/ViveNAS.git
  3. Build
  # cd ViveNAS; VIVENAS_HOME=$(pwd)
  # git submodule update --init --recursive
  # cd rocksdb
  # mkdir build; cd build
  # cmake -S .. -GNinja -DCMAKE_BUILD_TYPE=Release -DUSE_RTTI=1 -DWITH_ZSTD=ON -B . -DROCKSDB_PLUGINS=pfaof -DPF_INC=/root/ViveNAS/PureFlash/common/include/ -DPF_LIB=/root/ViveNAS/PureFlash/build/bin
  # ninja
  
  # cd $VIVENAS_HOME/PureFlash/
  # git submodule update --init --recursive
  # mkdir build; cd build
  # cmake -GNinja -DCMAKE_BUILD_TYPE=Debug ..
  # ninja

  # cd $VIVENAS_HOME
  # mkdir build; cd build
  # cmake .. -GNinja -DCMAKE_BUILD_TYPE=Debug
  # ninja
  4. Run ganesha:
#  mkdir -p /var/lib/nfs/ganesha  /var/run/ganesha /usr/lib/ganesha
# apt install liburcu6
# apt-get install libgflags-dev 
# ln -s /root/v2/ViveNAS/out/build/Linux-GCC-Debug/bin/libfsalvivenas.so /usr/lib/ganesha/libfsalvivenas.so
# export LD_LIBRARY_PATH=/root/v2/nfs-ganesha/src/libntirpc/src:$LD_LIBRARY_PATH
# mkfs.vn /vivenas_a
# LD_PRELOAD=/usr/lib/x86_64-linux-gnu/librdmacm.so.1.2.28.0 ../nfs-ganesha/build/ganesha.nfsd -F  -f ./ganesha-vivenas.conf -L /dev/stderr -p /var/run/ganesha.pid

Performance estimation

According to the RocksDB benchmarks: https://github.com/facebook/rocksdb/wiki/Performance-Benchmarks

ViveNAS's People

Contributors

cocalele, glorycloud


ViveNAS's Issues

Can't change file/dir mode

Changing a file or directory's mode fails, as below:

# chmod 777 /mnt/dd/
chmod: changing permissions of '/mnt/dd/': Operation not permitted

MDCache reports `RW LOCK :CRIT :Error 35, write locking`

After the open2 function is called, `RW LOCK :CRIT :Error 35, write locking` is reported. Moreover, this error is reported from the MDCache code, and at that point the call stacks have all left the ViveNAS code. The following attempts have been made so far:

  • Switched from Ganesha v4 to v3, and implemented open2 by imitating the FSAL_CEPH code.

  • Found that the FSAL_MEM module shipped with Ganesha v3 could not start normally; the problem seemed to be diverging, so decided to switch back to Ganesha v4.

  • The FSAL_MEM module of Ganesha v4 works normally.

  • Took the MEM FSAL code and made only minimal adjustments.
    None of the methods above worked, so decided on the following approach:

  • Modified the code inside the Ganesha project itself. The base version runs, but as soon as the vivenas library is linked in, nfs mount simply hangs without printing any error message.
    After many attempts, it turned out that the problem is the vivenas.so file itself being an FSAL plugin containing FSAL registration code. After removing the FSAL registration code and recompiling the .so file, FSAL_MEM responds to mount normally.

Which cache to use for read

There are two kinds of cache that could be used:

  • the RocksDB block_cache defined in BlockBasedTableOptions
  • the read_cache defined in PfAofRandomFile and PfAofSeqFile; this cache is of type AofWindowCache, provided by PureFlash AOF.

The question is: should we use both of them, or only one, and if only one, which?
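
For reference, the RocksDB side of the choice is configured through BlockBasedTableOptions (standard RocksDB API). Whether ViveNAS should actually enable it, and the 512 MB size below, are assumptions for illustration only:

#include <rocksdb/cache.h>
#include <rocksdb/options.h>
#include <rocksdb/table.h>

// Sketch of enabling the RocksDB block_cache (standard RocksDB API).
// Whether to rely on this cache, on PureFlash's AofWindowCache, or on both
// is exactly the open question; the 512 MB size is an arbitrary example.
rocksdb::Options options_with_block_cache()
{
	rocksdb::BlockBasedTableOptions table_options;
	table_options.block_cache = rocksdb::NewLRUCache(512u << 20);
	rocksdb::Options options;
	options.table_factory.reset(
		rocksdb::NewBlockBasedTableFactory(table_options));
	return options;
}

If both caches stay enabled, the same data may be held twice (decompressed blocks in RocksDB plus raw AOF windows in PureFlash), so the trade-off is likely memory efficiency versus simplicity of the read path.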

Segfault in write file

A segfault was encountered in the mem_write2 function because the ViveFile* in the object handle being written was NULL.

We have the following structures for the ViveNAS open path:

struct vn_fd {
	struct state_t state;
	/** The open and share mode etc. This MUST be first in every
	 *  file descriptor structure.
	 */
	fsal_openflags_t openflags;
	struct ViveFile* vf;    //segfault caused by this NULL
};


struct mem_fsal_obj_handle {
		struct {
			struct fsal_share share;
			struct vn_fd fd;  //HOPE: vf in fd should be valid after open. 
		} mh_file;
};

In the open2 function, the logic is as follows:

fsal_status_t mem_open2(struct fsal_obj_handle *obj_hdl,
			struct state_t *state,
			fsal_openflags_t openflags,
			enum fsal_create_mode createmode,
			const char *name,
			struct fsal_attrlist *attrs_set,
			fsal_verifier_t verifier,
			struct fsal_obj_handle **new_obj,
			struct fsal_attrlist *attrs_out,
			bool *caller_perm_check)
{
	if (state != NULL) {
		my_fd = container_of(state, struct vn_fd, state);
	}

	if (name == NULL) {
		if (state != NULL) {
                          // my_fd is NOT changed; the fd derived from state is used
		} else {
			my_fd = &myself->mh_file.fd;  // use the fd taken from the handle
		}
 	}
	mem_open_my_fd(myself, my_fd, openflags);
 	
}

The open2 function was called with

open file name:(null) myslef:0x7f38c821ea90 myself_name:f2 state:0x7f38c811d4f0

That is to say, the state was used as the fd and opened, while the fd in the input parameter obj_hdl was not touched.
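
One possible direction, purely as a hypothesis and not a verified fix: after the state-derived fd has been opened, mirror the opened ViveFile* into the handle's embedded fd, so that later calls such as mem_write2, which read myself->mh_file.fd.vf, do not see NULL:

/* Hypothetical sketch only (based on the structs above, not verified):
 * after mem_open_my_fd() succeeds on the state-derived fd, copy the opened
 * ViveFile* into the handle's own fd so mh_file.fd.vf is valid for writes. */
static void vn_sync_handle_fd(struct mem_fsal_obj_handle *myself,
                              struct vn_fd *opened_fd)
{
	if (opened_fd != &myself->mh_file.fd) {
		myself->mh_file.fd.vf = opened_fd->vf;
		myself->mh_file.fd.openflags = opened_fd->openflags;
	}
}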

RocksDB crash on file overwrite

Can be reproduced with these steps:

  1. date > /mnt/f1.txt, where f1.txt is a file that existed before the mount
  2. umount /mnt
  3. Ctrl-C to stop ganesha; during the last stage of ganesha's exit, it crashes

The call stack is:

#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:49
#1 0x00007f089b4b2864 in __GI_abort () at abort.c:79
#2 0x00007f089b4b2749 in __assert_fail_base (fmt=0x7f089b63e458 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n", assertion=0x7f089926504c "operand_list.size() >= 2", file=0x7f08992
65037 "db/merge_operator.cc", line=33, function=) at assert.c:92
#3 0x00007f089b4c4a96 in __GI___assert_fail (assertion=0x7f089926504c "operand_list.size() >= 2", file=0x7f0899265037 "db/merge_operator.cc", line=33, function=0x7f0899264fa0 "
virtual bool rocksdb::MergeOperator::PartialMergeMulti(const rocksdb::Slice&, const std::dequerocksdb::Slice&, std::string*, rocksdb::Logger*) const") at assert.c:101
#4 0x00007f0898ce3ff2 in rocksdb::MergeOperator::PartialMergeMulti (this=0x563f693d6a50, key=..., operand_list=std::deque with 1 element = {...}, new_value=0x7f08697f5210, logg
er=0x0) at db/merge_operator.cc:33
#5 0x00007f0898ce0456 in rocksdb::MergeHelper::MergeUntil (this=0x7f08697f57f0, iter=0x7f08697f5e40, range_del_agg=0x7f08400019b0, stop_before=0, at_bottom=false, allow_data_in
_errors=false, blob_fetcher=0x0, prefetch_buffers=0x0, c_iter_stats=0x7f08697f6248) at db/merge_helper.cc:394
#6 0x00007f0898af3d20 in rocksdb::CompactionIterator::NextFromInput (this=0x7f08697f5e40) at db/compaction/compaction_iterator.cc:850
#7 0x00007f0898af145e in rocksdb::CompactionIterator::SeekToFirst (this=0x7f08697f5e40) at db/compaction/compaction_iterator.cc:146
#8 0x00007f0898a86ccc in rocksdb::BuildTable (dbname="/vivenas_a", versions=0x563f693ff5a0, db_options=..., tboptions=..., file_options=..., table_cache=0x563f69412120, iter=0x
7f08697f6840, range_del_iters=std::vector of length 0, capacity 0, meta=0x7f08697f7d28, blob_file_additions=0x7f08697f6550, snapshots=std::vector of length 0, capacity 0, earlie
st_write_conflict_snapshot=72057594037927935, snapshot_checker=0x0, paranoid_file_checks=false, internal_stats=0x563f69640470, io_status=0x7f08697f65b0, io_tracer=std::shared_pt
r (use count 33, weak count 0) = {...}, blob_creation_reason=rocksdb::BlobFileCreationReason::kFlush, event_logger=0x563f693de110, job_id=3, io_priority
=rocksdb::Env::IO_HIGH, table_properties=0x7f08697f7a80, write_hint=rocksdb::Env::WLTH_MEDIUM, full_history_ts_low=0x0, blob_callback=0x563f693df130, num_input_entries=0x7f08697
f64c0, memtable_payload_bytes=0x7f08697f64c8, memtable_garbage_bytes=0x7f08697f64d0) at db/builder.cc:201
#9 0x00007f0898c89a14 in rocksdb::FlushJob::WriteLevel0Table (this=0x7f08697f7910) at db/flush_job.cc:897
#10 0x00007f0898c85f6e in rocksdb::FlushJob::Run (this=0x7f08697f7910, prep_tracker=0x563f693defb0, file_meta=0x7f08697f73e0, switched_to_mempurge=0x7f08697f730d) at db/flush_jo
b.cc:265
#11 0x00007f0898bcc47c in rocksdb::DBImpl::FlushMemTableToOutputFile (this=0x563f693dd8c0, cfd=0x563f6963fa40, mutable_cf_options=..., made_progress=0x7f08697f8713, job_context=
0x7f08697f8770, superversion_context=0x7f08400012d0, snapshot_seqs=std::vector of length 0, capacity 0, earliest_write_conflict_snapshot=72057594037927935, snapshot_checker=0x0,
log_buffer=0x7f08697f8970, thread_pri=rocksdb::Env::HIGH) at db/db_impl/db_impl_compaction_flush.cc:232
#12 0x00007f0898bcce76 in rocksdb::DBImpl::FlushMemTablesToOutputFiles (this=0x563f693dd8c0, bg_flush_args=..., made_progress=0x7f08697f8713, job_context=0x7f08697f8770, log_buf
fer=0x7f08697f8970, thread_pri=rocksdb::Env::HIGH) at db/db_impl/db_impl_compaction_flush.cc:362
#13 0x00007f0898bdacc5 in rocksdb::DBImpl::BackgroundFlush (this=0x563f693dd8c0, made_progress=0x7f08697f8713, job_context=0x7f08697f8770, log_buffer=0x7f08697f8970, reason=0x7f08697f8714, thread_pri=rocksdb::Env::HIGH) at db/db_impl/db_impl_compaction_flush.cc:2731
#14 0x00007f0898bdb1fe in rocksdb::DBImpl::BackgroundCallFlush (this=0x563f693dd8c0, thread_pri=rocksdb::Env::HIGH) at db/db_impl/db_impl_compaction_flush.cc:2771
#15 0x00007f0898bda36a in rocksdb::DBImpl::BGWorkFlush (arg=0x563f69438fe0) at db/db_impl/db_impl_compaction_flush.cc:2597
#16 0x00007f0899070ce3 in std::__invoke_impl<void, void (&)(void), void*&> (__f=@0x563f69510210: 0x7f0898bda2e8 rocksdb::DBImpl::BGWorkFlush(void*)) at /usr/include/c++/9/bi
ts/invoke.h:60
#17 0x00007f089907077e in std::__invoke<void (&)(void), void*&> (__fn=@0x563f69510210: 0x7f0898bda2e8 rocksdb::DBImpl::BGWorkFlush(void*)) at /usr/include/c++/9/bits/invoke.
h:95
#18 0x00007f089906fdb5 in std::_Bind<void ((void))(void*)>::__call<void, , 0ul>(std::tuple<>&&, std::_Index_tuple<0ul>) (this=0x563f69510210, __args=...) at /usr/include/c++/9
/functional:400
#19 0x00007f089906ef0b in std::_Bind<void ((void))(void*)>::operator()<, void>() (this=0x563f69510210) at /usr/include/c++/9/functional:484
#20 0x00007f089906dc33 in std::_Function_handler<void (), std::_Bind<void ((void))(void*)> >::_M_invoke(std::_Any_data const&) (__functor=...) at /usr/include/c++/9/bits/std_f
unction.h:300
#21 0x00007f08998787c6 in std::function<void ()>::operator()() const (this=0x7f08697f9440) at /usr/include/c++/9/bits/std_function.h:688
#22 0x00007f0899069fee in rocksdb::ThreadPoolImpl::Impl::BGThread (this=0x563f693a4350, thread_id=1) at util/threadpool_imp.cc:266
#23 0x00007f089906a181 in rocksdb::ThreadPoolImpl::Impl::BGThreadWrapper (arg=0x563f693db2d0) at util/threadpool_imp.cc:307
#24 0x00007f08990716e3 in std::__invoke_impl<void, void ()(void), rocksdb::BGThreadMetadata*> (__f=@0x563f693db490: 0x7f089906a06e <rocksdb::ThreadPoolImpl::Impl::BGThreadWrap
per(void*)>) at /usr/include/c++/9/bits/invoke.h:60
#25 0x00007f089907162f in std::__invoke<void ()(void), rocksdb::BGThreadMetadata*> (__fn=@0x563f693db490: 0x7f089906a06e <rocksdb::ThreadPoolImpl::Impl::BGThreadWrapper(void*)

) at /usr/include/c++/9/bits/invoke.h:95
#26 0x00007f089907157f in std::thread::_Invoker<std::tuple<void ()(void), rocksdb::BGThreadMetadata*> >::_M_invoke<0ul, 1ul> (this=0x563f693db488) at /usr/include/c++/9/thread
:244
#27 0x00007f0899071521 in std::thread::_Invoker<std::tuple<void ()(void), rocksdb::BGThreadMetadata*> >::operator() (this=0x563f693db488) at /usr/include/c++/9/thread:251
#28 0x00007f08990714f2 in std::thread::_State_impl<std::thread::_Invoker<std::tuple<void ()(void), rocksdb::BGThreadMetadata*> > >::_M_run (this=0x563f693db480) at /usr/includ
e/c++/9/thread:195
#29 0x00007f0897ad2de4 in ?? () from /lib/x86_64-linux-gnu/libstdc++.so.6
#30 0x00007f089b67f590 in start_thread (arg=0x7f08697fa640) at pthread_create.c:463
#31 0x00007f089b5a5223 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Aof magic error

[DEBU 2023-03-26 20:30:42.852]Create event queue:ctrl_q with fd:1582(../common/src/pf_event_queue.cpp:23:init)
[DEBU 2023-03-26 20:30:42.852]Create event queue:vol_proc with fd:1584(../common/src/pf_event_queue.cpp:23:init)
[INFO 2023-03-26 20:30:42.852]Succeeded open volume /vivenas_a/000736.sst@HEAD(0x307000000), meta_ver=0, io_depth=32(../common/src/pf_aof.cpp:239:pf_open_aof)
[INFO 2023-03-26 20:30:42.853]Connecting to 127.0.0.1:49162(../common/src/pf_tcp_connection.cpp:555:connect_to_server)
[INFO 2023-03-26 20:30:42.853]waiting connect to 127.0.0.1:49162(../common/src/pf_tcp_connection.cpp:567:connect_to_server)
[INFO 2023-03-26 20:30:42.853]TCP connect success:127.0.0.1(../common/src/pf_tcp_connection.cpp:577:connect_to_server)
[DEBU 2023-03-26 20:30:42.853]Handshake complete, send iodepth:32, receive iodepth:32(../common/src/pf_tcp_connection.cpp:632:connect_to_server)
[DEBU 2023-03-26 20:30:42.853]Create event queue:net_send_q with fd:1586(../common/src/pf_event_queue.cpp:23:init)
[DEBU 2023-03-26 20:30:42.853]Create event queue:net_recv_q with fd:1587(../common/src/pf_event_queue.cpp:23:init)
[ERRO 2023-03-26 20:30:42.853]Aof magic error, not a AoF file. rc:4096 volume:(../common/src/pf_aof.cpp:263:open)
[DEBU 2023-03-26 20:30:42.853]close aof:/vivenas_a/000736.sst len:0(../common/src/pf_aof.cpp:80:~PfAof)

Need a Vector Slice

Currently, in the write operation we must copy the data and the extent head into a temporary buffer to form a single slice. We could avoid this copy if there were a vector slice similar to iov.
So we need to implement a vector slice, so that extent_head and the data can be combined without a memcpy.
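
For reference, RocksDB already offers a gather-style interface, SliceParts, on the WriteBatch path, which is essentially a vector slice like iov. Whether it fits ViveNAS's actual write path is an open question; the function below is only a sketch with illustrative names:

#include <rocksdb/db.h>
#include <rocksdb/slice.h>
#include <rocksdb/write_batch.h>

// Sketch: write extent_head + data as one value without gathering them into
// a temporary buffer first, using RocksDB's SliceParts (a gather list).
rocksdb::Status put_gathered(rocksdb::DB* db, rocksdb::ColumnFamilyHandle* cf,
                             const rocksdb::Slice& key,
                             const rocksdb::Slice& extent_head,
                             const rocksdb::Slice& data)
{
	rocksdb::Slice key_parts[1] = {key};
	rocksdb::Slice val_parts[2] = {extent_head, data};
	rocksdb::WriteBatch batch;
	batch.Put(cf, rocksdb::SliceParts(key_parts, 1),
	          rocksdb::SliceParts(val_parts, 2));
	return db->Write(rocksdb::WriteOptions(), &batch);
}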

High availability (HA) solution

Currently ViveNAS is not a 100% HA service. The backend store, PureFlash, is HA, but the front-end interface (libvivefs and RocksDB) is not. So it may take a long time to replay the WAL and restore service after a node failure or power outage.

A possible solution is to place all the memtables and state into a pooled/shareable memory device. When the primary node is down, the standby node can take over all of the primary's memory and continue to provide service.

I think this is not a fantasy. CXL technology is making memory pooling a reality, and ViveNAS will be the first service to benefit from CXL.

Flush all data when the server process is killed by Ctrl-C

_base.h:757
#1  std::atomic<void*>::exchange (this=0x5581f98fb999, __p=0x7fe018cad360 <rocksdb::SuperVersion::dummy>, __m=std::memory_order_acquire) at /usr/include/c++/9/atomic:528
#2  0x00007fe0186472b0 in rocksdb::ThreadLocalPtr::StaticMeta::Swap (this=0x5584a1c5c210, id=2, ptr=0x7fe018cad360 <rocksdb::SuperVersion::dummy>) at util/thread_local.cc:424
#3  0x00007fe018647b4c in rocksdb::ThreadLocalPtr::Swap (this=0x5584a1ccafe0, ptr=0x7fe018cad360 <rocksdb::SuperVersion::dummy>) at util/thread_local.cc:539
#4  0x00007fe0180a6c82 in rocksdb::ColumnFamilyData::GetThreadLocalSuperVersion (this=0x5584a1ce5690, db=0x5584a1c95080) at db/column_family.cc:1204
#5  0x00007fe018161b21 in rocksdb::DBImpl::GetAndRefSuperVersion (this=0x5584a1c95080, cfd=0x5584a1ce5690) at db/db_impl/db_impl.cc:3453
#6  0x00007fe018159cf2 in rocksdb::DBImpl::GetImpl (this=0x5584a1c95080, read_options=..., key=..., get_impl_options=...) at db/db_impl/db_impl.cc:1781
#7  0x00007fe0181598b7 in rocksdb::DBImpl::Get (this=0x5584a1c95080, read_options=..., column_family=0x5584a1cc76b0, key=..., value=0x7ffd55ab7cf0, timestamp=0x0) at db/db_impl/
db_impl.cc:1727
#8  0x00007fe0181597bd in rocksdb::DBImpl::Get (this=0x5584a1c95080, read_options=..., column_family=0x5584a1cc76b0, key=..., value=0x7ffd55ab7cf0) at db/db_impl/db_impl.cc:1717
#9  0x00007fe01868ee59 in rocksdb::StackableDB::Get (this=0x5584a1cc3560, options=..., column_family=0x5584a1cc76b0, key=..., value=0x7ffd55ab7cf0) at ./include/rocksdb/utilitie
s/stackable_db.h:88
#10 0x00007fe018e541b8 in vn_umount (ctx=0x5584a1c69eb0) at ../../../src/main.cpp:178
#11 0x00007fe018f416ab in finish () at ../../../nfs-ganesha/src/FSAL/FSAL_VIVENAS/vn_main.c:293
#12 0x00007fe01aed21a3 in ?? () from /lib64/ld-linux-x86-64.so.2
#13 0x00007fe01aaafa57 in __run_exit_handlers (status=0, listp=0x7fe01ac4e738 <__exit_funcs>, run_list_atexit=run_list_atexit@entry=true, run_dtors=run_dtors@entry=true) at exit
.c:108
#14 0x00007fe01aaafc00 in __GI_exit (status=<optimized out>) at exit.c:139
#15 0x00007fe018e54931 in sigroutine (signo=2) at ../../../src/main.cpp:251
#16 <signal handler called>
#17 0x00007fe01ac5fb75 in __pthread_clockjoin_ex (threadid=140598254212672, thread_return=0x0, clockid=<optimized out>, abstime=<optimized out>, block=<optimized out>) at pthrea
d_join_common.c:145
#18 0x00007fe01acad3ca in nfs_start (p_start_info=0x5584a1558010 <my_nfs_start_info>) at /root/v2/nfs-ganesha/src/MainNFSD/nfs_init.c:949
#19 0x00005584a1554835 in main (argc=6, argv=0x7ffd55ab88b8) at /root/v2/nfs-ganesha/src/MainNFSD/nfs_main.c:523
(gdb) f 11
#11 0x00007fe018f416ab in finish () at ../../../nfs-ganesha/src/FSAL/FSAL_VIVENAS/vn_main.c:293
293                     vn_umount(myself->mount_ctx);

Segmentation fault in vn_readdir

On its very first call, vn_readdir is passed the parameter whence != NULL, so vn_readdir assumes this is not the first call and casts whence to a vn_inode_iterator* for use.

Tried Ganesha V4.0.5 and the V3 branch; both show the same problem.
