
glow's Introduction

glow

Build Status GoDoc

Purpose

Glow provides a library for easy computation in parallel threads or distributed across clusters of machines. It is written in pure Go.

I am also working on another pure-Go system, https://github.com/chrislusf/gleam , which is more flexible and more performant.

Installation

$ go get github.com/chrislusf/glow
$ go get github.com/chrislusf/glow/flow

One minute tutorial

Simple Start

Here is a simple full example:

package main

import (
	"flag"
	"strings"

	"github.com/chrislusf/glow/flow"
)

func main() {
	flag.Parse()

	flow.New().TextFile(
		"/etc/passwd", 3,
	).Filter(func(line string) bool {
		return !strings.HasPrefix(line, "#")
	}).Map(func(line string, ch chan string) {
		for _, token := range strings.Split(line, ":") {
			ch <- token
		}
	}).Map(func(key string) int {
		return 1
	}).Reduce(func(x int, y int) int {
		return x + y
	}).Map(func(x int) {
		println("count:", x)
	}).Run()
}

Try it.

  $ ./word_count

It reads the input text file, '/etc/passwd', in 3 goroutines, runs the filter/map/map steps, reduces the results to one number in one goroutine (not exactly one goroutine, but let's skip the details for now), and prints it out.

This is already useful, saving lots of idiomatic but repetitive code on channels, sync.WaitGroup, etc., to fully utilize more CPU cores.
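
For a sense of what is saved, here is roughly the plain-Go boilerplate that such a flow replaces: a hand-rolled sketch (no glow involved) that fans lines out to three worker goroutines, counts tokens, and merges the partial counts.

package main

import (
	"bufio"
	"os"
	"strings"
	"sync"
)

func main() {
	lines := make(chan string)
	counts := make(chan int)

	// Three workers, mirroring the 3 partitions in the glow example.
	var workers sync.WaitGroup
	for i := 0; i < 3; i++ {
		workers.Add(1)
		go func() {
			defer workers.Done()
			n := 0
			for line := range lines {
				if strings.HasPrefix(line, "#") {
					continue
				}
				n += len(strings.Split(line, ":"))
			}
			counts <- n
		}()
	}

	// Feed lines to the workers, then close the channel.
	go func() {
		defer close(lines)
		f, err := os.Open("/etc/passwd")
		if err != nil {
			panic(err)
		}
		defer f.Close()
		scanner := bufio.NewScanner(f)
		for scanner.Scan() {
			lines <- scanner.Text()
		}
	}()

	// Close counts once all workers are done, then sum the partial counts.
	go func() {
		workers.Wait()
		close(counts)
	}()

	total := 0
	for n := range counts {
		total += n
	}
	println("count:", total)
}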

However, there is one more thing! It can run across a Glow cluster, which can span multiple servers/racks/data centers!

Scale it out

To set up the Glow cluster, we do not need experts on Zookeeper/HDFS/Mesos/YARN, etc. Just build or download one binary file.

Setup the cluster

  # Fetch and install via go, or just download it from somewhere.
  $ go get github.com/chrislusf/glow
  # Run a script from the root directory of the repo to start a test cluster.
  $ etc/start_local_glow_cluster.sh

Glow Master and Glow Agent run very efficiently. They take about 6.5 MB and 5.5 MB of memory respectively in my environments. I would recommend setting up agents on any server you can find; you can then tap into their computing power whenever you need it.

Start the driver program

To leap from one computer to clusters of computers, add this line to the import list:

	_ "github.com/chrislusf/glow/driver"

And put this line as the first statement in the main() function:

	flag.Parse()

This will "steroidize" the code to run in cluster mode!

$ ./word_count -glow -glow.leader="localhost:8930"

The word_count program becomes a driver program, dividing the execution into a directed acyclic graph (DAG) and sending tasks to agents.

Visualize the flow

To understand how each executor works, you can visualize the flow by generating a dot file of the flow and rendering it to a PNG file via the "dot" command provided by Graphviz.

$ ./word_count -glow -glow.flow.plot > x.dot
$ dot -Tpng -otestSelfJoin.png x.dot

Glow Hello World Execution Plan

Read More

  1. Wiki page: https://github.com/chrislusf/glow/wiki
  2. Mailing list: https://groups.google.com/forum/#!forum/glow-user-discussion
  3. Examples: https://github.com/chrislusf/glow/tree/master/examples/

Docker container

Docker is not required, but if you like Docker, here are the instructions.

# Cross compile artefact for docker
$ GOOS=linux GOARCH=amd64 CGO_ENABLED=0 go build .
# build container
$ docker build -t glow .

See examples/ directory for docker-compose setups.

Contribution

Start using it! Report or fix any issues you see, and add any features you want.

Fork it, code it, and send pull requests. It's better to first discuss the feature you want on the mailing list: https://groups.google.com/forum/#!forum/glow-user-discussion

License

http://www.apache.org/licenses/LICENSE-2.0

glow's People

Contributors

alessar, alpe, ariefdarmawan, chrislusf, gagliardetto, haraldnordgren, johansundell, justicezyx, radike


glow's Issues

Has it been used in the commercial production environment so far?

I am interested in this project; it is impressive. It turns out that Go has an implementation similar to Hadoop and Flink. Has it been used in a commercial production environment so far? Is there a company using, supporting, and developing it? As we know, Hadoop and Flink are very mature. Can Glow provide the same stable functionality, or is this a toy project based on your interest?

Integrate glow with Kubernetes

TL;DR: the cluster management functionality can be built to work natively using Kubernetes' APIs.

The Glow framework would be extended to manage its jobs through Kubernetes' APIs. A new logical working unit, the controller, is introduced, which manages master/mapper/reducer jobs.

A glow application is built as a standalone binary. The user specifies the parameters, like how many mappers/reducers, the input/output, etc. The user then starts the application on Kubernetes with Kubernetes' own basic client tool. When the application starts, it changes its role to controller and starts the other jobs based on the user-configured parameters.

The benefit would be that Glow can run as a standalone batch job that works natively on top of Kubernetes. To run the glow application, it would need to be built as a Docker image. It could then be started and terminated on demand, eliminating the cost of maintaining a dedicated cluster.

CentOS 6.4: word count in distributed mode runs into an error

////////////////////////////Distributed Mode Conf////////////////////////////
./glow master
./glow agent --dir data --max.executors=16 --memory=2048 --master="localhost:8930" --port 8931
./glow agent --dir data1 --max.executors=16 --memory=2048 --master="localhost:8930" --port 8932
go run word_count.go -glow -glow.leader="localhost:8930" -glow.related.files="passwd"

///////////////////////////////word_count.go/////////////////////////////////////////
package main

import (
	"flag"
	"fmt"
	"strings"
	"strconv"
	"time"
	"sync"
	"encoding/gob"
	_ "github.com/chrislusf/glow/driver"
	"github.com/chrislusf/glow/flow"
)

type WordCountResult struct {
	Addr string
	Info MemInfo
}

type MemInfo struct {
	Addr  string
	Size  int
	Count int
}

func init() {
	gob.Register(MemInfo{})
}

func goStart(wg *sync.WaitGroup, fn func()) {
	wg.Add(1)
	go func() {
		defer wg.Done()
		fn()
	}()
}

func testWordCount1() {

	println("testWordCount1")
	flowOut1 := make(chan WordCountResult)
	f1 := flow.New()
	f1.TextFile(
		"passwd", 2,
	).Map(func(line string, ch chan MemInfo) {
		words := strings.Split(line, ":")
		if s, err := strconv.ParseInt(words[1], 16, 0); err == nil {
			ch <- MemInfo{words[0], int(s), 1}
		}
	}).Map(func(ws MemInfo) (string, MemInfo) {
		return ws.Addr, ws
	}).ReduceByKey(func(x MemInfo, y MemInfo) MemInfo {
		return MemInfo{x.Addr, x.Size + y.Size, x.Count + y.Count}
	}).AddOutput(flowOut1)

	flow.Ready()

	startTime := time.Now().UnixNano()
	var wg sync.WaitGroup
	goStart(&wg, func() {
		f1.Run()
	})

	goStart(&wg, func() {
		for t := range flowOut1 {
			fmt.Printf("%s size:%-8d count:%-8d\n",
				t.Info.Addr, t.Info.Size, t.Info.Count)
		}
	})

	wg.Wait()

	endTime := time.Now().UnixNano()
	fmt.Printf("UseTime:%d\n", (endTime-startTime)/1000000)
}

func main() {
	flag.Parse()
	testWordCount1()
}

/////////////////////////////////console output///////////////////////
[::1]:8931>2016/02/28 10:34:06 receive error:read tcp [::1]:46225->[::1]:8931: read: connection reset by peer

/////////////////////////////////passwd///////////////////////////////////
0x001b8aa0:00000012
0x001b8aa0:00000012
0x001b8aa0:00000020
0x001b8aa0:00000012
0x001b8aa0:00000400
0x001b8aa0:00000096
0x001b8aa0:00000064
0x001b8aa0:00000012
0x001b8aa0:00000020
0x001b8aa0:00000008
0x001b8aa0:00000012
0x001b8aa0:00000020
0x001b8aa0:00000016
0x001b8aa0:00000012
0x001b8aa0:00000020
0x001b8aa0:00000016
0x001b8aa0:00000021
0x76fb9640:00000008
0x001b8aa0:00000020
0x001b8aa0:00000020
0x001b8aa0:00000020
0x001b8aa0:00000032
0x001b8aa0:00000016
0x001b8aa0:00000012
0x001b8aa0:00000020
0x001b8ab8:00000512
0x76fb9640:00000008
0x76fb9640:00000008
0x76fb9640:00000008
0x76fb9640:00000008
0x7688b540:00000057
0x76fb9640:00000008
.......

http server address and port should be configurable

In https://github.com/chrislusf/glow/blob/master/driver/rsync/http_server.go#L91 a random port is selected, which makes things very hard when access is restricted via firewalls or in a container environment where ports must be exposed explicitly.
The remote address used with the heartbeat can also be a local one, which makes it impossible for any client outside to connect. I ran into these problems when I tried to connect to a local Docker cluster from the host system.

Windows: undefined: syscall.SIGINFO (reopened)

Hi,

I'm reopening this issue, as glow/flow didn't install properly this time either. A regression might have been introduced.

As glow was installed (the first step, go get github.com/chrislusf/glow, was successful), I tried installing only flow (go get github.com/chrislusf/glow/flow), but this produced the same result as before.

Then I manually uninstalled glow and reinstalled it successfully. But this time, the second step yields:

> go get github.com/chrislusf/glow/flow
# github.com/chrislusf/glow/flow
C:\Go\src\github.com\chrislusf\glow\flow\signal_handling.go:21: undefined: syscall.SIGINFO
C:\Go\src\github.com\chrislusf\glow\flow\signal_handling.go:29: undefined: syscall.SIGINFO
C:\Go\src\github.com\chrislusf\glow\flow\signal_handling_windows.go:11: OnInterrupt redeclared in this block
C:\Go\src\github.com\chrislusf\glow\flow\signal_handling_windows.go:26: OnInterrupt.func1 redeclared in this block
previous declaration at C:\Go\src\github.com\chrislusf\glow\flow\signal_handling.go:26
previous declaration at C:\Go\src\github.com\chrislusf\glow\flow\signal_handling.go:11
C:\Go\src\github.com\chrislusf\glow\flow\signal_handling_windows.go:29: undefined: syscall.SIGINFO

Hope this is useful.

Is there a means of teeing the flow?

This isn't a great example, but I'm looking for a way to do something like this:

stream := flow.New().Source(source, parts).Map(mapper)
flow_a, flow_b := stream.Tee() // <-- this is what i really want
go flow_a.Filter(a_filter).
  Reduce(a_reduce).
  Run()
flow_b.Filter(b_filter).
  Reduce(b_reduce).
  Run()

Basically, I want to be able to filter the same stream multiple ways without iterating over the stream multiple times.
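
Outside of glow, what I mean by a tee is roughly the plain-Go sketch below; it only illustrates the semantics I'm asking for and is not a glow API.

package main

// tee forwards every value from in to both returned channels.
func tee(in <-chan int) (<-chan int, <-chan int) {
	a := make(chan int)
	b := make(chan int)
	go func() {
		defer close(a)
		defer close(b)
		for v := range in {
			a <- v
			b <- v
		}
	}()
	return a, b
}

func main() {
	in := make(chan int)
	go func() {
		defer close(in)
		for i := 0; i < 5; i++ {
			in <- i
		}
	}()

	a, b := tee(in)
	done := make(chan struct{})
	go func() {
		defer close(done)
		for v := range b {
			println("b:", v)
		}
	}()
	for v := range a {
		println("a:", v)
	}
	<-done
}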

Question about Channel->Map->Map->ReduceByKey->AddOutput

////////////////////////Question/////////////////////////////
The flow blocks, and the println("reduce:") inside ReduceByKey never executes.

////////////////////////////Source Code///////////////////////

package main

import (
	"flag"
	"fmt"
	"strings"
	"strconv"
	"sync"
	_ "os"
	_ "bufio"
	_ "io"
	"io/ioutil"
	"encoding/gob"
	_ "github.com/chrislusf/glow/driver"
	"github.com/chrislusf/glow/flow"
)

type WordSentence struct {
	Word       string
	LineNumber int
}

type AccessByAgeGroup struct {
	Addr string
	Info MemInfo
}

type MemInfo struct {
	Addr  string
	Size  int
	Count int
}

func init() {
	gob.Register(MemInfo{})
}

func goStart(wg *sync.WaitGroup, fn func()) {
	wg.Add(1)
	go func() {
		defer wg.Done()
		fn()
	}()
}

func testWordCount2() {

	println("testWordCount2")
	flowOut2 := make(chan AccessByAgeGroup)
	chIn := make(chan string)
	f2 := flow.New()

	f2.Channel(chIn).Partition(1).Map(func(line string) MemInfo {
		//println(line)
		words := strings.Split(line, ":")
		//println(words[0] + " " + words[1])
		s, _ := strconv.ParseInt(words[1], 16, 0)
		return MemInfo{words[0], int(s), 1}
	}).Map(func(ws MemInfo) (string, MemInfo) {
		println(ws.Addr)
		return ws.Addr, ws
	}).ReduceByKey(func(x MemInfo, y MemInfo) MemInfo {
		println("reduce:")
		return MemInfo{x.Addr, x.Size + y.Size, x.Count + y.Count}
	}).AddOutput(flowOut2)

	flow.Ready()

	var wg sync.WaitGroup

	goStart(&wg, func() {
		f2.Run()
	})

	goStart(&wg, func() {
		for t := range flowOut2 {
			fmt.Printf("%s size:%-8d count:%-8d\n",
				t.Info.Addr, t.Info.Size, t.Info.Count)
		}
	})

	bytes, err := ioutil.ReadFile("passwd")
	if err != nil {
		println("Failed to read")
		return
	}

	lines := strings.Split(string(bytes), "\r\n")
	for _, line := range lines {
		chIn <- line
	}

	wg.Wait()
}

func main() {
	flag.Parse()
	testWordCount2()
}

[future] multi masters to avoid SPOF

The master only has soft state collected from agents. We can use the Raft protocol to dynamically elect leaders from a cluster of masters, similar to what I did in github.com/chrislusf/seaweedfs.

But let's hold off on it for now. Just restart your master server if it ever fails.

Example to run on Google compute engine

Glow should be very suitable for running in the cloud. It would be nice to see that we can run a simple driver program and have it invoke Google Compute Engine.

Some benefits I can see:

  1. the ingress data transportation is free.
  2. GCE's cost is per second. Glow is very fast to start and stop.

Suggestions:

  1. Starting a globally accessible glow master may make it easier.
  2. Glow just needs local SSDs.

Manage Dependencies

go get ./... won't give you reproducible builds. A section in the Readme would be helpful.

cluster auth support

Would you please add support for authentication when connecting to the remote cluster ip:port?
For example, it may look like this:

--auth-key=si4C823CD$72^B512

glow Run blocks when reading big file data into MySQL

flow.New().Source(func(chVipDataGlow chan [][]string) {
	defer close(chVipDataGlow)
	file, err := os.Open(fileName)
	fileInfo, err := os.Stat(fileName)
	if err != nil || fileInfo.Size() == 0 {
		return
	}
	defer func() {
		file.Close()
		os.Remove(fileName)
	}()
	if err != nil {
		log.Info(cc).Msgf("enterprise %d: failed to open SFTP file: %v, path: %s", enterpriseId, err, fileName)
		return
	}
	buf := bufio.NewReader(file)
	num := 2000
	// var tmp = make([][]string, 0, num)
	var tmp [][]string
	var i = 0
	for {
		b, err := buf.ReadString('\n')
		if err != nil && err != io.EOF {
			log.Info(cc).Msgf("error reading data: %+v\n", err)
			break
		}
		i++
		cell := strings.Split(b, ",")
		if i == 1 && cell[0] == "data update time" { // skip the header row
			continue
		}
		if len(cell) <= 1 {
			fileCountNumbers += 1
			continue
		}
		tmp = append(tmp, cell)
		if len(tmp) == num {
			fileCountNumbers += num
			chVipDataGlow <- tmp
			tmp = [][]string{}
		}
		if err != nil || err == io.EOF {
			log.Info(cc).Msgf("finished reading data or hit an error: %+v", err)
			break
		}
	}
	if len(tmp) > 0 {
		fileCountNumbers += len(tmp)
		chVipDataGlow <- tmp
		tmp = [][]string{}
	}
}, 10).Map(func(params [][]string) [][]string {
	defer func() {
		if errs := recover(); errs != nil {
			log.Error(cc).Msgf("database insert panicked: %+v", errs)
			errDefer = append(errDefer, params...)
			return
		}
	}()
	if len(params) > 0 {
		sucn, ern, _, errData := c.readDataToTableOneByOne(cc, params, enterpriseId)
		errNums += ern
		successNums += sucn
		return errData
	}
	return params
}).Map(func(errItems [][]string) {
	fmt.Println(errItems)
}).Run()


[doc] how to write dynamic flows

I will put some piece of text here before finalizing it.

Go lacks a feature Java has: sending byte code to remote servers for execution. So how can a user write dynamic code, for example, a loop until a condition is satisfied?

In glow, one program can have a list of flows. Each flow is dynamically allocated to servers. So you can run one flow, get the results, and start another one.

The list of flows must be consistent on both the driver and the task group executors, so they cannot be dynamically defined. All the flows must be statically prepared, but dynamically invoked.

Ideally, just define the flow in the init() section, in one or multiple files.

var f1 *flow.FlowContext
func init(){
  f1 = flow.New() // assign the package-level f1; ':=' would shadow it with a new local variable
  f1.Filter(...).Map(...).Sort()
}

And during normal run time:

func main(){
  // put any complicated logic here, if, else, for loop, etc
  for some_condition {
     f1.Run()
  }
}
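
A slightly fuller sketch of the same idea, using only the operations from the word count example earlier on this page: two flows are prepared statically in init(), and the driver decides at run time which ones to run and how often. The file path and the loop bound here are just placeholders.

package main

import (
	"flag"
	"strings"

	"github.com/chrislusf/glow/flow"
)

var (
	f1 = flow.New()
	f2 = flow.New()
)

func init() {
	// Both flows are fixed at init() time, so the driver and the task
	// group executors agree on the same list of flows.
	f1.TextFile("/etc/passwd", 3).Filter(func(line string) bool {
		return !strings.HasPrefix(line, "#")
	}).Map(func(line string) {
		println(line)
	})

	f2.TextFile("/etc/passwd", 3).Map(func(line string) int {
		return len(strings.Split(line, ":"))
	}).Reduce(func(x int, y int) int {
		return x + y
	}).Map(func(total int) {
		println("total tokens:", total)
	})
}

func main() {
	flag.Parse()

	// Any dynamic logic (if/else, loops, conditions) lives in the driver;
	// it only decides which of the statically prepared flows to run, and when.
	for i := 0; i < 2; i++ {
		f1.Run()
		f2.Run()
	}
}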

TODO: Add notes about how a driver program works, head to toe.

Let's say you have a "hello_world.go" file, compiled into a "hello_world" binary file.

Preparing the execution plan
When "hello_word -glow" starts, it runs locally on your computer (duh...), and it will try to run a flow. It generates an optimized execution plan by merging steps, and divides the steps into task groups based on the number of partitions.

Fetch resources from master
One task group needs to run on one executor, so the driver program will ask the master for available executors.

However, the master may not always give the exact number of executors the driver asks for, due to resource limitations, competition with other driver programs, etc. But since all communication across servers is pull-based and asynchronous, glow copes well with limited resources. In the extreme case, a flow can proceed with just a single executor.

Assign tasks to a server
The driver program assigns a group of tasks to an agent that manages the resources. The agent replicates the driver binary file if it is not already cached, and starts running the binary in "task" mode, specifying "-glow.flow.id", "-glow.taskGroup.id", etc. The driver also tells the task group about the input dataset shard locations.

Task group runs on the server
Since all flows are statically defined, the tasks to run are deterministic given the flow.id, the taskGroup.id, and the input dataset shard locations. The output is written to the local agent. The inputs are retrieved from the locations served by the remote agents.

Input Channels and Output Channels
A flow can also have input channels and output channels, through which the driver program sends and receives data. Ideally you should not pump a lot of data through these channels, just data locations or succinct final results.
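
Below is a minimal, untested sketch of that pattern, following the Channel(...).Partition(...), AddOutput(...), and flow.Ready() usage shown in the issue examples on this page; the channel element types and the line-length Map are placeholders, not part of glow.

package main

import (
	"flag"
	"fmt"

	_ "github.com/chrislusf/glow/driver"
	"github.com/chrislusf/glow/flow"
)

var (
	f   = flow.New()
	in  = make(chan string)
	out = make(chan int)
)

func init() {
	// Statically define the flow: read lines from the driver-side input
	// channel and send each line's length to the driver-side output channel.
	f.Channel(in).Partition(1).Map(func(line string) int {
		return len(line)
	}).AddOutput(out)
}

func main() {
	flag.Parse()
	flow.Ready()

	done := make(chan struct{})
	go func() {
		defer close(done)
		f.Run()
	}()

	// Print results as they arrive.
	go func() {
		for n := range out {
			fmt.Println("line length:", n)
		}
	}()

	// Feed the input channel from the driver, then close it so the flow
	// can drain and finish.
	for _, s := range []string{"root:x:0:0", "daemon:x:1:1"} {
		in <- s
	}
	close(in)

	<-done
}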

undefined: signal.Ignore

Signal Handling Linux

I am receiving an error that seems to be OS specific. I haven't done very much reading, but I hope you can help.

Screen Shots

Error

Error I get when I get/install the packages.
screen shot 2016-03-15 at 6 49 22 pm

Operating system information

screen shot 2016-03-15 at 6 51 07 pm

Thank you for the work that you are doing. This is going to be very helpful to my clustering project!

S

[feature] rsync executables to agents

Currently the "rsync" folder is not actually using rsync. I attempted it, but later just used a normal HTTP copy. I would like someone to help me use real rsync.

A better idea may be to use BitTorrent DHT to distribute the binary file.

Doing partial reduceByKey in Flow created in func init()

Hello.

I apologize if I have missed something obvious, but I am using glow to map and reduce time series. I would like to do a Reduce or ReduceByKey on every time slice (for instance, a ReduceByKey on all events received in the last minute).

Right now, I am setting the code up to be distributed and, following the tutorial, have put my flows in the func init() section so that they are "statically" instantiated (only once, and the same way) on every node.

The data is coming from an unlimited stream (i.e., not from a bounded file). So I have something like this:

func init() {
	mapRecordsToMetadata.
		Channel(mapRecordsToMetadataInput).
		Map(mapTimeSeriesToTimeSliceFunc).
		Map(mapConvertColumnValuesFunc).
		// ... some more maps and filters
		ReduceByKey(reduceByFlowKey).
		AddOutput(mapRecordsToMetadataOutput)
}

// letsDoIt uses mapRecordsToMetadata to map and reduces all events for a given key during a time slice
func letsDoIt(streamedEvents chan []string) chan GroupedEventsByKeyChan {

	out := make(chan GroupedEventsByKeyChan)
	go func() {
		for evt := range streamedEvents {
			mapRecordsToMetadataInput <- evt
		}
	}()

	go func() {
		for evt := range mapRecordsToMetadataOutput {
			out <- evt
		}
	}()

	return out
}

I have simplified a bit, but hopefully this is enough to get the idea. Now, ReduceByKey blocks until I close the mapRecordsToMetadataInput channel (makes sense). However, if I do this, I can't really use my flow mapRecordsToMetadata anymore (is there a way to replace the input channel and restart it?).

Conceptually, I would "close" my input flow (mapRecordsToMetadataInput) at every "time slice" where I want the aggregate to run (i.e., every 30 seconds) so that my ReduceByKey would run on that interval of inputs.

My only option seems to be keeping the "map" operations in the init() section (i.e., mapRecordsToMetadata) and putting the ReduceByKey() operation in a dynamic flow, recreating the dynamic flow every 30 seconds in my case.

Something like this:

func init() {
	mapRecordsToMetadata.
		Channel(mapRecordsToMetadataInput).
		Map(mapTimeSeriesToTimeSliceFunc).
		Map(mapConvertColumnValuesFunc).
		// ... some more maps and filters
		// Removed the ReduceByKey
		AddOutput(mapRecordsToMetadataOutput)
}

func letsDoIt(streamedEvents chan []string) chan GroupedEventsByKeyChan {

	out := make(chan GroupedEventsByKeyChan)
	go func() {
		for evt := range streamedEvents {
			mapRecordsToMetadataInput <- evt
		}
	}()

	go func() {
		nextInterval := time.Now().Add(30 * time.Second)
		for {

			reduceFlow := flow.New()
			reduceInChan := make(chan EventsByKeychan)
			reduceFlow.
				Channel(reduceInChan).
				ReduceByKey(reduceByFlowKey).
				AddOutput(out)

			for evt := range mapRecordsToMetadataOutput {
				reduceInChan <- evt

				if evt.Time.After(nextInterval) {
					// flush and reduce for that interval
					close(reduceInChan)
					nextInterval = nextInterval.Add(30 * time.Second)
				}
			}
		}
	}()

	return out
}

Is this the "right", canonical way to proceed? Does it scale? Or are we missing a small feature that would allow us to "flush" our static flows at fixed intervals or on demand, so that we can handle streaming use cases in a more streamlined fashion?

Add unit tests for moderately complex APIs across the code base

The lack of unit tests greatly slows down the pace of understanding, discussing, and modifying the code. I think the lack of unit tests blocks my work of integrating Glow with Kubernetes: I have little idea of how the code is supposed to work, and there is virtually no check on the correctness of changes. It also greatly increases the effort needed to understand the code base.

I think it makes sense to add unit tests covering the relatively obvious/common use cases, which should be quick to finish and bring a lot of benefits.

I plan to go through the code base and add tests along the way. Hopefully this also accelerates my ramp-up with the code base.

Any thoughts, comments, or objections?

Fold operation

Is there any way to implement a fold operation with the current dataset methods? If not, is there a plan to add it? I'd be willing to give it a shot.

Issues at start_local_glow_cluster.sh

etc/start_local_glow_cluster.sh

glow master --address="${MASTER_ADDRESS}" &>/dev/null &
Each line should instead be like:
./glow master --address="${MASTER_ADDRESS}"

  1. It allows running the glow application.
  2. It shows errors and logs at startup.

idea: grep in go

https://sift-tool.org/index

This is a replacement for grep, and it seems like it could be a nice match for glow in some ways. It's high-performance and nicely extensible.

I kind of see it as a useful tool when you need glow to orchestrate everything and you need to sift through lots of files.

Curious what you think.

document failure/retry modes in distributed use

Glow looks really elegant, simple to get started with, and cool!

As a new potential user, my biggest question is how error handling and retries (if any) work in distributed mode. Specifically, I was hoping to find either "at most once" or "at least once" delivery referenced in the documentation. (I've browsed the wiki, readme, and skimmed some of the godoc so far -- I might've missed it, but if so, consider this a request for a more central link to the info :) )

I'm not sure if the "at-[most|least]-once" delivery terminology is universal, but here's a description in case I'm speaking gobbledygook: http://bravenewgeek.com/you-cannot-have-exactly-once-delivery/

Based on the functional intentions of glow, I'm guessing it's at-least-once delivery semantics, but I'd really like to know for sure -- it's important in case I want to call other functions that may have side effects (such functions would have to be carefully designed to be idempotent, obviously).

Any plan for a Hive-like execution engine?

Is there any plan for a Hive-like execution engine, containing features like:

  1. Joining and querying heterogeneous data sources.
  2. Parsing and running SQL queries as MapReduce in glow.

Installation steps

There's no "go get" command showing how to install, but I guessed with:

go get -u github.com/chrislusf/glow

which returned no errors, but when I did a go run on the first sample in README.md, I got this:

$ go run glow1.go 
../src/github.com/chrislusf/glow/flow/dataset_sort.go:7:2: cannot find package "github.com/psilva261/timsort" in any of:
    /usr/local/go/src/github.com/psilva261/timsort (from $GOROOT)
    /home/cecil/Workspace/src/github.com/psilva261/timsort (from $GOPATH)
$ 

Looks like there may be dependencies? Could you update the README to provide install instructions?

Consider reducing the number of Travis CI builds

Right now, Travis CI builds for 12 environments, based on a combination of OS and Go versions. The config shows:

os:

  • linux
  • osx
  • windows
  • freebsd

go:

  • 1.5
  • 1.6
  • tip

Maybe we can reduce the build combinations to Linux only, and only build on the previous major Go version + tip; that reduces the builds from 12 to 2.

The benefit is that the build time will be reduced dramatically. The downside is that we lose testing coverage on some OSes. IMHO, this trade-off favors reduced test time, mainly because the extra OSes do not seem to have much presence for Glow's intended use cases, i.e., parallel batch processing.

WDYT?

Windows: undefined: syscall.SIGINFO

Hi,

I've tried to install glow with go1.5.2 on windows-7/amd64. I get this error when getting flow:

> go get github.com/chrislusf/glow/flow
# github.com/chrislusf/glow/flow
D:\Go\Code\src\github.com\chrislusf\glow\flow\utils.go:51: undefined: syscall.SIGINFO
D:\Go\Code\src\github.com\chrislusf\glow\flow\utils.go:59: undefined: syscall.SIGINFO

I haven't found anything special about syscall and Windows yet (https://github.com/golang/sys/blob/master/windows/syscall_windows.go ?) but I'm really ignorant about these topics.

Thanks

All the work is done by only 1 node

I started 3 agents, but only the third agent did any work. Is there anything wrong with my code? I also wonder why there are some empty outputs at the end of the log. Thanks for the help~

package main

import (
	"flag"
	"github.com/chrislusf/glow/flow"
	_ "github.com/chrislusf/glow/driver"
	"log"
	"os"
)

func main() {
	flag.Parse()
	cnt := 1 << 6
	tasks := make([]int, cnt)
	for i := range tasks {
		tasks[i] = 1 << 32
	}
	log.Println("Begin", Hostname())
	flow.New().Slice(tasks).Partition(cnt).Map(func(tot int) {
		i := 0
		for ; i < tot; i++ {}
		log.Println(i, Hostname())
	}).Run()
	log.Println("End", Hostname())
}

func Hostname() string {
	name, err := os.Hostname()
	if err != nil {
		log.Fatalf("Hostname: %v\n", err)
	}
	return name
}

Outputs:

2017/03/02 23:13:33 Begin admin
2017/03/02 23:13:33 localhost:8930 allocated 1 executors.
2017/03/02 23:13:33 localhost:8930 allocated 1 executors.
2017/03/02 23:13:33 localhost:8930 allocated 6 executors.
2017/03/02 23:13:33 localhost:8930 allocated 57 executors.
10.10.10.1:8931>2017/03/02 23:12:25 Begin node1
10.10.10.1:8931>2017/03/02 23:12:25 Begin node1
10.10.10.3:8931>2017/03/02 23:12:25 Begin node3
10.10.10.1:8931>2017/03/02 23:12:25 Begin node1
10.10.10.1:8931>2017/03/02 23:12:25 Begin node1
10.10.10.1:8931>2017/03/02 23:12:25 End node1
10.10.10.3:8931>2017/03/02 23:12:25 Begin node3
10.10.10.1:8931>2017/03/02 23:12:25 End node1
10.10.10.1:8931>2017/03/02 23:12:25 End node1
10.10.10.1:8931>2017/03/02 23:12:25 Begin node1
10.10.10.3:8931>2017/03/02 23:12:25 End node3
10.10.10.1:8931>2017/03/02 23:12:25 Begin node1
10.10.10.1:8931>2017/03/02 23:12:25 End node1
10.10.10.1:8931>2017/03/02 23:12:25 Begin node1
10.10.10.1:8931>2017/03/02 23:12:25 End node1
10.10.10.1:8931>2017/03/02 23:12:25 Begin node1
10.10.10.1:8931>2017/03/02 23:12:25 End node1
10.10.10.1:8931>2017/03/02 23:12:25 Begin node1
10.10.10.1:8931>2017/03/02 23:12:25 Begin node1
10.10.10.1:8931>2017/03/02 23:12:25 End node1
10.10.10.1:8931>2017/03/02 23:12:25 End node1
10.10.10.1:8931>2017/03/02 23:12:25 End node1
10.10.10.3:8931>2017/03/02 23:12:25 End node3
10.10.10.1:8931>2017/03/02 23:12:25 End node1
10.10.10.1:8931>2017/03/02 23:12:25 Begin node1
10.10.10.3:8931>2017/03/02 23:12:25 Begin node3
10.10.10.1:8931>2017/03/02 23:12:25 End node1
10.10.10.3:8931>2017/03/02 23:12:25 Begin node3
10.10.10.3:8931>2017/03/02 23:12:25 Begin node3
10.10.10.3:8931>2017/03/02 23:12:25 Begin node3
10.10.10.1:8931>2017/03/02 23:12:25 Begin node1
10.10.10.3:8931>2017/03/02 23:12:25 End node3
10.10.10.3:8931>2017/03/02 23:12:25 End node3
10.10.10.3:8931>2017/03/02 23:12:25 End node3
10.10.10.3:8931>2017/03/02 23:12:25 End node3
10.10.10.1:8931>2017/03/02 23:12:25 End node1
10.10.10.1:8931>2017/03/02 23:12:25 Begin node1
10.10.10.1:8931>2017/03/02 23:12:25 End node1
10.10.10.3:8931>2017/03/02 23:12:25 Begin node3
10.10.10.3:8931>2017/03/02 23:12:25 Begin node3
10.10.10.1:8931>2017/03/02 23:12:25 Begin node1
10.10.10.3:8931>2017/03/02 23:12:25 End node3
10.10.10.3:8931>2017/03/02 23:12:25 End node3
10.10.10.1:8931>2017/03/02 23:12:25 End node1
10.10.10.1:8931>2017/03/02 23:12:25 Begin node1
10.10.10.3:8931>2017/03/02 23:12:25 Begin node3
10.10.10.3:8931>2017/03/02 23:12:25 End node3
10.10.10.1:8931>2017/03/02 23:12:25 Begin node1
10.10.10.1:8931>2017/03/02 23:12:25 End node1
10.10.10.1:8931>2017/03/02 23:12:25 End node1
10.10.10.1:8931>2017/03/02 23:12:25 Begin node1
10.10.10.3:8931>2017/03/02 23:12:25 Begin node3
10.10.10.1:8931>2017/03/02 23:12:25 End node1
10.10.10.3:8931>2017/03/02 23:12:25 End node3
10.10.10.1:8931>2017/03/02 23:12:25 Begin node1
10.10.10.3:8931>2017/03/02 23:12:25 Begin node3
10.10.10.3:8931>2017/03/02 23:12:25 End node3
10.10.10.1:8931>2017/03/02 23:12:25 Begin node1
10.10.10.1:8931>2017/03/02 23:12:25 End node1
10.10.10.1:8931>2017/03/02 23:12:25 End node1
10.10.10.3:8931>2017/03/02 23:12:25 Begin node3
10.10.10.1:8931>2017/03/02 23:12:25 Begin node1
10.10.10.3:8931>2017/03/02 23:12:25 End node3
10.10.10.1:8931>2017/03/02 23:12:25 End node1
10.10.10.1:8931>2017/03/02 23:12:25 Begin node1
10.10.10.1:8931>2017/03/02 23:12:25 End node1
10.10.10.1:8931>2017/03/02 23:12:25 Begin node1
10.10.10.3:8931>2017/03/02 23:12:25 Begin node3
10.10.10.3:8931>2017/03/02 23:12:25 Begin node3
10.10.10.1:8931>2017/03/02 23:12:25 Begin node1
10.10.10.3:8931>2017/03/02 23:12:25 End node3
10.10.10.3:8931>2017/03/02 23:12:25 End node3
10.10.10.1:8931>2017/03/02 23:12:25 End node1
10.10.10.1:8931>2017/03/02 23:12:25 End node1
10.10.10.1:8931>2017/03/02 23:12:25 Begin node1
10.10.10.1:8931>2017/03/02 23:12:25 End node1
10.10.10.1:8931>2017/03/02 23:12:25 Begin node1
10.10.10.1:8931>2017/03/02 23:12:25 End node1
10.10.10.3:8931>2017/03/02 23:12:25 Begin node3
10.10.10.3:8931>2017/03/02 23:12:25 Begin node3
10.10.10.3:8931>2017/03/02 23:12:25 End node3
10.10.10.3:8931>2017/03/02 23:12:25 End node3
10.10.10.1:8931>2017/03/02 23:12:25 Begin node1
10.10.10.1:8931>2017/03/02 23:12:25 Begin node1
10.10.10.1:8931>2017/03/02 23:12:25 End node1
10.10.10.1:8931>2017/03/02 23:12:25 End node1
10.10.10.3:8931>2017/03/02 23:12:25 Begin node3
10.10.10.3:8931>2017/03/02 23:12:25 Begin node3
10.10.10.1:8931>2017/03/02 23:12:25 Begin node1
10.10.10.3:8931>2017/03/02 23:12:25 End node3
10.10.10.1:8931>2017/03/02 23:12:25 End node1
10.10.10.3:8931>2017/03/02 23:12:25 End node3
10.10.10.1:8931>2017/03/02 23:12:25 Begin node1
10.10.10.1:8931>2017/03/02 23:12:25 Begin node1
10.10.10.1:8931>2017/03/02 23:12:25 End node1
10.10.10.3:8931>2017/03/02 23:12:25 Begin node3
10.10.10.3:8931>2017/03/02 23:12:25 End node3
10.10.10.1:8931>2017/03/02 23:12:25 End node1
10.10.10.1:8931>2017/03/02 23:12:25 Begin node1
10.10.10.3:8931>2017/03/02 23:12:25 Begin node3
10.10.10.1:8931>2017/03/02 23:12:25 End node1
10.10.10.1:8931>2017/03/02 23:12:25 Begin node1
10.10.10.3:8931>2017/03/02 23:12:25 End node3
10.10.10.1:8931>2017/03/02 23:12:25 End node1
10.10.10.1:8931>2017/03/02 23:12:25 Begin node1
10.10.10.3:8931>2017/03/02 23:12:25 Begin node3
10.10.10.1:8931>2017/03/02 23:12:25 End node1
10.10.10.3:8931>2017/03/02 23:12:25 End node3
10.10.10.1:8931>2017/03/02 23:12:25 Begin node1
10.10.10.1:8931>2017/03/02 23:12:25 End node1
10.10.10.3:8931>2017/03/02 23:12:25 Begin node3
10.10.10.1:8931>2017/03/02 23:12:25 Begin node1
10.10.10.3:8931>2017/03/02 23:12:25 Begin node3
10.10.10.1:8931>2017/03/02 23:12:25 Begin node1
10.10.10.3:8931>2017/03/02 23:12:25 End node3
10.10.10.3:8931>2017/03/02 23:12:25 End node3
10.10.10.1:8931>2017/03/02 23:12:25 End node1
10.10.10.1:8931>2017/03/02 23:12:25 End node1
10.10.10.3:8931>2017/03/02 23:12:25 Begin node3
10.10.10.3:8931>2017/03/02 23:12:25 End node3
10.10.10.3:8931>2017/03/02 23:12:25 Begin node3
10.10.10.3:8931>2017/03/02 23:12:25 End node3
10.10.10.3:8931>2017/03/02 23:12:25 Begin node3
10.10.10.3:8931>2017/03/02 23:12:25 Begin node3
10.10.10.3:8931>2017/03/02 23:12:25 End node3
10.10.10.3:8931>2017/03/02 23:12:25 End node3
10.10.10.3:8931>2017/03/02 23:12:25 Begin node3
10.10.10.3:8931>2017/03/02 23:12:25 Begin node3
10.10.10.3:8931>2017/03/02 23:12:25 End node3
10.10.10.3:8931>2017/03/02 23:12:27 4294967296 node3
10.10.10.3:8931>2017/03/02 23:12:28 4294967296 node3
10.10.10.3:8931>2017/03/02 23:12:30 4294967296 node3
10.10.10.3:8931>2017/03/02 23:12:31 4294967296 node3
10.10.10.3:8931>2017/03/02 23:12:33 4294967296 node3
10.10.10.3:8931>2017/03/02 23:12:34 4294967296 node3
10.10.10.3:8931>2017/03/02 23:12:36 4294967296 node3
10.10.10.3:8931>2017/03/02 23:12:38 4294967296 node3
10.10.10.3:8931>2017/03/02 23:12:39 4294967296 node3
10.10.10.3:8931>2017/03/02 23:12:41 4294967296 node3
10.10.10.3:8931>2017/03/02 23:12:42 4294967296 node3
10.10.10.3:8931>2017/03/02 23:12:44 4294967296 node3
10.10.10.3:8931>2017/03/02 23:12:46 4294967296 node3
10.10.10.3:8931>2017/03/02 23:12:47 4294967296 node3
10.10.10.3:8931>2017/03/02 23:12:49 4294967296 node3
10.10.10.3:8931>2017/03/02 23:12:50 4294967296 node3
10.10.10.3:8931>2017/03/02 23:12:52 4294967296 node3
10.10.10.3:8931>2017/03/02 23:12:53 4294967296 node3
10.10.10.3:8931>2017/03/02 23:12:55 4294967296 node3
10.10.10.3:8931>2017/03/02 23:12:56 4294967296 node3
10.10.10.3:8931>2017/03/02 23:12:58 4294967296 node3
10.10.10.3:8931>2017/03/02 23:12:59 4294967296 node3
10.10.10.3:8931>2017/03/02 23:13:01 4294967296 node3
10.10.10.3:8931>2017/03/02 23:13:02 4294967296 node3
10.10.10.3:8931>2017/03/02 23:13:04 4294967296 node3
10.10.10.3:8931>2017/03/02 23:13:06 4294967296 node3
10.10.10.3:8931>2017/03/02 23:13:07 4294967296 node3
10.10.10.3:8931>2017/03/02 23:13:09 4294967296 node3
10.10.10.3:8931>2017/03/02 23:13:10 4294967296 node3
10.10.10.3:8931>2017/03/02 23:13:12 4294967296 node3
10.10.10.3:8931>2017/03/02 23:13:13 4294967296 node3
10.10.10.3:8931>2017/03/02 23:13:15 4294967296 node3
10.10.10.3:8931>2017/03/02 23:13:16 4294967296 node3
10.10.10.3:8931>2017/03/02 23:13:18 4294967296 node3
10.10.10.3:8931>2017/03/02 23:13:19 4294967296 node3
10.10.10.3:8931>2017/03/02 23:13:21 4294967296 node3
10.10.10.3:8931>2017/03/02 23:13:23 4294967296 node3
10.10.10.3:8931>2017/03/02 23:13:24 4294967296 node3
10.10.10.3:8931>2017/03/02 23:13:26 4294967296 node3
10.10.10.3:8931>2017/03/02 23:13:27 4294967296 node3
10.10.10.3:8931>2017/03/02 23:13:29 4294967296 node3
10.10.10.3:8931>2017/03/02 23:13:30 4294967296 node3
10.10.10.3:8931>2017/03/02 23:13:32 4294967296 node3
10.10.10.3:8931>2017/03/02 23:13:33 4294967296 node3
10.10.10.3:8931>2017/03/02 23:13:35 4294967296 node3
10.10.10.3:8931>2017/03/02 23:13:37 4294967296 node3
10.10.10.3:8931>2017/03/02 23:13:38 4294967296 node3
10.10.10.3:8931>2017/03/02 23:13:40 4294967296 node3
10.10.10.3:8931>2017/03/02 23:13:41 4294967296 node3
10.10.10.3:8931>2017/03/02 23:13:43 4294967296 node3
10.10.10.3:8931>2017/03/02 23:13:44 4294967296 node3
10.10.10.3:8931>2017/03/02 23:13:46 4294967296 node3
10.10.10.3:8931>2017/03/02 23:13:47 4294967296 node3
10.10.10.3:8931>2017/03/02 23:13:49 4294967296 node3
10.10.10.3:8931>2017/03/02 23:13:50 4294967296 node3
10.10.10.3:8931>2017/03/02 23:13:52 4294967296 node3
10.10.10.3:8931>2017/03/02 23:13:54 4294967296 node3
10.10.10.3:8931>2017/03/02 23:13:55 4294967296 node3
10.10.10.3:8931>2017/03/02 23:13:57 4294967296 node3
10.10.10.3:8931>2017/03/02 23:13:58 4294967296 node3
10.10.10.3:8931>2017/03/02 23:14:00 4294967296 node3
10.10.10.3:8931>2017/03/02 23:14:01 4294967296 node3
10.10.10.3:8931>2017/03/02 23:14:03 4294967296 node3
10.10.10.3:8931>2017/03/02 23:14:04 4294967296 node3
10.10.10.3:8931>2017/03/02 23:14:04 End node3
10.10.10.3:8931
10.10.10.3:8931
10.10.10.3:8931
... (61 more identical empty output lines from 10.10.10.3:8931)
2017/03/02 23:15:12 End admin

Any ideas about adding Lua (LuaJIT)?

It's quite fast (we are already using it in high-load systems).

It would be nice to have:

  1. defining flows dynamically
  2. using Lua code for map/reduce/filter, etc.

Read size invalid argument - expected data input?

I have Glow running on 1 machine just fine, but when I try to simulate the glow cluster system on my local machine via:
glow master --address 0.0.0.0:8930
glow agent --dir="/Users/andrew/Desktop/GlowFolder" --port=8931 --master="0.0.0.0:8930" --memory=4096 --clean.restart --cpu.level=4

And start the app via:
myapp -glow -glow.leader="0.0.0.0:8930"

  1. If I don't have my executable in the Desktop/GlowFolder, I get an error saying Failed to start command ./myapp under /Users/andrewt/Desktop/GlowFolder: fork/exec ./myapp: no such file or directory
    I thought the --dir flag was just for temp documents; do I need to copy the app binary to that folder as well?

  2. Read size:
    If I run from a folder containing myapp's binary, then it runs, but the glow agent outputs the following error:
    2017/03/21 09:41:24 Read size from -ct-0-ds-0-shard-4 offset 1054782852: read /Users/andrew/Desktop/GlowFolder/-ct-0-ds-0-shard-4-8931.dat: invalid argument
    How is the read size determined, and what does it expect?

Failed to create a queue on disk error

I get the error Failed to create a queue on disk: Failed to open ... no such file or directory when a program is repeatedly executed (in distributed mode). The problem seems to be related to the size of the cluster: it happens often in a cluster of 20 computers, but a cluster of 10 computers works fine.

The problem is caused by RotatingFileStore#init, which fails to open the old log files, and therefore CreateNamedDatasetShard returns nil.

I think that RotatingFileStore#init should not open the old log files, because they should already have been removed by the previous statement in CreateNamedDatasetShard (done by m.doDelete). Isn't the problem caused by ioutil.ReadDir(l.dir()) returning an old view of the file system?

Top of the stack trace:

2016/04/29 12:33:51 Failed to create a queue on disk: Failed to open bbaf2eac-ct-0-ds-2-shard-1111-8816-2016-04-29T12-01-44.112.dat: open bbaf2eac-ct-0-ds-2-shard-1111-8816-2016-04-29T12-01-44.112.dat: no such file or directory
panic: runtime error: invalid memory address or nil pointer dereference
[signal 0xb code=0x1 addr=0x20 pc=0x4ca2f8]

goroutine 79 [running]:
github.com/chrislusf/glow/util.WriteBytes(0x0, 0x0, 0xc8201a2d90, 0x4, 0x4, 0xc82040c720)
/corpora/programy/manatee-go/git/src/github.com/chrislusf/glow/util/read_write.go:64 +0xf8
github.com/chrislusf/glow/agent.(*AgentServer).handleLocalWriteConnection(0xc82001c540, 0x7fe6fb99c2f0, 0xc8201b81c0, 0xc8201a1b80, 0x1d)
/corpora/programy/manatee-go/git/src/github.com/chrislusf/glow/agent/agent_server_write.go:25 +0x1bf
github.com/chrislusf/glow/agent.(*AgentServer).handleRequest(0xc82001c540, 0x7fe6fb99c290, 0xc8201b81c0)
/corpora/programy/manatee-go/git/src/github.com/chrislusf/glow/agent/agent_server.go:172 +0x854
github.com/chrislusf/glow/agent.(*AgentServer).Run.func2(0xc82001c540, 0x7fe6fb99c290, 0xc8201b81c0)
/corpora/programy/manatee-go/git/src/github.com/chrislusf/glow/agent/agent_server.go:135 +0xa1
created by github.com/chrislusf/glow/agent.(*AgentServer).Run
/corpora/programy/manatee-go/git/src/github.com/chrislusf/glow/agent/agent_server.go:136 +0x367
