
chapel-lang / chapel


a Productive Parallel Programming Language

Home Page: https://chapel-lang.org

License: Other

Makefile 0.37% C++ 28.55% Perl 0.32% C 8.30% Shell 1.56% LLVM 0.05% Python 3.15% Emacs Lisp 0.03% Gnuplot 0.03% Chapel 57.32% TeX 0.05% Cuda 0.01% Fortran 0.05% Zimpl 0.01% Mathematica 0.01% CSS 0.01% HTML 0.01% JavaScript 0.14% Batchfile 0.01% Lex 0.03%
compiler gpgpu gpu hpc language parallel

chapel's People

Contributors

aconsroe-hpe, arezaii, ben-albrecht, benharsh, bhavanijayakumaran, bmcdonald3, bradcray, cassella, danilafe, daviditen, dlongnecke-cray, e-kayrakli, gbtitus, jabraham17, jeremiah-corrado, jhh67, kyle-b, lydia-duncan, mppf, noakesmichael, npadmana, riftember, ronawho, shreyaskhandekar, spartee, stonea, sungeunchoi, thomasvandoren, tomyhoi, vasslitvinov


chapel's Issues

parallel file.lines()

We would like to have a parallel iterator that traverses a file, one line at a time.
We already have a serial file.lines() iterator.

Some initial code towards this goal is here:

https://github.com/tzakian/chapel/blob/bec7f05392141a508ed8042952c3e20c1375facf/modules/standard/ParSplit.chpl

and it is related to the HDFSIterator module.

Note that QIO includes a mechanism to figure out what block size should be used when reading a file and which Locales are "closest" to a particular block. This functionality is available in file.getchunk and file.localesForRegion.
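For context, the existing serial iterator is used like this (a sketch; exact IO API spellings such as `ioMode` vary across Chapel versions):

```chapel
use IO;

var f = open("data.txt", ioMode.r);

// Today: serial iteration, one line at a time.
for line in f.lines() do
  write(line);

// Desired: a parallel overload, so that a forall could distribute
// contiguous blocks of lines across tasks/locales:
// forall line in f.lines() do process(line);
```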

record inheritance should be an error

Currently, record inheritance is syntactically allowed but can only be used to inherit fields.

We would rather have a 'mix-in' type strategy and disable record inheritance entirely.
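A minimal illustration of the current behavior (sketch):

```chapel
record Base {
  var x: int;
}

// Accepted syntactically today, but only Base's *fields* are
// inherited; there is no subtyping relationship between records.
record Derived : Base {
  var y: int;
}

var d: Derived;
writeln(d.x, " ", d.y);  // both fields are present
```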

User-defined Coercion and Cast Syntax

Chapel currently supports user-defined casts but not coercions. However, the way to declare a user-defined cast is undocumented and odd. Here is an example:

proc _cast(type toType, src) where isSomeType(toType) {
     ...
}

It would be nice to have a more user-facing feature here.

One proposal for the way to declare a user-defined cast is like this:

proc sourceType.cast(type toType) {
     ...
}

and then the way to indicate automatic coercion was possible between sourceType and toType is like so:

proc sourceType.coercible(type toType) param {
     return true;
}

Some sample use-cases include:

  • Matrix type
  • splitting out associative-array-as-set / array-as-vector into separate types
  • possibly for record wrappers in delete-free story (issue #5096)
  • coercing from strideable=false to strideable=true ranges

Design questions

updated with conclusions from discussion below:

  1. Should coercions be handled by something indicating if coercion is possible, and then a call to the cast function if so?
  • Yes

TODO based upon discussion below

  1. Should the cast be declared as a method or as a function? Or as a type method?
  • Type method
    • Function-approach allows covering multiple source types without syntax changes
    • Function-approach provides symmetry in how the coercible types are specified.
    • Method-approach does not pollute the global namespace
  2. What are the exact names of the 'can coerce' and 'cast' functions/methods?
  • isCoercible and cast

TODO based upon discussion below

  1. Can user-defined coercions be applied to method receivers?
  • Yes
  2. Can user-defined coercions returning a ref apply to ref arguments?
  • Yes
  3. Can user-defined coercions returning a value or const ref apply to in or const ref arguments?
  • Yes

Callback events in CHPL_COMM=gasnet not quite right

Found a couple of issues with the callback code:

  • chpl_comm_do_callbacks is called in some cases when the communication is local. For example in chpl_comm_put_nb.
  • chpl_comm_do_callbacks is called twice for a single call. Again in chpl_comm_do_callbacks, if !remote_in_segment, there is another call to chpl_comm_do_callbacks from within chpl_comm_put, which is of the put type. It's not clear what type the callback should be in this case, but having both doesn't seem right.

string quoting in record I/O

Summary of the proposed change:

Fields in records quote strings:

record R {
  var s:string;
}

Before this proposed change,

writeln(new R("hi"))

would print out

(hi)

After the proposed change, it will print out

("hi")

Rationale

I'd like to be able to write and then read an array/record/whatever and expect it to work, but there are problems if the object contains strings. I'm proposing a change to the I/O behavior to fix this problem.

How did we get here? If you do:

writeln("hello world\n");

you (naturally?) expect "hello world" to print without quotes:

hello world

but then if you were reading that input and you did this:

var s: string;
readln(s);

s would contain only "hello", because the default string reading rule is to read one word only. Even if that were not the case, we wouldn't know when we reached the end of the string since it is not delimited in any way. (and making writeln print the quotes by default interferes with things like writeln("a = ", a) ).

When we get to a record, I don't think these choices are reasonable any more.

For example, if I have

record R { var a: string; var b: string; }
writeln(new R("hello world", "second string"));

it will output

(a = hello world, b = second string)

and there's pretty much no way that the corresponding readln would work (what if the string contains commas or parentheses?). I believe that the current code will only be able to read 1-word strings in records...

So, I'd like to propose the following change in I/O behavior:

  • writeln(string) and readln(string) continue to work the way they do

  • write/writeln/read/readln of any aggregate (record/class/array/tuple/etc) will switch the string formatting to double-quoted strings that would work in Chapel source code, so the second record example above would now output

    (a = "hello world", b = "second string")

Support 'in' as an infix operator for membership test

Add in as an infix operator in a way similar to Python, supporting such cases:

if value in Range { }

if value in Domain { }

if value in Array { }

Maybe strings too (#11439):

if subString in String { }

This form of in could potentially overload the D.contains() method.

This grammar must be implemented carefully such that:

for i in D

is not parsed as:

for D.member(i)
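In other words, the membership form could be sugar for today's explicit method call (a sketch; `contains` is the current spelling of what the issue calls `member` on ranges and domains):

```chapel
var r = 1..10;
var D = {1..10};

// Today's spelling:
if r.contains(5) then writeln("in range");
if D.contains(5) then writeln("in domain");

// Proposed sugar (not valid Chapel at the time of this issue):
// if 5 in r { ... }
// if 5 in D { ... }
```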

record list does not include a record destructor or = overload

The List standard module includes a record list type. One would expect this type to include a destructor that frees the list nodes.

This is not currently the case because of the use of record list in (array, domain, distribution) memory management. However recent changes to array memory management might mean that adding a destructor and = overload is now possible for record list.

Remove redundancy in defining back-end compiler names/properties

Historically, all that was needed to add a new back-end compiler was to add a file:

make/compiler/Makefile.foo

and to set:

CHPL_TARGET_COMPILER=foo

Unfortunately, over time properties like compiler names and version flags have crept into the script util/chplenv/compiler_utils.py which introduces redundancy (e.g., the mapping from 'gnu' to 'gcc' now occurs both here and in the Makefile) and results in an additional file that needs to be modified to introduce a new compiler. Failure to do so results in a lot of warnings during compilation when taking the original approach.

This issue asks whether the redundancy between the Makefiles and the compiler_utils.py script can be removed through refactoring, with the goals of (1) making it easier to add new compilers and (2) reducing the number of modifications required to use a compiler of a different name (say if one's 'gcc' binary was named something else).

This was specifically motivated by attempts on my part to use a different gcc and/or modify how gcc's version was queried by changing Makefile.gnu only to find that it didn't affect things as I'd expected because util/chplenv/compiler_utils.py contained parallel logic that also needed changing.

cannot use begin in an overridden method

The following program core dumps on master now:

class Parent {
  proc runTask() {
  }
}

class Child : Parent {
  proc runTask() {
    begin {
      writeln("IN TASK");
    }
  }
}


proc run() {
  sync {
    var obj:Parent = new Child();
    obj.runTask();
  }
}

run();

The problem is that the virtual-method call obj.runTask is passed 1 argument:

((void(*)(Parent_chpl))chpl_vmtable[((INT64(5) *
_virtual_method_tmp__chpl) + INT64(1))])(call_tmp_chpl4);

but the invoked function expects 2 arguments:

static void runTask_chpl2(Child_chpl this_chpl,
chpl___EndCount_atomic_int64_int64_t _endCount_chpl)

I think we need to investigate adding the endCount to task-local storage, since it is not generally practical to modify all virtual children.

comm=ugni internal error (transfer too large?)

Bug Report

Summary of Problem

An internal user (Chapel team member) reports an internal error in a system library called from the ugni comm layer, in a program that attempts to transfer 10**30 ints between top-level locales in a single operation. The root cause has not yet been diagnosed, but it wouldn't be at all surprising if this large transfer needed to be broken up into smaller chunks for the network to be happy.

Steps to Reproduce

Source Code:

Use the existing Chapel ISx test with large enough parameters, on a Cray XC with CHPL_COMM=ugni and a craype-hugepages module loaded. The heap has to be on hugepages because if it is not, there is already code to do the transfer using bounce buffers, breaking it up as needed, and the bounce buffers are much smaller (megabytes) and already known to work.

Reuse of strided put/get code from CHPL_COMM=none

It'd be nice to be able to reuse the CHPL_COMM=none code that does the local strided puts and gets in other comm layers when the put/get is found to be local (or even some of the logic for the remote case).

Currently the gasnet comm layer calls gasnet_[gets|puts]_bulk in all cases. It's possible the library does the right thing here, but the comm statistics are not consistent with the way they are reported in the other functions.

Implement an error handling model

DEPRECATED: see #8626

current tasks

awaiting implementation

  • standard library
    • investigate/remove try! and halt() from module code (#8452)
    • deprecate out error= functions
      • dependent on defer/deinit strategy, error memory management to store errors in vars
      • strategy:
        • rename and privatize them
        • change internal calls to the new name
        • make a wrapper with the old name, calling the privatized version
        • insert a compiler warning into the wrapper
      • update #7281
    • try! this.lock(); -> this.lock(); defer this.unlock();
  • add error handling to mason
  • throwing from non-inlined iterators (#7134)
    • MF note: fixing this will require possibly significant development effort (at least 1 day, maybe a week, maybe worse.) Also, one of my recent PRs added it-throws-inline.chpl and it-throws-noinline.chpl as tests.
    • ItemReader.these and channel.matches should be throws (turned off to work around failures in --baseline)
    • potential cause for memory leaks in the following tests:
      • errhandling/parallel/forall-range-throws
      • errhandling/parallel/forall-throws-subtasks
      • errhandling/parallel/forall-throws
  • incorrect line number from uncaught error due to inlining (#7297)
  • ensure catch clause lists go from specific to general
  • initializer error handling (#6145)
    • should be possible to clean up partially initialized state in phase 1
    • could either call deinit or manually clean up initialized state in phase 2

pending design

  • runtime errors
    • working conclusion: orthogonal to system resilience, waiting for use cases
  • make writeThis() a throwing method?
    • MF: More difficult than initially thought, I/O code uses deferred syserr, would need defer.
  • error memory management (#6428)
    • future: utilize Owned/Shared
    • note: Owned is a useful pattern for class cleanup on error
  • should errors interact with stack tracing?
    • idea: halt raises an uncatchable error, which stores the stack trace

testing and bugs

  • add Curl testing (#7570)
  • investigate failures on pgi (#5331)

documentation

  • chpldoc and prototype modules (#7489)
  • update doc/rst/developer/bestPractices/ErrorWarningMessaging.txt
  • reflect changes in CHIP 8
  • language spec changes

Overview

The goal of this effort is described in CHIP 8.

completed tasks

AST, syntax

  • add try statements to the AST (#4918)
  • set up the parser to handle try { } syntax (#4933, #4940)
  • codegen the block statement within try (#4975)
  • add throws to function declaration AST, syntax (#5007, #5011)
  • add try! support to try AST, syntax (#5038)
  • add throw to function body AST, syntax (#5046)
  • add catch block to AST, syntax (#5655)

lower error handling AST

  • add the pass (#5084 first draft, #5106 no-op)
    • after functionResolution and before callDestructors
  • inserting out formals into functions that throw (#5240)
  • halting on error for try! (#5240)
  • propagating errors for try (#5240)
  • handling errors with catch (#5655)
  • automatically deallocate Error objects that are consumed by catch (#5718)
  • nested try statements (#5718)
  • enclose catch filters with parentheses (#5721)

Additional v1 work

  • create tests
  • update CHIP to reflect auto-propagation in Default mode (#5012)
  • 4 motivating examples as .future tests (#5012)
  • modify existing Error module - rename it to SysError (#5162)
  • create the default/strict mode compiler flag (#5655)
    • Note: we will need a strategy that allows for gradual adoption of the error handling strategy in the standard/internal/package modules.

More completed

  • syntax highlighting for vim and highlight (#6277)
  • additional tests
    • exercise default and strict mode
    • other variables in scope (e.g. arrays) are freed appropriately (#5293)
  • print the Error type and msg if the error causes a halt
  • fix throws for generic function signatures (#6265)
  • use it in I/O library (#6963)
  • defer construct MF
  • properly direct errors from nested calls (#6607)
    • var x = mayThrowError(); doSomething(mayThrowError())
  • generic Error classes
  • syntax highlighting for emacs, ...
  • use init for Error (#7988)
  • exclude throwing out of defer, deinit (#8161)
    • such throws would be ubiquitous with no clear handling context

parallelism and multilocale

  • error handling works within task constructs
  • errors work across on statements MF
  • AsyncErrors or other construct exists to encapsulate multiple errors
  • errors propagate out of tasks in begin
  • errors propagate out of tasks in cobegin/coforall
  • errors propagate out of tasks in forall

design questions

  • fine-grained selection of error modes (#7055)
    • may fall back to the existing pragma based work
    • prototype module
  • throw nil should result in a NilThrownError (#7317)
    • PS
    • well known function calling into module, checks if nil before assigning to error variable
    • model on delete error
  • Should we divide SystemError into sub-classes for different kinds of errors? (#6297, #7218)
    • PS (start with FileNotFound factory)
    • yes, do it like Python does, e.g. FileNotFoundError is a subclass of SystemError with ENOENT.
  • unhandled error messages (#7013, #7119)
    • MF (name of error, location of creation)
    • use a mini-stacktrace format
 uncaught SystemError: No such file or directory in open with path "_test_cannotOpenMe.txt"
   myopen() at unableToOpenFile.chpl:1 (error created)
   main() at unableToOpenFile.chpl:5 (error not caught)
  • forall error type consistency (#7046)
    • PS
    • All ErrorGroups can only have 1 level underneath them
    • Errors from within a forall are always reported in an ErrorGroup
      • put inside lowerIterators?
  • error handling and sync blocks (#6853)
    • sync blocks throw, but some want more discussion
  • ErrorGroup or something else ? (#6854)
  • relax try, throws, try! (#6566)
    • continue to have relaxed try!
  • Should OOB access throw? (#6629)
    • no, leave it alone

high priority

  • fix bugs
    • unhandled stderr.writeln() in default mode catch block #7130 (#7269)
    • Error leaks with preemptive throws (#7331)
  • error handling primer
  • add tests
    • PS
    • run -memleaks on error handling test directory
    • throwing from iterators - add tests
  • single line assignments with throwing calls don't parse
    • MF
    • var x = try! mayThrowError();
    • return try g(); a-la Swift
  • fix error handling with dynamic dispatch (#6315) - also error if you try to override with different throws-ness
    • MF
  • Migrate more standard modules to new error handling (look for out error or error=)
    • PS
    • Spawn (#7166)
    • HDFS (#7176)
    • Regexp (#7153)
    • FileSystem (#7174)
    • Buffers (#7177)
    • Path (#7177)
    • BigInt: only in bigint constructor, wait for initializer story
  • module code TODOs
    • PS
    • adjust I/O tests with error= to use new mechanism (#7108)

declaration lists break 'config type'

Bug Report

Summary of Problem

It seems that config type statements that define multiple symbols don't work. See test/types/config/twoConfigsOneDecl.chpl for an example.

Steps to Reproduce

Source Code:

config type randType = uint(32),
            char = int(8);

Compile command:
chpl twoConfigsOneDecl.chpl

Execution command:
N/A (gets internal error)

Configuration Information

  • Output of chpl --version: 1.14.0.7f81ddd
  • Output of $CHPL_HOME/util/printchplenv --anonymize: N/A
  • Back-end compiler and version, e.g. gcc --version or clang --version: N/A
  • (For Cray systems only) Output of module list: N/A

Add a standard iterator that chunks a range in a round-robin fashion

An idiom that seems reasonably familiar to me (and came up recently for me in test/release/examples/benchmarks/shootout/fasta.chpl) is chunking a range up by a given chunk size in a block-cyclic manner.

For example, fasta contains:

for i in tid*chunkSize .. n-1 by numTasks*chunkSize {
  const bytes = min(chunkSize, n-i);

where tid ranges from 0 to numTasks. The 'bytes' variable essentially asks "how large is my chunk". The loop itself is somewhat ugly looking. Could/should we come up with a better interface (as part of the RangeChunking module, e.g.) that would permit something more like:

for chunk in greatIteratorName(0..#n, chunkSize, tid, numTasks) {
  const bytes = chunk.size;

where the yielded 'chunk' is a range? ('greatIteratorName' is a placeholder because I don't have a great name for this in mind right now).
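One possible shape for such an iterator, as a plain-Chapel sketch (`roundRobinChunks` is a placeholder name, per the issue; any eventual library version would presumably live in the range-chunking module):

```chapel
// Yield chunkSize-sized sub-ranges of r in a round-robin (block-cyclic)
// fashion: start at the tid-th chunk and stride by numTasks chunks.
iter roundRobinChunks(r: range, chunkSize: int,
                      tid: int, numTasks: int) {
  var lo = r.low + tid*chunkSize;
  while lo <= r.high {
    yield lo..min(lo + chunkSize - 1, r.high);
    lo += numTasks*chunkSize;
  }
}

// e.g., task 0 of 4, chunk size 10, over 0..99:
for chunk in roundRobinChunks(0..#100, 10, 0, 4) do
  writeln(chunk, " size = ", chunk.size);
```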

Stop .notest-ing multilocale/lydia/checkNumTasks

multilocale/lydia/checkNumTasks is currently racey, so I .notest'ed it with #5029

It basically does:

coforall loc in Locales do on loc {
  num.add(here.runningTasks());
}

which is transformed into something like (some pseudocode + manually inlining functions):

endCount = endCountAlloc()
for loc in Locales {
  ...
  // runtime call to startMovedTask will bump here.runningTasks
  spawn_task_to_loc(loc, num.add(here.runningTasks()))
}

/* snippet of inlined waitEndCount below: */

// Remove the task that will just be waiting/yielding in the following
// waitFor() from the running task count to let others do "real" work.
here.runningTaskCntSub(1); 

endCount.waitFor(0);

// re-add the task that was waiting for others to finish
here.runningTaskCntAdd(1);

Technically there will be 2 tasks running on locale 0, but we lie and decrement the runningTasks counter since the spawning task will just be waiting/yielding and not doing any real work or using up many cycles if there are other tasks running.

However, the original task on locale 0 and the task spawned to locale 0 (which does the here.runningTasks() query) can run in either order, so the call to here.runningTasks() is racy.

Applying #5018 to coforall+on constructs will resolve this race, because we'll adjust the runningTasks counter before creating any tasks. Until that's in I'm just .notest'ing this test to avoid noise in nightly testing.

non-inlined iterators can leak memory

Non-inlined iterators leak memory if the iteration is broken out of.

See

test/types/records/ferguson/leak-futures/iterate-*.chpl

Inlined and non-inlined iterators generate pretty different code so the potential solutions are:

  1. use a different approach for freeing captured iterator variables in the inlined vs non-inlined iterator cases. Here, for non-inlined iterators, we could default-initialize the iterator class fields, use the = overload to update them, and finally call destructors on them in freeIterator.

  2. use a different strategy that works with the advance() function usually freeing the memory. For example, we could generate several code blocks in freeIterators that frees only the variables that we know were created by that point in the iterator's progress (based on the iterator variable tracking the iteration).

I think this leak also exists for inlined iterators, and so (2) is probably the way to solve it.

user --help should override built-in

Feature Request

Summary of Problem

It seems to me that defining a config named 'help' should take precedence in argument parsing over the built-in '--help' flag that all Chapel executables support. E.g., for the example below, I'd expect printUsage() to be invoked when running with --help, but instead the compiler-provided help message is printed.

Steps to Reproduce

Source Code:

This is taken from test/execflags/bradc/configHelp.chpl:

config const help = false;
if help then printUsage();

proc printUsage() {
  writeln("This is my custom usage message!");
  exit(0);
}

Compile command:

chpl configHelp.chpl

Execution command:

./a.out --help

Configuration Information

  • Output of chpl --version: 1.14.0
  • Output of $CHPL_HOME/util/printchplenv --anonymize: N/A
  • Back-end compiler and version, e.g. gcc --version or clang --version: N/A
  • (For Cray systems only) Output of module list: N/A

Support initialization of atomic variables

At present, atomic variables can't be initialized:

var myAtomic: atomic int = 42;

The historical reason for this is that we haven't distinguished initialization from assignment (though that's changing!) and don't permit direct assignment to atomics. This idiom comes up reasonably frequently and is painful to code around, so it'd be really nice to enable it as soon as we can.

(Note that some have argued that we should also support direct assignment to / reads from atomics. I'm not advocating for that here, and would like to keep that potential issue separate from this one. I think the initialization case is more of a no-brainer because it's obvious that there could be no race-y reads/writes in the initialization context).
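Until initialization is supported, the usual workaround is a separate declaration followed by an explicit write:

```chapel
// Desired (doesn't compile at the time of this issue):
// var myAtomic: atomic int = 42;

// Workaround today:
var myAtomic: atomic int;
myAtomic.write(42);
writeln(myAtomic.read());  // 42
```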

[Jira-199] Parallel Graph Library

[Jira-199]

Directly applying existing data-parallel tools to graph computation tasks can be cumbersome and inefficient. Hence, it would be valuable to have a graph library in Chapel that works on both shared- and distributed-memory systems, providing both flexibility and performance to the user of Chapel.

Here are a few references:
1. https://amplab.cs.berkeley.edu/wp-content/uploads/2014/09/graphx.pdf
2. http://www.osl.iu.edu/publications/prints/2005/Gregor:POOSC:2005.pdf

[CHAPEL-199] created by rohanbadlani

Using classes without needing to delete

It would be nice to be able to write Chapel programs that use classes without ever needing to manually write delete statements.

The existing automatic memory management for arrays, domains, and distributions offers some evidence that this idea is possible for arbitrary user-defined classes.

For inspiration, look to C++ shared_ptr and unique_ptr and the Box type in Rust. We should expect that users will be able to get bare pointers out of these types. (Note it's important to be able to call methods or functions working with the bare pointer type somehow - possibly by explicitly getting the bare pointer - because otherwise one wouldn't be able to do much with the type managing the memory).

Initial Design direction:

  • equivalents to C++ shared_ptr and unique_ptr are implemented as records containing a field of class type that points to the managed memory.
    • these are implemented as Shared and Owned in PR #5683.
  • these records will use existing (but not yet specified) language features of copy-initializer, assignment operator, record destructor to manage the memory
  • these records need to be able to forward method invocations to the class
    • PR #5683 uses PR #5058 to forward the method invocations
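A sketch of the intended usage, based on the Owned record from PR #5683 (the module name and API details are assumptions from that era and may have changed since):

```chapel
use OwnedObject;  // module name assumed from the era of PR #5683

class C {
  proc hello() { writeln("hello"); }
}

{
  var o = new Owned(new C());
  o.hello();   // method call forwarded to the managed class instance
}              // the C instance is deleted automatically when `o`
               // goes out of scope -- no manual `delete` needed
```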

Open Design Questions:

  • If class A inherits from class B, A is a subtype of B. Should e.g. Shared(A) be a subtype of Shared(B) ?
  • Should a variable of type Shared(A) be passable to a function expecting a bare A? (or, would the code making such a call need to use a method to get the bare pointer out of Shared(A) ?)
  • Should operators on A be available for Shared(A) ?
  • Should methods on A be available for Shared(A) ?
  • Should fields in A be available for Shared(A) ?
  • Should the compiler check that the contained pointer is never used in the case that an Owned/Shared is empty?
  • Should the compiler include checking for use-after-free (like a "borrow checker"?)
  • For Owned, should initialization of one Owned from another transfer ownership?
  • For Owned, should assignment of one Owned from another transfer ownership?
  • For Owned, should a copy implicitly added by the compiler transfer ownership?

Language Strategy Questions:

  • What language features do we need to make this feature satisfying? The features might include:
    • user-defined coercions
    • direct support for method forwarding
    • move function in library akin to C++ std::move
    • more direct compiler support
    • ability to specify operators as methods

Argument parsing errors with start_test

Bug Report

Summary of Problem

I found a few issues with argument parsing:

  • The --compopts and --execopts flags do not work when using = (a space must be used). It appears as though the first character is truncated.
  • Using shortened versions of the above flags, e.g., --compo or --execo, with a space and an argument, results in a start_test usage: error:
start_test: error: argument -compopts/--compopts: expected one argument
  • Certain styles of invalid flags result in the flags being interpreted as test names (e.g., --compt="--fast --static" does so, but not --compt="--fast" (which correctly returns the usage message); --numx="2 x" does, but not --numx=2)

Steps to Reproduce

Source Code:

N/A

Compile command:

N/A

Execution command:

N/A

Configuration Information

  • Output of chpl --version:
    chpl Version 1.14.0.a10da81
  • Output of $CHPL_HOME/util/printchplenv --anonymize:
    N/A
  • Back-end compiler and version, e.g. gcc --version or clang --version:
    N/A
  • (For Cray systems only) Output of module list:
    N/A

printchplenv should infer CHPL_HOME similar to chpl binary

Summary of Problem

The chpl binary can infer the path of CHPL_HOME if it is not set based on a variety of checks defined in compiler/driver.cpp. printchplenv does not have the equivalent checks, and therefore can lead to disagreement between the two, when CHPL_HOME is being inferred.

Proposed Solution

printchplenv should use the same rules that the compiler uses to infer CHPL_HOME when it is not set.

Example

unset CHPL_HOME

# CHPL_HOME is inferred using the rules defined in compiler/driver.cpp:
>chpl --print-chpl-home
/path/to/CHPL_HOME /path/to/chpl


# CHPL_HOME is empty, and confusingly labeled with a '*'
# which is intended to mean the default value has been overridden (which it hasn't in this case):
>printchplenv --anonymize
machine info: ...
CHPL_HOME:  *
script location: /path/to/CHPL_HOME/util/
CHPL_TARGET_PLATFORM: darwin
...

CI awareness of test changes

Many of our failures in nightly testing are due to minor mistakes in adding new tests or modifying existing tests. At present, our CI runs only test a few things, just to make sure the world isn't completely broken. In the event that tests themselves are changed in a given PR, it would be nice if the CI would run those tests or perhaps the directories that contain them. While this would not provide airtight testing coverage (and realistically, little will), it would help by catching low-hanging fruit that did not show up in the developer's environment for some reason (e.g., developer changed a correctness test but forgot to check the corresponding performance test mode; developer failed to commit a new file that the test depended on; etc.)

Check annotations file as part of the smoke test

Check the ANNOTATIONS.yaml file as part of the smoke test.

We often get the syntax wrong as seen in #4828, #3911, #4997, and many others.

Should be relatively simple to do: I think we just need to add a list of valid configs to annotate.py and then have a self-check mode that reads through all the annotations and validates that they're formatted correctly.

Bonus points for going through the PR numbers and making sure that we annotated the dates correctly.

Note that this would require that the smoke test has PyYAML available, so to start we could just make this a developer only tool that perf team members would run.

Have testing mails report relative failures

The concept here is to have the mails from nightly cron jobs report failures relative to other configurations in order to reduce the overall amount of redundant mail. For example, if someone checks in an inherently broken test that is run in every configuration, we currently receive a mail per configuration reporting that information which results in more noise than value. Instead, it would be nice to establish a shallow hierarchy of test configurations such that if a test broke in a parent configuration, it would not be re-reported in the child configuration. For instance, say that a developer checks in an inherently broken test. Since it would fail on linux64 (which I propose should serve as our "root" configuration), we would not need to re-report it on linux32 or darwin or --no-local or ... Similarly, a failure on --no-local testing should not be re-reported on gasnet, and so on.

The main challenge here would likely be the question of scheduling the test configurations such that a late-arriving configuration didn't hold up the summaries of earlier configurations unnecessarily. For instance, I believe that linux64 testing is currently completing quite late in the day which would be a bad property of the root configuration.
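
To illustrate the idea, here is a Python sketch of the reporting logic, with a hypothetical parent/child mapping among configurations (linux64 as the proposed root):

```python
# Hypothetical parent/child relationships between test configurations;
# linux64 is the proposed root. A real deployment would derive this from
# the actual nightly configuration list.
PARENT = {
    "linux64": None,
    "linux32": "linux64",
    "darwin": "linux64",
    "no-local": "linux64",
    "gasnet": "no-local",
}

def new_failures(config, failures_by_config):
    """Return failures in `config` not already reported by an ancestor."""
    reported = set()
    parent = PARENT.get(config)
    while parent is not None:
        reported |= failures_by_config.get(parent, set())
        parent = PARENT.get(parent)
    return failures_by_config.get(config, set()) - reported
```

With this scheme, an inherently broken test that fails everywhere is mailed once (for linux64) rather than once per configuration.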

Chpldoc documentation of enum symbols

Bug Report

Summary of Problem

  • Today, chpldoc will only generate a single documentation comment for an enum. There is no way to create documentation for individual constants within the enum, which would be natural.

  • Additionally, if you try to document the first constant, it will overwrite the documentation for the
    enum as a whole. See this future.

Steps to Reproduce

Source Code:

/* enum documentation */
enum Color {
  /* first constant doc */
  Red,
  /* second constant doc */
  Yellow
};

should perhaps yield something like this in plaintext mode (.rst mode not included):

enum Color

 enum documentation

 Red
    first constant doc

 Yellow
    second constant doc

Compile command:

chpldoc docConstants.doc.chpl

Configuration Information

This impacts all known versions of Chapel that support chpldoc and enums.

[Jira-196] Support for postfix and prefix increment and decrement operators.

[Jira-196]

Programmers may often want the flexibility of using postfix/prefix increment and decrement operators.

If the compiler supports these operators, the update can be done in a single instruction, improving performance.

Example: After fixing this, the following should be supportable:

var x: int = 5;
writeln(x++);
writeln(x--);
writeln(++x);
writeln(--x);

Output:
5
6
6
5

A sample mergeSort() code is also attached with this issue, illustrating why sometimes it may be easier for the programmer to use the postfix/prefix operators.

[CHAPEL-196] created by rohanbadlani

Improve spec description of 'this'

This task involves completing PR #3125 which attempted to improve the description of 'this' methods (supporting direct access/indexing) in the spec. While the PR received decent support on GitHub, in email @noakesmichael expressed reservations (see below) which I never addressed. This task is meant to capture this TODO in order to free up a long-open PR.

Mike's feedback was as follows:

  1. Section 10.6 (Indexing Expressions) creates a relationship between “indexing” and the this() method. I thought this might be a good anchor point for understanding the index method.

Unfortunately I don’t see how Indexing Expressions are defined. For example, it’s not a sub-production of expression.

Section 10.10 LValue Expressions suggests a relationship between “Indexing expressions” and parenthesized-expression, but I don’t think that’s really intended. It’s not something that follows naturally from 10.0 and 10.4, and it wouldn’t support the use of square brackets.

  2. Several sections talk about “indexing” with a nod to array indexing, even though array indexing isn’t defined until 20.3.

20.3 speaks specifically about “a reference”, which suggests that this() for arrays must return a ref. But of course that does not require all index() functions to return a ref.

So there isn’t much in the spec that makes clear what the constraints on index() are.

The relationship between the index method and the syntax for call-expression is stated but, as far as we can tell, it’s a little oblique. I don’t think this can be readily solved by a few extra words in 15.6 and 16.7.

rename domain.member?

The current method name for querying if a value is in a domain is domain.member().

E.g.

var D: domain(int);
D += 4;

assert(D.member(4));
assert(!D.member(3));

This method name bothers me for English/Programming Style reasons. member is a noun and I really want the method name to be a verb in this case.

What should we do?

  1. add domain.contains()
    a) and deprecate domain.member()
    b) and keep domain.member() as a synonym
  2. enable parser/build to translate x in someDomain into this query (#5034)
    a) and deprecate domain.member()
    b) and keep domain.member() as a synonym
  3. do nothing

Investigate apparent loss of FFT performance

As shown on the graph linked to below, on November 26-27 2016, 16-node HPCC FFT performance got significantly better in the ugni+qthreads configuration. Then on December 7th, the allocation of args on the stack rather than the heap set these timings back. Some of the performance was recovered on December 14th, but it looks like it did not fully recover.

This issue is meant to capture a desire to understand whether there are further improvements that could/should be made to recover the previous performance. In this morning's performance meeting, it was suggested that the right time to investigate this might be after Qthreads is using Chapel's memory management for its stacks.

http://chapel.sourceforge.net/perf/16-node-xc/?startdate=2016/10/31&enddate=2016/12/15&graphs=hpccfftperfgflopsn220

Improving LLVM Debug Information

PR #4342 enabled initial generation of debug information with --llvm. However, some issues remain:

  • remove 'type stubs' workaround
  • generate debug symbols for local variables
  • figure out a way to associate a Chapel AST id with LLVM IR for compiler debugging
  • get variable names to be preserved from earlier parts of compilation

awkward to use readf to read a hash value

A hash value (such as SHA-1) is normally written in hexadecimal and can be stored in several integers. However, it is hard to write a call to `readf` to read such a long hexadecimal value into a group of integers. It would help if there were a way to specify, in readf, the maximum field width when reading an integer.

See test/studies/dedup/dedup-distributed-ints.chpl
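
For comparison, here is the fixed-field-width read that the issue is asking readf to express, sketched in Python (the helper name is hypothetical): each 32-bit integer consumes exactly eight hex digits of a SHA-1 digest.

```python
def sha1_to_ints(hexdigest):
    """Split a 40-hex-character SHA-1 digest into five 32-bit integers.

    Each integer is read from a field of exactly eight hex digits --
    the maximum-field-width behavior the issue wants from readf.
    """
    assert len(hexdigest) == 40
    return [int(hexdigest[i:i + 8], 16) for i in range(0, 40, 8)]
```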

Setting a union field from the union's constructor leaves the union undefined.

Bug Report

Summary of Problem

Setting a union field from the union's constructor leaves the union undefined. Given a union named IntOrReal containing an int, i, and a real, r, constructing it like:

var ir0 = new IntOrReal(i=1);
var ir1 = new IntOrReal(r=2.0);

both leave the field values unset.

Steps to Reproduce

This behavior is captured in the future:
test/classes/diten/constructUnion.future

Configuration Information

chpl Version 1.14

Verify Chplvis output in testing

Summary of Problem

I believe all of our current chplvis tests only use the program output for the .good file, and do not look at the actual output files used by the chplvis GUI. While it's not practical at this time to automate testing for the GUI itself, we could probably do some kind of verification for the output files.

One possibility could be to count the number of GETs and PUTs, like our existing CommDiagnostics tests.
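
A possible shape for such a check, sketched in Python; the record format assumed here (first whitespace-separated token names the event) is hypothetical, and a real check would follow the documented chplvis file format:

```python
def count_comm_events(lines):
    """Count GET and PUT records in chplvis data-file lines.

    Assumes, hypothetically, that each record's first token names the
    event kind; the counts could then be compared against a .good file,
    as with existing CommDiagnostics tests.
    """
    counts = {"get": 0, "put": 0}
    for line in lines:
        tag = line.split(None, 1)[0].lower() if line.strip() else ""
        if tag in counts:
            counts[tag] += 1
    return counts
```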

avoid term 'lvalue' in error messages

Bug Report

Summary of Problem

The compiler currently uses the term 'lvalue' in its error messages. I think we should stop doing this.

Steps to Reproduce

Source Code:

From test/variables/constants/assignConstError.chpl added in PR #5067 :

const two = 2;
two = 3;

Compile command:
chpl assignConstError.chpl

Execution command:
N/A

Configuration Information

  • Output of chpl --version: 1.14.0.7d05050
  • Output of $CHPL_HOME/util/printchplenv --anonymize: N/A
  • Back-end compiler and version, e.g. gcc --version or clang --version: N/A
  • (For Cray systems only) Output of module list: N/A

Automatic tracking from issues to .futures and back

All .future tests should be linked with issues.

When a new .future is added, there should be an automatic check that it is linked to an issue. When a .future is removed, there should be an automatic check that it was also de-linked from the issue, possibly with the suggestion that the issue can be closed if there are no more .futures associated with it.

Ideally this would happen in the pre-merge travis checks for any PR that adds or removes a .future.
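
A sketch of the added-file half of the check in Python, assuming (hypothetically) that a .future links to an issue via a '#NNNN' reference somewhere in its text:

```python
import re

# Hypothetical linking convention: any "#NNNN" token counts as an issue link.
ISSUE_RE = re.compile(r"#\d+")

def futures_missing_links(added_futures):
    """Given a mapping of added .future paths to their contents, return
    the paths that reference no GitHub issue.

    A pre-merge CI check would fail the PR if this list is non-empty.
    """
    return [path for path, text in added_futures.items()
            if not ISSUE_RE.search(text)]
```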

Dealing with outdated version-independent documentation

Some of the version-dependent documentation contains version-independent information:

For example, with the switch over to GitHub issues, our old Bugs pages now contain outdated instructions, which will likely cause confusion for users who stumble upon them.

Solutions

Short term

  • Manually edit the html to redirect these pages to the 'master' version.
    • We may end up needing to do this regardless of the longer term solution we choose below.

Long term

There will likely be other instances of this problem in the future. Here are some ideas on how to prevent this in the future:

  • Create a mechanism (Sphinx extension?) to flag pages as being version-independent, so that they always auto-redirect to master, or get a `.. warning::` directive in the header containing a URL to the 'master' version.
  • Hoist pages like "Bugs" out of the versioned-documentation.

Improving the LLVM backend

Future Work on LLVM Performance:

  • Address the suggestions here: http://releases.llvm.org/4.0.0/docs/Frontend/PerformanceTips.html
  • loop vectorization hints for forall/vectorizeOnly loops. Potentially both llvm.loop.vectorize.enable and llvm.mem.parallel_loop_access are important. See codegenOrderIndependence in CForLoop::codegen(), which is only called for the C backend right now. PR #6533
  • improving type-based alias analysis metadata
    (use struct-path-aware TBAA)
    (TBAA generation may be turned off now; search for codegenMetadata)
  • indicating when a load is from a variable declared 'const'
    (see commit 6f0047a; this is tricky because during variable
    initialization, a const variable is mutable and could be read).
    Use LLVM invariant metadata. (#6706)
  • investigate enabling the Polly polyhedral optimizer
  • (for --llvm-wide-opt) change to struct wide pointers (vs. packed) (#7487)
  • try -mllvm -use-cfl-aa for clang with --llvm-wide-opt. (It didn't seem to help; see #5533).

Easily map chplvis events to generated C code

While investigating performance for Chapel benchmarks, I find that the line number and file are not quite high-resolution enough to find the offending GET or PUT in the generated C code. It's not uncommon for a series of GETs in the generated code for an iterator to share these two pieces of information. Even if gets and puts could be mapped to the internal line/file, I'd still want to be able to find the event in the case that the function was duplicated (say, for generics).

I'm proposing that the chplvis output also include a unique identifier for each generated GET and PUT. I expect this to be simple, and that it would bump the chplvis version from 1.2 to 1.3. Other events may benefit from this kind of identifier, but I do not plan on implementing that functionality in this initial effort.

I plan on appending this unique identifier to the end of the data line for the various kinds of gets and puts (normal, strided, nonblocking). The runtime comm layers and chplvis parser will need to be updated accordingly.

For the chplvis GUI, I would expect it to ignore the unique ID.

[Jira-197] Support for conditional operators.

[Jira-197]

Readability is improved in situations where some simple conditions need to be tested if the conditional operator is available.

Example:
proc max(a: int, b: int): int {
  return (a > b) ? a : b;
}

Also, defining constants becomes possible with conditional expressions, because the value of a const has to be assigned within a single statement.

Example:
const max_num:int = (a > b) ? a : b;

The conditional operator may even help the programmer express null checks for object references concisely:

var obj: MyClass = (old_obj == null) ? null : old_obj.attribute1;

Support for this in Chapel will help programmers write more readable, compact code.

[CHAPEL-197] created by rohanbadlani

when should an array be resized when it is read?

I've found it useful to allow some Chapel arrays to be read without knowing their size in advance. In particular, a non-strided 1-D Chapel array that has sole ownership over its domain could be read into in a way that resizes the array to match the data read. At one point, I prototyped this for JSON and Chapel-style textual array formats (e.g. [1,2,3,4]).

Here is an example of the code I'm interested in supporting:

var A:[1..0] int;
mychannel.read(A);

could read any number of elements into A and adjust its domain accordingly. The alternative is that the code above only reads zero-length arrays.

This case is particularly relevant because one might imagine a record that stores a variable-length array. Can the default readThis operation provided by the compiler do something reasonable?

record ContainingArray {
   var A: [1..0] int;
}
var r:ContainingArray;
mychannel.read(r);

Or, is it necessary for authors of such records to implement a custom readWriteThis method if they wanted I/O to work in a reasonable manner?

There are three key questions:

  1. Does changing the size of a read array when possible seem like the right idea? Or should reading an array always insist that the input has the same size as the existing array (which I believe is behavior that matches the rest of the language for arrays that share domains...)

  2. Should any-dimensional rectangular arrays be written in binary in a form that encodes the size of each dimension? (In other words, write the domain first?). Such a feature would make something like (1) possible for multi-dimensional arrays but might not match what people expect for binary array formats. (I don't think we've documented what you actually get when writing an array in binary yet...)

  3. Any suggestions for a Chapel array literal format for multi-dimensional arrays? How would you write such arrays in JSON (and would anyone want to)? At one point there was a proposal to put the domain in array literals, like this:

var A = [ over {1..10} ];

but that doesn't really answer how to write multidimensional array literals. One approach would be to store the array elements in a flat way and just reshape them while reading; e.g.

var A = [ over {1..2, 1..3}
          11, 12, 13,
          21, 22, 23 ];

where the spacing would not be significant.

If we had a reasonable format, we could extend support like (1) to any-dimensional arrays that do not share domains, even for some textual formats.
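
The flat-plus-reshape idea in (3) can be sketched in Python; the helper and its calling convention are hypothetical stand-ins for whatever semantics the literal syntax would imply:

```python
def reshape(flat, dims):
    """Reshape a flat element list into nested lists per `dims`.

    Mirrors the proposed literal form above, where the elements of a
    {1..2, 1..3} array are stored flat and reshaped while reading.
    """
    if len(dims) == 1:
        return list(flat)
    inner = len(flat) // dims[0]
    return [reshape(flat[i * inner:(i + 1) * inner], dims[1:])
            for i in range(dims[0])]
```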

futures in Chapel

This issue is for tracking the feature request for 'futures' in Chapel.

Some work towards adding futures was completed around 2013. See slides summarizing that work here:

http://chapel.cray.com/presentations/SC13/03-futures-imam.pdf

Here is an email thread about that work:

https://sourceforge.net/p/chapel/mailman/message/30815892/

Here is the branch containing that work:

http://svn.code.sf.net/p/chapel/code/branches/collaborations/futures

(and here is a link to some earlier versions of prototype code: https://sourceforge.net/p/chapel/code/21386/tree/branches/collaborations/futures/test/release/examples/futures/ )

Here is a discussion on chapel-users about retiring 'single' and replacing it with 'future'.
https://sourceforge.net/p/chapel/mailman/chapel-users/thread/alpine.LNX.2.00.1309131418050.327%40bradc-lnx.us.cray.com/#msg31409942

Here are the tasks that were outlined to complete 'futures' support in the language:

  • make sure the old proposal / strategy is still reasonable and reasonably well supported
  • add future types and implement begin expressions (ie compiler support for begin expressions) e.g. const f: future int = begin computeSomeNumber(); f.read();
  • implement future as a library using sync variables
    • add automatic coercions from future(T) to T. Compiler adds type inference support and generates calls to read. E.g. const x = begin foo() + begin bar().
  • add some form of statement-block expressions. The goal here is to enable putting multiple statements inside of begin expressions. One possibility is to use a syntax like ({ a; b; c; }) which would enable examples like const x: future int = begin ({ var a = computeSomething(); a; }).
  • compiler/runtime optimizations. Avoid task creation for short-lived computations. Enable a data-driven task runtime to make use of futures information.
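
For readers unfamiliar with the begin/read pattern, the proposed semantics correspond roughly to this Python analog using concurrent.futures (the names are stand-ins, not Chapel API):

```python
from concurrent.futures import ThreadPoolExecutor

def compute_some_number():
    # stand-in for the issue's computeSomeNumber()
    return 42

# Analog of: const f: future int = begin computeSomeNumber();
with ThreadPoolExecutor(max_workers=1) as pool:
    f = pool.submit(compute_some_number)  # "begin" spawns the task
    result = f.result()                   # "f.read()" blocks until ready
    print(result)
```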

Rename 'param'?

'param' in Chapel is the keyword used to introduce a compile-time constant that can be computed and reasoned about by the compiler. Since the introduction of the keyword, there has been discomfort with the name, in part because it seems unintuitive and in part because of potential confusion with the notion of "function parameters" (which we very carefully try to always call "arguments" in Chapel to avoid confusion). I believe that the choice of 'param' was taken from Fortran, and since the introduction of the keyword, we've struggled to come up with a worthy replacement, yet have failed to ever find one. This issue is designed as a place to capture some of the best counter-proposals, as our ability to change the keyword gets smaller each year.

My preferred replacement keyword for 'param' would be 'value' since I believe that term means very much what our notion of 'param' means today -- a value (in this case, named) which the compiler can reason about and compute on. It's also a term that makes good sense in both the declaration and argument-passing contexts:

value numDims = 3;
proc foo(value rank, x, y) { ... }

At times, we've discussed using 'val' instead of 'value' as shorter, unambiguous, and symmetric with 'var'. Concern has arisen at times that 'val' and 'var' may be too close to each other, introducing the potential for simple mistakes, not to mention confusion for non-native English speakers who may have trouble distinguishing the L and R sounds. To that end, here I'm proposing 'value', which has the same number of characters as 'param' and is more visibly and audibly distinguishable from 'var'.

The main downside to this approach that I can see is the same downside as any change we might make: This is a feature that's been around for a long time and permeates our code. That said, I think that it remains somewhat regrettable and that if we don't change it now, we won't ever be able to.

The second downside gives me a bit more pause, and that's an argument that at times we've discussed wanting the compiler to not only have a notion of compile-time constants (what 'param' is today) but also compile-time variables that could be re-assigned over compilation. If we went down this path, it might suggest replacing 'param' with some sort of modifier on 'var' and 'const' like:

compiler const numDims = 3;
compiler var x = 0;

On the plus side, a prefix-based modifier like this has the benefit of more orthogonally capturing [compile-time vs. run-time] x [constant vs. variable]. But on the minus side, it seems less natural in the argument passing context. I.e.,

proc foo(compiler const rank, x, y) { ... }

To that end, I continue to think that 'value' as a sibling to 'const' and 'var' has value (ha ha) and that if/when we introduce a notion of a compile-time variable, we seek another term for it.

(I'll also note that some on the team might argue that compile-time vs. run-time are things that are regrettable to emphasize in a language more than necessary, and while I typically am not strongly in that camp, I do think that 'value' has an attractively more context-free meaning than 'compiler const' while also being a much more self-defining term than 'param').

Atomic operations should disable RVF

Bug Report

Summary of Problem

Atomic operations represent a memory fence and should disable the remoteValueForwarding optimization, but do not currently do so.

Steps to Reproduce

An example of this incorrect behavior has been captured in the following future:

test/optimizations/remoteValueForwarding/bharshbarg/atomicsDisableRVF.chpl

This program uses the CommDiagnostics module to track the number of GETs. By running this program with multiple locales you can see that there are zero GETs, where there should be at least one.

Configuration Information

CHPL_COMM != none
Present in v1.14, and likely many previous versions.

CHPL_UNWIND=libunwind nightly testing

CHPL_UNWIND=libunwind should be tested nightly as a prerequisite for turning it on-by-default in some configurations.

Note that util/test/prediff-for-stacktrace is supposed to help with the various forms of mismatch between a .good file and a .chpl file, in case one has a stacktrace and the other does not.

Also note that stack traces might include call sites on linux64, but generally won't include call sites on Mac OS X. And also note that stack traces do currently include line numbers from internal modules.

[Jira-198] Support for stack and queue data structures.

[Jira-198]

Stacks, queues, and priority queues are very popular data structures, and I feel that having them in a library is of paramount importance for any programming language.

For Chapel specifically, I believe we can even have concurrent versions of these data structures, providing thread-safe (or task-safe, if you will) access.

These data structures could be implemented using the List data structure in Chapel, but that would require accessing elements of a List by index, which is currently not supported by the List module. It would be great to have these data structures.

[CHAPEL-198] created by rohanbadlani

slurm-srun launcher should calculate CHPL_LAUNCHER_CORES_PER_LOCALE accurately

Chapel's slurm-srun launcher should be able to accurately calculate CHPL_LAUNCHER_CORES_PER_LOCALE internally (based on calls to Slurm sinfo), even if CHPL_LAUNCHER_CONSTRAINT is defined.

Ideally, similar Chapel launchers (slurm-gasnet-ibv) should be reviewed for the same problem.

At present, if CHPL_LAUNCHER_CONSTRAINT is used to select a subset of available compute nodes, but CHPL_LAUNCHER_CORES_PER_LOCALE is undefined, the slurm-srun launcher's internal calculation will likely be inaccurate because sinfo does not accept Slurm constraints as command line arguments.

The desired code change would rewrite lines 129-151 of chapel/runtime/src/launch/slurm-srun/launch-slurm-srun.c:

   113	// set just use sinfo to get the number of cpus. 
   114	static int getCoresPerLocale(void) {
   115	  int numCores = -1;
   116	  const int buflen = 1024;
   117	  char buf[buflen];
   118	  char partition_arg[128];
   119	  char* argv[8];
   120	  char* numCoresString = getenv("CHPL_LAUNCHER_CORES_PER_LOCALE");
   121	
   122	  if (numCoresString) {
   123	    numCores = atoi(numCoresString);
   124	    if (numCores > 0)
   125	      return numCores;
   126	    chpl_warning("CHPL_LAUNCHER_CORES_PER_LOCALE must be > 0.", 0, 0);
   127	  }
   128	
   129	  argv[0] = (char *)  "sinfo";        // use sinfo to get num cpus
   130	  argv[1] = (char *)  "--exact";      // get exact otherwise you get 16+, etc
   131	  argv[2] = (char *)  "--format=%c";  // format to get num cpu per node (%c)
   132	  argv[3] = (char *)  "--sort=+=#c";  // sort by num cpu (lower to higher)
   133	  argv[4] = (char *)  "--noheader";   // don't show header (hide "CPU" header)
   134	  argv[5] = (char *)  "--responding"; // only care about online nodes
   135	  argv[6] = NULL;
   136	  // Set the partition if it was specified
   137	  if (partition) {
   138	    sprintf(partition_arg, "--partition=%s", partition);
   139	    argv[6] = partition_arg;
   140	    argv[7] = NULL;
   141	  }
   142	
   143	  memset(buf, 0, buflen);
   144	  if (chpl_run_utility1K("sinfo", argv, buf, buflen) <= 0)
   145	    chpl_error("Error trying to determine number of cores per node", 0, 0);
   146	
   147	  if (sscanf(buf, "%d", &numCores) != 1)
   148	    chpl_error("unable to determine number of cores per locale; "
   149	               "please set CHPL_LAUNCHER_CORES_PER_LOCALE", 0, 0);
   150	
   151	  return numCores;
   152	}
  • Replace the existing call to sinfo, above, with an sinfo call like this:
    sinfo --exact --format="%c %f" --sort="+=#c" --noheader --responding

  • Instead of just reading the first line from the sinfo output, as above, the code change should

    • examine env variable CHPL_LAUNCHER_CONSTRAINT, which (in Slurm) can be a complex expression including Slurm Feature names, numerical counts, and logical operators
    • extract some or all of the Features contained in this expression
      • a workaround recently implemented in a Cray internal build script simply preserves any/all tokens matching the regex ^[A-Za-z][A-Za-z0-9_]*$ (assuming such tokens are all Slurm Feature names) and discards everything else.
      • That regex was a guess: I could not find any support for it in Slurm documentation.
      • The idea to just match on any Feature name seen in the CHPL_LAUNCHER_CONSTRAINT expression was just a guess. However, if a user ever did craft some complicated CHPL_LAUNCHER_CONSTRAINT expression that was incompatible with this simple scheme, that user could override the implementation by defining env var CHPL_LAUNCHER_CORES_PER_LOCALE, as we are doing now.
      • An ideal implementation would detect CHPL_LAUNCHER_CONSTRAINT expressions that were too-complicated to evaluate, and emit an error message.
    • select the first line from the sinfo output where the second token on the line (%f, Feature) matches any of the CHPL_LAUNCHER_CONSTRAINT Feature names
    • return the first token on that line (%c, CPUs) as the value of CPUs per node.
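
The selection logic described in the last two bullets might be sketched in Python like this (the parsing conventions, including the feature-token regex, carry the same caveats noted above):

```python
import re

# Workaround regex from this issue: treat identifier-like tokens in
# CHPL_LAUNCHER_CONSTRAINT as Slurm Feature names.
FEATURE_RE = re.compile(r"[A-Za-z][A-Za-z0-9_]*")

def cores_per_locale(sinfo_output, constraint):
    """Pick the CPUs-per-node count from `sinfo --format="%c %f"` output.

    Returns the %c value from the first line whose %f feature list shares
    a name with the constraint expression, or None if nothing matches.
    """
    wanted = set(FEATURE_RE.findall(constraint))
    for line in sinfo_output.splitlines():
        parts = line.split(None, 1)
        if len(parts) < 2:
            continue  # skip malformed lines
        cpus, features = parts
        if wanted & set(features.split(",")):
            return int(cpus)
    return None
```

The C implementation in launch-slurm-srun.c would do the equivalent string handling after running sinfo via chpl_run_utility1K.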
