plaidweb / pushl
Push notification adapter for feeds
License: MIT License
Instead of the silly `ThreadPoolExecutor` stuff, this would be a really good candidate for use of asyncio and aiohttp.
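The migration could follow the usual asyncio shape: one event loop, one shared session, and a semaphore for throttling. A minimal sketch of that pattern with a stand-in fetch (the real version would use an `aiohttp.ClientSession`; all names here are illustrative, not Pushl's actual API):

```python
import asyncio

async def fetch(url):
    # Stand-in for an aiohttp request; in practice this would be
    # `async with session.get(url) as resp: ...` on a shared session.
    await asyncio.sleep(0)
    return url, 200

async def process_all(urls, limit=10):
    # A semaphore bounds concurrency the way the thread pool's
    # worker count did.
    sem = asyncio.Semaphore(limit)

    async def bounded(url):
        async with sem:
            return await fetch(url)

    # gather() preserves input order in its results.
    return await asyncio.gather(*(bounded(u) for u in urls))

results = asyncio.run(process_all(["http://a.example/feed",
                                   "http://b.example/feed"]))
```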
`<img src>`, `<video src>`, etc. should also send a webmention. For media targets it should only be necessary to `HEAD`, rather than `GET`, the resource, and `<script src>` should probably be excluded. There is also no reason to look at `rel` on this.
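The per-tag decision could be a small lookup; this is a hypothetical helper (the tag sets and function name are not from Pushl):

```python
# Tags whose targets are media: existence check only, no endpoint discovery.
MEDIA_TAGS = {"img", "video", "audio", "source"}
# Tags that should never trigger a webmention at all.
SKIP_TAGS = {"script"}

def fetch_method(tag):
    """Return the HTTP method to probe a link target with, or None to skip."""
    if tag in SKIP_TAGS:
        return None
    if tag in MEDIA_TAGS:
        return "HEAD"
    return "GET"
```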
Pushl should be able to support private webmentions by supporting AutoAuth or some other bearer token mechanism.
Currently, pinging an entry by URL will only send the pings from the current canonical URL. Sometimes it's necessary to manually send pings from an old or non-canonical URL; for example, forcing a receiving website to update the canonical URL on an outdated ping (when the interim update failed for whatever reason, or the user hasn't been using the caching mechanism).
So, there should be a command-line option that makes `process_entry_mentions` send the pings from `url` instead of `entry.url`.
ERROR:urllib3.connection:Certificate did not match expected hostname: themindfulnessapp.com. Certificate: {'subject': ((('organizationalUnitName', 'Domain Control Validated'),), (('organizationalUnitName', 'EssentialSSL Wildcard'),), (('commonName', '*.binero.se'),)), 'issuer': ((('countryName', 'GB'),), (('stateOrProvinceName', 'Greater Manchester'),), (('localityName', 'Salford'),), (('organizationName', 'COMODO CA Limited'),), (('commonName', 'COMODO RSA Domain Validation Secure Server CA'),)), 'version': 3, 'serialNumber': '4AD42FAD2417190F22820224FF009436', 'notBefore': 'Apr 3 00:00:00 2018 GMT', 'notAfter': 'Apr 3 23:59:59 2019 GMT', 'subjectAltName': (('DNS', '*.binero.se'), ('DNS', 'binero.se')), 'OCSP': ('http://ocsp.comodoca.com',), 'caIssuers': ('http://crt.comodoca.com/COMODORSADomainValidationSecureServerCA.crt',), 'crlDistributionPoints': ('http://crl.comodoca.com/COMODORSADomainValidationSecureServerCA.crl',)}
It would be nice if these errors also showed what the origin of the connection request was.
Pushl only supports webmention, but a lot of sites only use pingback. If Pushl doesn't see a webmention endpoint on a page, it should check for a pingback endpoint and use that instead.
This appears to be the current standard: http://www.hixie.ch/specs/pingback/pingback
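Pingback is just XML-RPC: the client POSTs a `pingback.ping(sourceURI, targetURI)` call to the endpoint advertised via the `X-Pingback` header or `<link rel="pingback">`. A sketch of building the request body with the stdlib (the helper name is illustrative):

```python
import xmlrpc.client

def pingback_request(source, target):
    # Serialize a pingback.ping XML-RPC method call; the result would
    # be POSTed with Content-Type: text/xml to the pingback endpoint.
    return xmlrpc.client.dumps((source, target), methodname="pingback.ping")

body = pingback_request("https://example.com/post",
                        "https://example.org/entry")
```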
If an entry URL changes, Pushl should re-send all of its mentions from the old URL so the endpoint knows to update them.
This can happen in `Pushl.process_entry`, by looking to see if `url != entry.url` and making the target set `entry.get_targets(self) | previous.get_targets(self)`. Then the API for `Pushl.process_entry` changes to take the entry URL instead of the entry object, and it sends based on the original URL rather than the resolved one.
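The target-set logic reduces to a small set union; a sketch under the assumption that targets are plain URL strings (the function name is illustrative):

```python
def targets_to_ping(url, entry_url, current, previous):
    # When pinging from a non-canonical URL (url != entry.url), merge in
    # the targets from the previous cached version so that endpoints with
    # stale mentions still get an update ping.
    if url != entry_url:
        return set(current) | set(previous)
    return set(current)

merged = targets_to_ping(
    "http://old.example/post", "http://new.example/post",
    {"https://a.example/1"}, {"https://b.example/2"})
```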
While most uses of RFC 5005 use the namespace `fh`, it could really be anything. Look at the feed's namespace declarations to figure out which namespace to use for the tag.
If a pertinent mf2 attribute (p-title, p-summary, e-content) on an entry changes, pings should be re-sent to all targets, not just the ones where URLs changed.
However, this should only happen when actual entry content changes, and not based purely on the overall content hash (e.g. nav links, "15 minutes ago" publish times, etc.)
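One way to get that behavior is to hash only the pertinent properties rather than the whole page. A sketch, assuming the parsed mf2 properties are available as a dict (the key names and helper are illustrative):

```python
import hashlib
import json

# Only these properties should trigger a re-ping when they change.
PERTINENT = ("p-title", "p-summary", "e-content")

def content_fingerprint(props):
    # Hash just the pertinent properties, so churn in nav links or
    # relative publish times doesn't cause spurious re-sends.
    pertinent = {k: props.get(k) for k in PERTINENT}
    blob = json.dumps(pertinent, sort_keys=True).encode("utf-8")
    return hashlib.sha256(blob).hexdigest()

a = content_fingerprint({"p-title": "Hi", "published": "1 minute ago"})
b = content_fingerprint({"p-title": "Hi", "published": "15 minutes ago"})
# a == b: timestamp churn doesn't change the fingerprint
```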
It looks like there's something wrong with connection pooling where the server might disconnect and the pool expects the connection to still be there. Example:
WARNING:pushl.entries:Entry http://beesbuzz.biz/blog/3978-Reprogramming-my-sleep-cycle: got ClientOSError: Cannot write to closing transport
WARNING:pushl.entries:Entry http://beesbuzz.biz/blog/8118-So-what-is-Subl-anyway: got ClientOSError: Cannot write to closing transport
WARNING:pushl.entries:Entry http://beesbuzz.biz/music/experiments/5720-sawbench-test: got ClientOSError: Cannot write to closing transport
WARNING:pushl.entries:Entry http://beesbuzz.biz/music/3987-Strangers-Official-video: got ClientOSError: Cannot write to closing transport
WARNING:pushl.entries:Entry http://beesbuzz.biz/music/covers/5274-I-Dont-Believe-You-Magnetic-Fields: got ClientOSError: Cannot write to closing transport
WARNING:pushl.entries:Entry http://beesbuzz.biz/music/experiments/2314-error-pages: got ClientOSError: Cannot write to closing transport
WARNING:pushl.entries:Entry http://beesbuzz.biz/music/1702-Boffo-Yux-Dudes: got ClientOSError: Cannot write to closing transport
WARNING:pushl.entries:Entry http://beesbuzz.biz/music/demos/771-Good-Luck-Charm: got ClientOSError: Cannot write to closing transport
WARNING:pushl.entries:Entry http://beesbuzz.biz/music/experiments/2346-bowed-bass: got ClientOSError: Cannot write to closing transport
WARNING:pushl.entries:Entry http://beesbuzz.biz/blog/7902-More-Authl-thoughts: got ServerDisconnectedError: None
INFO:pushl.webmentions:Sending Webmention http://beesbuzz.biz/blog/3743-More-fun-with-Webmentions -> https://webmention.io/
WARNING:pushl.entries:Entry https://beesbuzz.biz/comics/journal/555-Resolutions-for-2016: got ServerDisconnectedError: None
WARNING:pushl.entries:Entry https://beesbuzz.biz/comics/journal/839-July-1-2017-Re-refactor: got ServerDisconnectedError: None
WARNING:pushl.entries:Entry https://beesbuzz.biz/blog/6665-Some-more-site-template-update-thinguses: got ServerDisconnectedError: None
There is probably something hidden in `aiohttp.TCPConnector` to make this retry or something.
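Failing that, a retry wrapper at the call site would cover it. A sketch using generic exception types (in aiohttp terms the relevant errors are `ClientOSError` and `ServerDisconnectedError`; the factory-based shape is needed because a coroutine can only be awaited once):

```python
import asyncio

async def with_retries(coro_factory, attempts=3, delay=0.5,
                       retry_on=(ConnectionError, OSError)):
    # Re-issue the request when the pooled connection turns out to be
    # dead. coro_factory is called fresh on each attempt.
    last_error = None
    for attempt in range(attempts):
        try:
            return await coro_factory()
        except retry_on as err:
            last_error = err
            await asyncio.sleep(delay * attempt)  # simple linear backoff
    raise last_error
```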
A lot of IndieWeb folks are moving away from Atom/RSS and towards h-feed. If a feed fails to parse using `feedparser`, use https://github.com/microformats/mf2py to parse the feed out and do the same logic.
Also add support for `<link rel="feed">` for feed discovery in recursive mode.
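Discovery amounts to collecting `href`s from `<link>` tags whose (possibly multi-valued) `rel` contains `feed`. A stdlib sketch (the class name is illustrative):

```python
from html.parser import HTMLParser

class FeedLinks(HTMLParser):
    # Collect <link rel="feed"> hrefs for recursive feed discovery.
    def __init__(self):
        super().__init__()
        self.feeds = []

    def handle_starttag(self, tag, attrs):
        if tag != "link":
            return
        a = dict(attrs)
        # rel is space-separated and may carry multiple values.
        if "feed" in (a.get("rel") or "").lower().split():
            self.feeds.append(a.get("href"))

parser = FeedLinks()
parser.feed('<link rel="feed" href="/articles.atom">'
            '<link rel="stylesheet" href="site.css">')
```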
Remember to support WebSub, which should also probably be added to content pages as well.
Version number should live in the pushl module itself, and should be imported by `setup.py`. (See line 30 in 0ad0353.)
Right now if the content-type doesn't declare an encoding, it assumes ISO-8859-1. While this is probably fine for everything Pushl needs, technically it should decode as US-ASCII (or similar) and then look for a `<meta charset>` or `<meta http-equiv>` that declares the character encoding.
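A rough sketch of that sniffing order, scanning the head of the raw body for a meta declaration before falling back (function and regex are illustrative, not a full HTML5 encoding-sniffing implementation):

```python
import re

# Matches both <meta charset="..."> and the charset= portion of
# <meta http-equiv="Content-Type" content="text/html; charset=...">.
META_CHARSET = re.compile(rb'<meta[^>]+charset=["\']?([\w-]+)', re.IGNORECASE)

def detect_charset(body, declared=None):
    # Prefer the Content-Type header's charset; otherwise scan the
    # first 2 KiB (ASCII-compatible) for a meta declaration.
    if declared:
        return declared
    match = META_CHARSET.search(body[:2048])
    if match:
        return match.group(1).decode("ascii")
    return "iso-8859-1"  # last-resort fallback
```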
To support fed.brid.gy it's helpful to be able to send synthetic webmentions. Possible usage:
pushl http://example.com/feed -m http://fed.brid.gy
remote: ERROR:asyncio:Task exception was never retrieved
remote: future: <Task finished coro=<Pushl.send_webmention() done, defined at /home/fluffy/.local/share/virtualenvs/beesbuzz.biz-RK9-tIok/lib/python3.7/site-packages/pushl/__init__.py:162> exception=AttributeError("'Target' object has no attribute 'href'")>
remote: Traceback (most recent call last):
remote: File "/home/fluffy/.local/share/virtualenvs/beesbuzz.biz-RK9-tIok/lib/python3.7/site-packages/pushl/__init__.py", line 178, in send_webmention
remote: await target.send(self, entry)
remote: File "/home/fluffy/.local/share/virtualenvs/beesbuzz.biz-RK9-tIok/lib/python3.7/site-packages/pushl/webmentions.py", line 159, in send
remote: LOGGER.debug("%s -> %s via %s %s", entry.url, self.href,
remote: AttributeError: 'Target' object has no attribute 'href'
I think this was already fixed by 4836b49 but it's worth double-checking.
With Python 3.10, I'm getting
ERROR:asyncio:Task exception was never retrieved
future: <Task finished name='Task-174' coro=<Pushl.send_webmention() done, defined at /home/tomi/.local/pipx/venvs/pushl/lib/python3.10/site-packages/pushl/__init__.py:180> exception=TypeError("shield() got an unexpected keyword argument 'loop'")>
Traceback (most recent call last):
File "/home/tomi/.local/pipx/venvs/pushl/lib/python3.10/site-packages/pushl/__init__.py", line 190, in send_webmention
target, code, cached = await webmentions.get_target(self, dest)
File "/home/tomi/.local/pipx/venvs/pushl/lib/python3.10/site-packages/async_lru.py", line 212, in wrapped
return (yield from asyncio.shield(fut, loop=_loop))
TypeError: shield() got an unexpected keyword argument 'loop'
To better support concurrency, add a `Cache.lock` primitive which would look something like:

    with cache.lock(prefix, url) as lock:
        previous = lock.get(schema_version)
        # ...
        lock.save(current)
While a lock is held on a file, any other attempt at acquiring that lock should block.
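An in-process sketch of that shape using a per-key `threading.Lock` (a real implementation would use file locking such as `flock` so it works across processes; all class names here are illustrative):

```python
import threading
from collections import defaultdict
from contextlib import contextmanager

class _Handle:
    # What the `with` block receives: get/save against one cache slot.
    def __init__(self, store, key):
        self._store, self._key = store, key

    def get(self, schema_version):
        cached = self._store.get(self._key)
        if cached and cached[0] == schema_version:
            return cached[1]
        return None  # missing or schema mismatch

    def save(self, value, schema_version=1):
        self._store[self._key] = (schema_version, value)

class Cache:
    def __init__(self):
        self._locks = defaultdict(threading.Lock)
        self._data = {}

    @contextmanager
    def lock(self, prefix, url):
        # Any other attempt to acquire the same (prefix, url) lock
        # blocks until this `with` block exits.
        key = (prefix, url)
        with self._locks[key]:
            yield _Handle(self._data, key)
```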
The actual run loop has gotten fragile and messy.
Each feed should simply get the list of entries, plus the list of entries which were in the previous cached version.
Each entry should simply get its links, XORed with the links which were in the previous cached version.
Each ping should only be sent once.
There probably isn’t really a good reason to be running so many threads. Maybe do one active connection per domain, and use that as the gating mechanism for aiohttp.
Basically I feel like a lot of the guts need to be torn out and replaced now that I have a better idea of what I’m doing.
At least in the case of Pingback, most sites do not seem to appreciate getting the same ping multiple times (as opposed to Webmention which makes that part of the spec).
So, if persistent storage is enabled, pushl should only send pingback pings which haven't been sent before.
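The dedup store only needs to persist (source, target) pairs across runs. A sketch with a JSON-backed set (class name and file format are illustrative):

```python
import json

class PingLog:
    # Persist (source, target) pingback pairs so each ping is sent
    # at most once across runs. `path` is a cache file location.
    def __init__(self, path):
        self.path = path
        try:
            with open(path, encoding="utf-8") as f:
                self.sent = {tuple(pair) for pair in json.load(f)}
        except (OSError, ValueError):
            self.sent = set()  # no cache yet, or it was corrupt

    def should_send(self, source, target):
        return (source, target) not in self.sent

    def mark_sent(self, source, target):
        self.sent.add((source, target))
        with open(self.path, "w", encoding="utf-8") as f:
            json.dump(sorted(self.sent), f)
```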
When a page doesn't use `<article>` or `h-entry` markup, the entire page is used as the source of outgoing links. It would be helpful for forums or older blogs to be able to specify which containers should be considered as entry content (ignoring e.g. signatures and user profiles).
Right now the href vs link thing is causing a bunch of weird behavior and also the caching on it could be a lot better.
Possible refactoring:
- `entries.Entry.get_targets` returns a list of `(url, href)` pairs (it looks like it already does this, but the naming is confusing)
- `webmentions.get_target` only takes (and caches) the `url` value
- `webmentions.Target.__init__` stores the `self.canonical` value from the `request.url` response
- `webmentions.Target._get_endpoint` can override `self.canonical` if the document provides a `<link rel="canonical">`
- `webmentions.Target.send` takes `source, href` parameters, and emits a compatibility warning if `href != self.canonical` (but only if `self.endpoint is not None`)

beesbuzz.biz is accessible from both http://beesbuzz.biz and https://beesbuzz.biz. People subscribe to the Atom feed both ways, so I need to send WebSub notifications for both schemes. But this means that Webmentions get sent for both as well (see http://publ.beesbuzz.biz/blog/730-v0-3-12-now-we-do-Windows for example).
Possible solutions:
1. Created entry at https://beesbuzz.biz/blog/chatter/4858-Test-pushl-stuff which linked to http://publ.beesbuzz.biz/blog/1069-Pushl-v0-2-5-not-a-joke
2. pushl then sent a webmention
3. then deleted the entry (giving a 410 GONE status)
4. pushl did not send an update
Save the scheduled and failed pings into an item which gets written to the cache after each ping has completed (after either removing it or marking it as failed, accordingly). OR these items into the scheduled list when the target list is computed.
This will help with the case that Pushl gets interrupted partway through (or the receiving endpoint happens to be down/erroring at the time), so pings aren’t lost forever.
On occasion, pushl will hang indefinitely and since I use flock -n to prevent multiple instances from running simultaneously it means push notifications stop happening for days at a time until I notice.
The logic for when to let the process exit probably needs work.
Traceback (most recent call last):
File "/home/fluffy/.local/share/virtualenvs/pushl-jjqXtYO_/lib/python3.6/site-packages/feedparser.py", line 398, in __getattr__
return self.__getitem__(key)
File "/home/fluffy/.local/share/virtualenvs/pushl-jjqXtYO_/lib/python3.6/site-packages/feedparser.py", line 356, in __getitem__
return dict.__getitem__(self, key)
KeyError: 'link'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/fluffy/.local/share/virtualenvs/pushl-jjqXtYO_/bin/pushl", line 11, in <module>
sys.exit(main())
File "/home/fluffy/.local/share/virtualenvs/pushl-jjqXtYO_/lib/python3.6/site-packages/pushl/__main__.py", line 123, in main
worker.wait_finished()
File "/home/fluffy/.local/share/virtualenvs/pushl-jjqXtYO_/lib/python3.6/site-packages/pushl/__main__.py", line 60, in wait_finished
queued.result()
File "/usr/lib/python3.6/concurrent/futures/_base.py", line 425, in result
return self.__get_result()
File "/usr/lib/python3.6/concurrent/futures/_base.py", line 384, in __get_result
raise self._exception
File "/usr/lib/python3.6/concurrent/futures/thread.py", line 56, in run
result = self.fn(*self.args, **self.kwargs)
File "/home/fluffy/.local/share/virtualenvs/pushl-jjqXtYO_/lib/python3.6/site-packages/pushl/__main__.py", line 82, in process_feed
self.submit(self.process_entry, entry.link)
File "/home/fluffy/.local/share/virtualenvs/pushl-jjqXtYO_/lib/python3.6/site-packages/feedparser.py", line 400, in __getattr__
raise AttributeError("object has no attribute '%s'" % key)
AttributeError: object has no attribute 'link'
Running from commit a13a2e7, installed with pipx and without system-site-packages, I get the following error:
$ pushl -vc "${XDG_CACHE_HOME-$HOME/.cache}/pushl" -e "https://seirdy.one/notes/2023/01/04/against-chasing-growth/"
Traceback (most recent call last):
File "/path/to/pipx/bin/pushl", line 8, in <module>
sys.exit(main())
^^^^^^
File "/path/to/pipx/venvs/pushl/lib64/python3.11/site-packages/pushl/__main__.py", line 128, in main
loop.run_until_complete(_run(args))
File "/usr/lib64/python3.11/asyncio/base_events.py", line 653, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/path/to/pipx/venvs/pushl/lib64/python3.11/site-packages/pushl/__main__.py", line 163, in _run
_, timed_out = await asyncio.wait(tasks, timeout=args.max_time)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib64/python3.11/asyncio/tasks.py", line 415, in wait
raise TypeError("Passing coroutines is forbidden, use tasks explicitly.")
TypeError: Passing coroutines is forbidden, use tasks explicitly.
sys:1: RuntimeWarning: coroutine 'Pushl.process_entry' was never awaited
System info: Fedora 37, python 3.11.1.
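The traceback above is the Python 3.11 behavior change: `asyncio.wait()` no longer accepts bare coroutines, so each one has to be wrapped in a Task first. A minimal sketch of the fix (function names here are illustrative, not Pushl's actual `_run`):

```python
import asyncio

async def run_all(coros, max_time=None):
    # Wrap every coroutine in a Task before handing it to asyncio.wait();
    # passing bare coroutines raises TypeError on Python 3.11+.
    tasks = [asyncio.create_task(c) for c in coros]
    done, pending = await asyncio.wait(tasks, timeout=max_time)
    return done, pending

async def work(n):
    await asyncio.sleep(0)
    return n * 2

done, pending = asyncio.run(run_all([work(1), work(2)]))
```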
Cache files should be written out using the `atomicwrites` library.
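The same write-temp-then-rename guarantee is also available from the stdlib, if taking on the dependency is undesirable. A sketch (the function name is illustrative):

```python
import os
import tempfile

def atomic_write(path, data):
    # Write to a temp file in the same directory, then atomically
    # replace the destination so readers never see a partial file.
    dirname = os.path.dirname(path) or "."
    fd, tmp = tempfile.mkstemp(dir=dirname)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())
        os.replace(tmp, path)  # atomic on POSIX and Windows
    except BaseException:
        os.unlink(tmp)
        raise
```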
Currently there is no way to whitelist or blacklist `rel` attributes on links for webmention sends.
Per RFC 6721, Atom supports a `deleted-entry` element, which we should probably support as well.
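The tombstone lives in the `http://purl.org/atompub/tombstones/1.0` namespace, with the deleted entry's URL in its `ref` attribute; extracting them is straightforward with the stdlib (the helper name is illustrative):

```python
import xml.etree.ElementTree as ET

TOMBSTONE_NS = "http://purl.org/atompub/tombstones/1.0"

def deleted_entries(feed_xml):
    # Collect the ref attribute of each at:deleted-entry tombstone.
    root = ET.fromstring(feed_xml)
    return [el.get("ref")
            for el in root.iter(f"{{{TOMBSTONE_NS}}}deleted-entry")]

feed = """<feed xmlns="http://www.w3.org/2005/Atom"
      xmlns:at="http://purl.org/atompub/tombstones/1.0">
  <at:deleted-entry ref="http://example.com/entry/1"
                    when="2021-01-01T00:00:00Z"/>
</feed>"""
refs = deleted_entries(feed)
```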
If a KeyboardInterrupt et al. occurs in main, it should check the status of the pending tasks, cancel them appropriately, and maybe print what was being waited on. Otherwise you get a big list of opaque blobs like:
KeyboardInterrupt
ERROR:asyncio:Task was destroyed but it is pending!
task: <Task pending coro=<Pushl.process_feed() done, defined at /Users/fluffy/projects/Pushl/pushl/__init__.py:35> wait_for=<Future pending cb=[<TaskWakeupMethWrapper object at 0x10f41d0a8>()]> cb=[_wait.<locals>._on_completion() at /usr/local/Cellar/python/3.7.0/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/tasks.py:436]>
ERROR:asyncio:Task was destroyed but it is pending!
task: <Task pending coro=<Pushl.process_entry() done, defined at /Users/fluffy/projects/Pushl/pushl/__init__.py:80> wait_for=<Future pending cb=[<TaskWakeupMethWrapper object at 0x1113b27c8>()]> cb=[_wait.<locals>._on_completion() at /usr/local/Cellar/python/3.7.0/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/tasks.py:436]>
ERROR:asyncio:Task was destroyed but it is pending!
task: <Task pending coro=<Pushl.process_entry() done, defined at /Users/fluffy/projects/Pushl/pushl/__init__.py:80> wait_for=<Future pending cb=[<TaskWakeupMethWrapper object at 0x111391678>()]> cb=[_wait.<locals>._on_completion() at /usr/local/Cellar/python/3.7.0/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/tasks.py:436]>
ERROR:asyncio:Task was destroyed but it is pending!
task: <Task pending coro=<Pushl.process_feed() done, defined at /Users/fluffy/projects/Pushl/pushl/__init__.py:35> wait_for=<Future pending cb=[<TaskWakeupMethWrapper object at 0x1114811f8>()]> cb=[_wait.<locals>._on_completion() at /usr/local/Cellar/python/3.7.0/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/tasks.py:436]>
ERROR:asyncio:Task was destroyed but it is pending!
task: <Task pending coro=<Pushl.process_feed() done, defined at /Users/fluffy/projects/Pushl/pushl/__init__.py:35> wait_for=<Future pending cb=[<TaskWakeupMethWrapper object at 0x1113b2be8>()]> cb=[_wait.<locals>._on_completion() at /usr/local/Cellar/python/3.7.0/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/tasks.py:436]>
ERROR:asyncio:Task was destroyed but it is pending!
task: <Task pending coro=<Pushl.process_entry() done, defined at /Users/fluffy/projects/Pushl/pushl/__init__.py:80> wait_for=<Future pending cb=[<TaskWakeupMethWrapper object at 0x112175558>()]> cb=[_wait.<locals>._on_completion() at /usr/local/Cellar/python/3.7.0/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/tasks.py:436]>
ERROR:asyncio:Task was destroyed but it is pending!
task: <Task pending coro=<Pushl.process_entry() done, defined at /Users/fluffy/projects/Pushl/pushl/__init__.py:80> wait_for=<Future pending cb=[<TaskWakeupMethWrapper object at 0x111ec0f18>()]> cb=[_wait.<locals>._on_completion() at /usr/local/Cellar/python/3.7.0/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/tasks.py:436]>
ERROR:asyncio:Task was destroyed but it is pending!
task: <Task pending coro=<Pushl.send_webmention() done, defined at /Users/fluffy/projects/Pushl/pushl/__init__.py:110> wait_for=<Future pending cb=[<TaskWakeupMethWrapper object at 0x112380e88>()]> cb=[_wait.<locals>._on_completion() at /usr/local/Cellar/python/3.7.0/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/tasks.py:436]>
ERROR:asyncio:Task was destroyed but it is pending!
task: <Task pending coro=<Pushl.send_webmention() done, defined at /Users/fluffy/projects/Pushl/pushl/__init__.py:110> wait_for=<Future pending cb=[<TaskWakeupMethWrapper object at 0x111ec0348>()]> cb=[_wait.<locals>._on_completion() at /usr/local/Cellar/python/3.7.0/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/tasks.py:436]>
This happens when running pushl against an SSL-served RSS/Atom feed or entries. It seems to be an upstream bug in aiohttp, but it could be a problem with connection pooling on the Pushl end.
Output log gets spammed with:
ERROR:asyncio:SSL error in data received
protocol: <asyncio.sslproto.SSLProtocol object at 0xf45838ac>
transport: <_SelectorSocketTransport fd=47 read=polling write=<idle, bufsize=0>>
Traceback (most recent call last):
File "/usr/lib/python3.7/asyncio/sslproto.py", line 526, in data_received
ssldata, appdata = self._sslpipe.feed_ssldata(data)
File "/usr/lib/python3.7/asyncio/sslproto.py", line 207, in feed_ssldata
self._sslobj.unwrap()
File "/usr/lib/python3.7/ssl.py", line 767, in unwrap
return self._sslobj.shutdown()
Per RL discussion with Marty McGuire and Kitt Hodsen: if I link to something that might go away in the future, it's useful if the Internet Archive has a snapshot of it for later. The Internet Archive provides an API for requesting a snapshot, so, if so configured, we could request a snapshot of every outgoing link target.
Comment feeds from phpBB are delivered using chunked encoding, in a way which makes aiohttp barf. Fetching via curl works:
$ curl -i https://songfight.net/forums/app.php/feed/posts
HTTP/1.1 200 OK
Date: Sun, 21 Jun 2020 07:30:08 GMT
Server: Apache
Cache-Control: private, must-revalidate
Set-Cookie: phpbb3_fhvs1_u=1; expires=Mon, 21-Jun-2021 07:30:08 GMT; path=/; domain=.songfight.net; HttpOnly
Set-Cookie: phpbb3_fhvs1_k=; expires=Mon, 21-Jun-2021 07:30:08 GMT; path=/; domain=.songfight.net; HttpOnly
Set-Cookie: phpbb3_fhvs1_sid=da219fd5cb7c6839f6f2ecaae0ce9dd7; expires=Mon, 21-Jun-2021 07:30:08 GMT; path=/; domain=.songfight.net; HttpOnly
Upgrade: h2
Connection: Upgrade
Last-Modified: Sun, 21 Jun 2020 07:21:51 GMT
Cache-Control: max-age=172800
Expires: Tue, 23 Jun 2020 07:30:08 GMT
Vary: User-Agent
Transfer-Encoding: chunked
Content-Type: application/atom+xml
<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en-gb">
...
but in pushl it fails:
$ pipenv run pushl -rvvvk --rel-exclude '' https://songfight.net/forums/app.php/feed/posts
DEBUG:asyncio:Using selector: EpollSelector
DEBUG:pushl:++WAIT: https://songfight.net/forums/app.php/feed/posts: get feed
DEBUG:pushl.feeds:++WAIT: cache get feed https://songfight.net/forums/app.php/feed/posts
DEBUG:pushl.feeds:++DONE: cache get feed https://songfight.net/forums/app.php/feed/posts
DEBUG:pushl.feeds:++WAIT: request get https://songfight.net/forums/app.php/feed/posts None)
DEBUG:utils:https://songfight.net/forums/app.php/feed/posts: got error <class 'aiohttp.client_exceptions.ClientPayloadError'> Response payload is not completed (retry=0)
DEBUG:utils:https://songfight.net/forums/app.php/feed/posts: got error <class 'aiohttp.client_exceptions.ClientPayloadError'> Response payload is not completed (retry=1)
DEBUG:utils:https://songfight.net/forums/app.php/feed/posts: got error <class 'aiohttp.client_exceptions.ClientPayloadError'> Response payload is not completed (retry=2)
DEBUG:utils:https://songfight.net/forums/app.php/feed/posts: got error <class 'aiohttp.client_exceptions.ClientPayloadError'> Response payload is not completed (retry=3)
DEBUG:utils:https://songfight.net/forums/app.php/feed/posts: got error <class 'aiohttp.client_exceptions.ClientPayloadError'> Response payload is not completed (retry=4)
WARNING:utils:https://songfight.net/forums/app.php/feed/posts: Exceeded maximum retries; errors: {'Response payload is not completed'}
DEBUG:pushl.feeds:++DONE: request get https://songfight.net/forums/app.php/feed/posts
ERROR:pushl.feeds:Could not get feed https://songfight.net/forums/app.php/feed/posts: -1
DEBUG:pushl:++DONE: https://songfight.net/forums/app.php/feed/posts: get feed
INFO:pushl.main:Completed all tasks
There is probably some configuration that needs to be sent to aiohttp to make it more tolerant of chunked encoding weirdness.
The fix for #35 didn't cover the case where the URL in the feed is a link to a redirector, and the old redirection target doesn't redirect to the new target.
An attempted fix in 17fd5ac had the unfortunate effect of breaking support for `rel="canonical"`, so it has been reverted; a better solution would be to enqueue re-pinging all of the prior targets from the prior URL, rather than just re-pinging everything indiscriminately.
It would be useful if entry processing also discovered related feeds for WebSub et al. Perhaps a `-r`/`--recurse` parameter could tell `process_entry` to also submit any `<link rel="alternate">` links for consideration.
In Publ, an item can redirect to something on another site, which can then cause recursion to take place on external feeds. It'd be better to restrict `-r` traversal to feeds that are on the same domain as a feed that was specified in the original options.
e.g. `process_feed` can whitelist domains that `process_entry` will recurse into.
Currently all webmentions to the same domain are disabled. Someone might want to be able to self-ping though (for example, for community blogs all hosted on the same domain).