
hcf-backend's Introduction

HCF (HubStorage Crawl Frontier) Backend for Frontera

When used with Scrapy, use it together with the Scrapy scheduler provided by scrapy-frontera; the Scrapy scheduler provided by Frontera itself is not supported. scrapy-frontera is a Scrapy scheduler that allows Frontera backends, such as this one, to be used in Scrapy projects.

See specific usage instructions in the module and class docstrings in backend.py. Some usage examples can be found in the scrapy-frontera README.
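
As a rough orientation, a minimal configuration sketch is shown below. It is an illustration only, not a drop-in setup: the scheduler path and the FRONTERA_SETTINGS module follow scrapy-frontera/Frontera conventions, and the HCF_* setting names are the ones that appear in the issues further down, but every concrete value (project id, frontier name, slot names, slot counts) is a placeholder you must adapt. Check the scrapy-frontera README and the backend.py docstrings for the authoritative wiring.

# frontera_settings.py -- minimal sketch; all values are placeholders.
BACKEND = "hcf_backend.HCFBackend"

HCF_AUTH = "<your ScrapyCloud API key>"
HCF_PROJECT_ID = "<your project id>"

# Producer side: write discovered requests into the frontier.
HCF_PRODUCER_FRONTIER = "myfrontier"
HCF_PRODUCER_SLOT_PREFIX = "links"
HCF_PRODUCER_NUMBER_OF_SLOTS = 8

# Consumer side: read batches back from one slot of the same frontier.
HCF_CONSUMER_FRONTIER = "myfrontier"
HCF_CONSUMER_SLOT = "links0"

# settings.py -- assumes the scheduler path documented by scrapy-frontera.
SCHEDULER = "scrapy_frontera.scheduler.FronteraScheduler"
FRONTERA_SETTINGS = "myproject.frontera_settings"

In a typical setup, a producer spider writes requests into the frontier slots and one or more consumer spiders, configured with the HCF_CONSUMER_* settings, read them back in batches; the shub-workflow tutorial linked below covers this split in detail.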

A complete tutorial on using hcf-backend with ScrapyCloud workflows is available at shub-workflow Tutorial: Managing Hubstorage Crawl Frontiers. shub-workflow is a framework for defining workflows of spiders and scripts running on ScrapyCloud. The tutorial is strongly recommended reading, because it documents how the different tools integrate to provide the most benefit.

The package also provides a convenient command-line tool for Hubstorage frontier handling and manipulation: hcfpal.py. It supports dumping, counting, deleting, moving, and listing frontiers, among other operations. See the command-line help for usage.

Another provided tool is crawlmanager.py, which facilitates the scheduling of consumer spider jobs. Examples of its usage are also available in the shub-workflow Tutorial mentioned above.

Installation

pip install hcf-backend

Development environment setup

For hcf-backend developers, Pipfile files are provided for setting up a development environment.

Run:

$ pipenv install --dev
$ pipenv shell
$ cp .envtemplate .env

and edit .env accordingly.

hcf-backend's People

Contributors

burnzz, eliasdorneles, immerrr, kalessin, markbaas, noviluni, nramirezuy, seagatesoft, sibiryakov


hcf-backend's Issues

AttributeError: 'FrontierManager' object has no attribute 'extra'

I have not investigated this yet:

(diffeo)stav@platu:~/Workspace/sh/Diffeo/diffeo-netsec$ scrapy crawl blackhat
/home/stav/.virtualenvs/diffeo/src/scrapy/scrapy/contrib/linkextractors/sgml.py:107: ScrapyDeprecationWarning: SgmlLinkExtractor is deprecated and will be removed in future releases. Please use scrapy.contrib.linkextractors.LinkExtractor
  ScrapyDeprecationWarning
2015-03-12 12:21:05-0600 [scrapy] INFO: Scrapy 0.25.1 started (bot: netsec)
2015-03-12 12:21:05-0600 [scrapy] INFO: Optional features available: ssl, http11, boto
2015-03-12 12:21:05-0600 [scrapy] INFO: Overridden settings: {'COOKIES_ENABLED': False, 'USER_AGENT': 'Mozilla/5.0 (X11; Linux i686; U;) Gecko/20070322 Kazehakase/0.4.5', 'MEMUSAGE_REPORT': True, 'DOWNLOAD_DELAY': 4, 'REDIRECT_MAX_TIMES': 3, 'MEMDEBUG_ENABLED': True, 'RETRY_ENABLED': False, 'HTTPCACHE_ENABLED': True, 'CONCURRENT_REQUESTS_PER_IP': 1, 'MEMUSAGE_LIMIT_MB': 512, 'DEPTH_PRIORITY': 10, 'CONCURRENT_REQUESTS': 1, 'DOWNLOAD_WARNSIZE': 5242880, 'SPIDER_MODULES': ['netsec.spiders'], 'BOT_NAME': 'netsec', 'CONCURRENT_ITEMS': 10, 'NEWSPIDER_MODULE': 'netsec.spiders', 'ROBOTSTXT_OBEY': True, 'CONCURRENT_REQUESTS_PER_DOMAIN': 1, 'DOWNLOAD_MAXSIZE': 10485760, 'MEMUSAGE_ENABLED': True, 'SCHEDULER': 'crawlfrontier.contrib.scrapy.schedulers.frontier.CrawlFrontierScheduler', 'MEMUSAGE_WARNING_MB': 400, 'MEMUSAGE_NOTIFY_MAIL': '[email protected]'}
2015-03-12 12:21:05-0600 [scrapy] INFO: Enabled extensions: LogStats, TelnetConsole, CloseSpider, MemoryUsage, CoreStats, MemoryDebugger, SpiderState
2015-03-12 12:21:05-0600 [scrapy] INFO: Enabled downloader middlewares: NoExternalReferersMiddleware, RobotsTxtCustomMiddleware, RobotsTxtMiddleware, HttpAuthMiddleware, DownloadTimeoutMiddleware, RotateUserAgentMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, DenyDomainsMiddleware, CrawleraMiddleware, BanMiddleware, ChunkedTransferMiddleware, DownloaderStats, HttpCacheMiddleware, SchedulerDownloaderMiddleware
No handlers could be found for logger "streamcorpus"
2015-03-12 12:21:05-0600 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware, ExporterMiddleware, SchedulerSpiderMiddleware
2015-03-12 12:21:05-0600 [scrapy] INFO: Enabled item pipelines:
2015-03-12 12:21:05-0600 [blackhat] DEBUG: DOWNLOADER_MIDDLEWARES_BASE: {'scrapy.contrib.downloadermiddleware.useragent.UserAgentMiddleware': 400, 'scrapy.contrib.downloadermiddleware.httpauth.HttpAuthMiddleware': 300, 'scrapy.contrib.downloadermiddleware.cookies.CookiesMiddleware': 700, 'scrapy.contrib.downloadermiddleware.retry.RetryMiddleware': 500, 'scrapy.contrib.downloadermiddleware.chunked.ChunkedTransferMiddleware': 830, 'scrapy.contrib.downloadermiddleware.httpcache.HttpCacheMiddleware': 900, 'scrapy.contrib.downloadermiddleware.stats.DownloaderStats': 850, 'scrapy.contrib.downloadermiddleware.httpcompression.HttpCompressionMiddleware': 590, 'scrapy.contrib.downloadermiddleware.redirect.MetaRefreshMiddleware': 580, 'scrapy.contrib.downloadermiddleware.redirect.RedirectMiddleware': 600, 'scrapy.contrib.downloadermiddleware.defaultheaders.DefaultHeadersMiddleware': 550, 'scrapy.contrib.downloadermiddleware.httpproxy.HttpProxyMiddleware': 750, 'scrapy.contrib.downloadermiddleware.robotstxt.RobotsTxtMiddleware': 100, 'scrapy.contrib.downloadermiddleware.ajaxcrawl.AjaxCrawlMiddleware': 560, 'scrapy.contrib.downloadermiddleware.downloadtimeout.DownloadTimeoutMiddleware': 350}
2015-03-12 12:21:05-0600 [blackhat] DEBUG: DOWNLOADER_MIDDLEWARES: {'netsec.middlewares.BanMiddleware': 800, 'scrapy.contrib.downloadermiddleware.useragent.UserAgentMiddleware': None, 'netsec.middlewares.RotateUserAgentMiddleware': 400, 'netsec.middlewares.RobotsTxtCustomMiddleware': 90, 'netsec.middlewares.DenyDomainsMiddleware': 620, 'crawlfrontier.contrib.scrapy.middlewares.schedulers.SchedulerDownloaderMiddleware': 999, 'netsec.middlewares.NoExternalReferersMiddleware': 50, 'scrapylib.crawlera.CrawleraMiddleware': 650}
2015-03-12 12:21:05-0600 [blackhat] DEBUG: Downloader Middleware: ('scrapy.contrib.downloadermiddleware.useragent.UserAgentMiddleware', None)
2015-03-12 12:21:05-0600 [blackhat] DEBUG: Downloader Middleware: ('netsec.middlewares.NoExternalReferersMiddleware', 50)
2015-03-12 12:21:05-0600 [blackhat] DEBUG: Downloader Middleware: ('netsec.middlewares.RobotsTxtCustomMiddleware', 90)
2015-03-12 12:21:05-0600 [blackhat] DEBUG: Downloader Middleware: ('scrapy.contrib.downloadermiddleware.robotstxt.RobotsTxtMiddleware', 100)
2015-03-12 12:21:05-0600 [blackhat] DEBUG: Downloader Middleware: ('scrapy.contrib.downloadermiddleware.httpauth.HttpAuthMiddleware', 300)
2015-03-12 12:21:05-0600 [blackhat] DEBUG: Downloader Middleware: ('scrapy.contrib.downloadermiddleware.downloadtimeout.DownloadTimeoutMiddleware', 350)
2015-03-12 12:21:05-0600 [blackhat] DEBUG: Downloader Middleware: ('netsec.middlewares.RotateUserAgentMiddleware', 400)
2015-03-12 12:21:05-0600 [blackhat] DEBUG: Downloader Middleware: ('scrapy.contrib.downloadermiddleware.retry.RetryMiddleware', 500)
2015-03-12 12:21:05-0600 [blackhat] DEBUG: Downloader Middleware: ('scrapy.contrib.downloadermiddleware.defaultheaders.DefaultHeadersMiddleware', 550)
2015-03-12 12:21:05-0600 [blackhat] DEBUG: Downloader Middleware: ('scrapy.contrib.downloadermiddleware.ajaxcrawl.AjaxCrawlMiddleware', 560)
2015-03-12 12:21:05-0600 [blackhat] DEBUG: Downloader Middleware: ('scrapy.contrib.downloadermiddleware.redirect.MetaRefreshMiddleware', 580)
2015-03-12 12:21:05-0600 [blackhat] DEBUG: Downloader Middleware: ('scrapy.contrib.downloadermiddleware.httpcompression.HttpCompressionMiddleware', 590)
2015-03-12 12:21:05-0600 [blackhat] DEBUG: Downloader Middleware: ('scrapy.contrib.downloadermiddleware.redirect.RedirectMiddleware', 600)
2015-03-12 12:21:05-0600 [blackhat] DEBUG: Downloader Middleware: ('netsec.middlewares.DenyDomainsMiddleware', 620)
2015-03-12 12:21:05-0600 [blackhat] DEBUG: Downloader Middleware: ('scrapylib.crawlera.CrawleraMiddleware', 650)
2015-03-12 12:21:05-0600 [blackhat] DEBUG: Downloader Middleware: ('scrapy.contrib.downloadermiddleware.cookies.CookiesMiddleware', 700)
2015-03-12 12:21:05-0600 [blackhat] DEBUG: Downloader Middleware: ('scrapy.contrib.downloadermiddleware.httpproxy.HttpProxyMiddleware', 750)
2015-03-12 12:21:05-0600 [blackhat] DEBUG: Downloader Middleware: ('netsec.middlewares.BanMiddleware', 800)
2015-03-12 12:21:05-0600 [blackhat] DEBUG: Downloader Middleware: ('scrapy.contrib.downloadermiddleware.chunked.ChunkedTransferMiddleware', 830)
2015-03-12 12:21:05-0600 [blackhat] DEBUG: Downloader Middleware: ('scrapy.contrib.downloadermiddleware.stats.DownloaderStats', 850)
2015-03-12 12:21:05-0600 [blackhat] DEBUG: Downloader Middleware: ('scrapy.contrib.downloadermiddleware.httpcache.HttpCacheMiddleware', 900)
2015-03-12 12:21:05-0600 [blackhat] DEBUG: Downloader Middleware: ('crawlfrontier.contrib.scrapy.middlewares.schedulers.SchedulerDownloaderMiddleware', 999)
2015-03-12 12:21:05-0600 [blackhat] DEBUG: SPIDER_MIDDLEWARES_BASE: {'scrapy.contrib.spidermiddleware.httperror.HttpErrorMiddleware': 50, 'scrapy.contrib.spidermiddleware.referer.RefererMiddleware': 700, 'scrapy.contrib.spidermiddleware.depth.DepthMiddleware': 900, 'scrapy.contrib.spidermiddleware.offsite.OffsiteMiddleware': 500, 'scrapy.contrib.spidermiddleware.urllength.UrlLengthMiddleware': 800}
2015-03-12 12:21:05-0600 [blackhat] DEBUG: SPIDER_MIDDLEWARES: {'streamitem.middlewares.ExporterMiddleware': 950, 'crawlfrontier.contrib.scrapy.middlewares.schedulers.SchedulerSpiderMiddleware': 999}
2015-03-12 12:21:05-0600 [blackhat] DEBUG: Spider Middleware: ('scrapy.contrib.spidermiddleware.httperror.HttpErrorMiddleware', 50)
2015-03-12 12:21:05-0600 [blackhat] DEBUG: Spider Middleware: ('scrapy.contrib.spidermiddleware.offsite.OffsiteMiddleware', 500)
2015-03-12 12:21:05-0600 [blackhat] DEBUG: Spider Middleware: ('scrapy.contrib.spidermiddleware.referer.RefererMiddleware', 700)
2015-03-12 12:21:05-0600 [blackhat] DEBUG: Spider Middleware: ('scrapy.contrib.spidermiddleware.urllength.UrlLengthMiddleware', 800)
2015-03-12 12:21:05-0600 [blackhat] DEBUG: Spider Middleware: ('scrapy.contrib.spidermiddleware.depth.DepthMiddleware', 900)
2015-03-12 12:21:05-0600 [blackhat] DEBUG: Spider Middleware: ('streamitem.middlewares.ExporterMiddleware', 950)
2015-03-12 12:21:05-0600 [blackhat] DEBUG: Spider Middleware: ('crawlfrontier.contrib.scrapy.middlewares.schedulers.SchedulerSpiderMiddleware', 999)
2015-03-12 12:21:05-0600 [blackhat] DEBUG: Spider field: custom_settings <type 'dict'> {'DEPTH_LIMIT': 3, 'DOWNLOAD_DELAY': 5}
2015-03-12 12:21:05-0600 [blackhat] DEBUG: Spider field: start_urls <type 'list'> ['http://blackhat.com/']
2015-03-12 12:21:05-0600 [blackhat] DEBUG: Spider field: target_domains <type 'list'> ['blackhat.com']
2015-03-12 12:21:05-0600 [blackhat] INFO: Spider opened
2015-03-12 12:21:05-0600 [-] ERROR: Unhandled error in Deferred:
2015-03-12 12:21:05-0600 [-] Unhandled Error
    Traceback (most recent call last):
      File "/home/stav/.virtualenvs/diffeo/local/lib/python2.7/site-packages/twisted/internet/defer.py", line 1253, in unwindGenerator
        return _inlineCallbacks(None, gen, Deferred())
      File "/home/stav/.virtualenvs/diffeo/local/lib/python2.7/site-packages/twisted/internet/defer.py", line 1107, in _inlineCallbacks
        result = g.send(result)
      File "/home/stav/.virtualenvs/diffeo/src/scrapy/scrapy/crawler.py", line 53, in crawl
        yield self.engine.open_spider(self.spider, start_requests)
      File "/home/stav/.virtualenvs/diffeo/local/lib/python2.7/site-packages/twisted/internet/defer.py", line 1253, in unwindGenerator
        return _inlineCallbacks(None, gen, Deferred())
    --- <exception caught here> ---
      File "/home/stav/.virtualenvs/diffeo/local/lib/python2.7/site-packages/twisted/internet/defer.py", line 1107, in _inlineCallbacks
        result = g.send(result)
      File "/home/stav/.virtualenvs/diffeo/src/scrapy/scrapy/core/engine.py", line 220, in open_spider
        scheduler = self.scheduler_cls.from_crawler(self.crawler)
      File "/home/stav/.virtualenvs/diffeo/src/crawl-frontier/crawlfrontier/contrib/scrapy/schedulers/frontier.py", line 85, in from_crawler
        return cls(crawler)
      File "/home/stav/.virtualenvs/diffeo/src/crawl-frontier/crawlfrontier/contrib/scrapy/schedulers/frontier.py", line 81, in __init__
        self.frontier = ScrapyFrontierManager(frontier_settings)
      File "/home/stav/.virtualenvs/diffeo/src/crawl-frontier/crawlfrontier/utils/managers.py", line 18, in __init__
        self.manager = FrontierManager.from_settings(settings)
      File "/home/stav/.virtualenvs/diffeo/src/crawl-frontier/crawlfrontier/core/manager.py", line 137, in from_settings
        settings=manager_settings)
      File "/home/stav/.virtualenvs/diffeo/src/crawl-frontier/crawlfrontier/core/manager.py", line 82, in __init__
        self._backend = self._load_object(backend)
      File "/home/stav/.virtualenvs/diffeo/src/crawl-frontier/crawlfrontier/core/manager.py", line 399, in _load_object
        return self._load_frontier_object(obj_class)
      File "/home/stav/.virtualenvs/diffeo/src/crawl-frontier/crawlfrontier/core/manager.py", line 406, in _load_frontier_object
        return obj_class.from_manager(self)
      File "/home/stav/.virtualenvs/diffeo/src/crawl-frontier/crawlfrontier/contrib/backends/memory/__init__.py", line 22, in from_manager
        return cls(manager)
      File "/home/stav/.virtualenvs/diffeo/src/hcf-backend/hcf_backend/backend.py", line 170, in __init__
        params = ParameterManager(manager)
      File "/home/stav/.virtualenvs/diffeo/src/hcf-backend/hcf_backend/utils.py", line 47, in __init__
        self.scrapy_settings = get_scrapy_settings(manager.extra)
    exceptions.AttributeError: 'FrontierManager' object has no attribute 'extra'

--project-id argument isn't considered by HCFPalScript

Issue

HCFPalScript doesn't consider different project ids passed via --project-id.

This is probably happening because the HCFPal() instance created inside HCFPalScript.__init__() isn't instantiated with the project id collected from the arguments in HCFPalScript.

Reproduce

$ python bin/hcfpal.py --project-id <project-id> list               
SHUB_JOBKEY not set: not running on ScrapyCloud.
Listing frontiers in project <project-id>:
    [frontiers from the default project id in scrapinghub.yml, not from  <project-id>]

Prioritize links in hcf-backend

    Thanks for the quick response. How exactly can those slots be leveraged to influence the order in which requests are processed?

I have 1 spider with both producer and consumer settings as shown below:

{
    'HCF_AUTH': _scrapy_cloud_key,
    'HCF_PROJECT_ID': project_id,
    'HCF_PRODUCER_FRONTIER': 'frontier',
    'HCF_PRODUCER_NUMBER_OF_SLOTS': 1,
    'HCF_PRODUCER_BATCH_SIZE': 300,
    'HCF_PRODUCER_SLOT_PREFIX': 'links',
    'HCF_CONSUMER_FRONTIER': 'frontier',
    'HCF_CONSUMER_SLOT': 'links0',
}

I am running the spider for an interval of 10 minutes at depth 1. For some URLs it finishes before 10 minutes, but some URLs take more time, so the consumer part of the crawler is not consuming all the links. The issue I am facing is that when I run the spider more than once, it does not start from the start URL but from a URL that was previously saved to the frontier (it reads a frontier batch before the start URL). Also, how many slots can be created inside a frontier?

Originally posted by @Nishant-Bansal-777 in #26 (comment)

Job settings aren't passed to jobs scheduled via HCFCrawlManager

Issue

No job settings are passed to jobs scheduled with HCFCrawlManager; only Frontera settings are. Job settings can still be passed to this manager via the script argument --job-settings (which it inherits from CrawlManager), but they aren't used.

Reproduce

I used the MyArticlesGraphManager from https://github.com/scrapinghub/shub-workflow/wiki/Graph-Managers-with-HCF (adapted to my project), and the scrapers task with the consumers didn't work as expected, as the consumer_settings weren't provided to the consumer spiders. For that example, it meant that the start requests weren't skipped.

HCFCrawlManager should only consider jobs that are part of the workflow

Background

HCFCrawlManager's main workflow loop checks running or pending jobs of the same spider to determine which slots are available.

    def workflow_loop(self):
        available_slots = self.print_frontier_status()

        running_jobs = 0
        states = "running", "pending"
        for state in states:
            for job in self.get_project().jobs.list(
                spider=self.args.spider, state=state, meta="spider_args"
            ):
                frontera_settings_json = json.loads(
                    job["spider_args"].get("frontera_settings_json", "{}")
                )
                if "HCF_CONSUMER_SLOT" in frontera_settings_json:
                    slot = frontera_settings_json["HCF_CONSUMER_SLOT"]
                    if slot in available_slots:
                        available_slots.discard(slot)
                        running_jobs += 1

        ...

Issue

That loop doesn't consider whether those jobs belong to the same workflow as the root script. Jobs of the same spider that run outside of HCFCrawlManager, or in another instance of HCFCrawlManager (using a different frontier, for example), are also considered. The first case is problematic because such jobs might not have spider_args, so the later access job["spider_args"] raises a KeyError. The second case is problematic because slots might be removed from the available list when they share names, even though they belong to another frontier.
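
A defensive variant of that loop is sketched below. This is only an illustration, not the shub-workflow implementation: it reuses the jobs.list() call from the snippet above, guards against jobs that carry no spider_args, and skips jobs whose frontier doesn't match (self.frontier is a hypothetical attribute standing for the frontier this manager instance operates on).

    def workflow_loop(self):
        # Sketch only: same structure as the loop above, but tolerant of
        # jobs that were not scheduled by this manager.
        available_slots = self.print_frontier_status()

        running_jobs = 0
        for state in ("running", "pending"):
            for job in self.get_project().jobs.list(
                spider=self.args.spider, state=state, meta="spider_args"
            ):
                # Jobs started outside the workflow may have no spider_args.
                spider_args = job.get("spider_args") or {}
                frontera_settings_json = json.loads(
                    spider_args.get("frontera_settings_json", "{}")
                )
                # Ignore consumers of a different frontier even if their
                # slot names collide with ours.
                if frontera_settings_json.get("HCF_CONSUMER_FRONTIER") != self.frontier:
                    continue
                slot = frontera_settings_json.get("HCF_CONSUMER_SLOT")
                if slot in available_slots:
                    available_slots.discard(slot)
                    running_jobs += 1

        ...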

Replicate

I ran a script similar to the MyArticlesGraphManager described in https://github.com/scrapinghub/shub-workflow/wiki/Graph-Managers-with-HCF, and when it was in the consumers/scrapers stage, I ran a regular job of the same spider outside the script. hcf_crawlmanager.py crashed because it considered that job, which didn't have a spider_args argument, and could not recover from it.

Prioritize hcf backend links

I am trying to set a priority on spider requests, with priority=10 for some URLs, but the frontier does not seem to obey this; instead it processes requests in FIFO order. Please suggest how to use priorities along with hcf-backend.

convert_from_bytes fails handling scrapy Headers properly

Version: 68e9cd9
Steps to reproduce:

>>> from hcf_backend.utils import convert_from_bytes
>>> from scrapy.http.headers import Headers
>>> hdrs = Headers()
>>> hdrs['foo'] = 'bar'
>>> hdrs
{'Foo': ['bar']}
>>> convert_from_bytes(hdrs)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "hcf_backend/utils/__init__.py", line 14, in convert_from_bytes
    return data_type(map(convert_from_bytes, data))
  File "/home/pengyu/temp/env2/lib/python2.7/site-packages/scrapy/http/headers.py", line 12, in __init__
    super(Headers, self).__init__(seq)
  File "/home/pengyu/temp/env2/lib/python2.7/site-packages/scrapy/utils/datatypes.py", line 193, in __init__
    self.update(seq)
  File "/home/pengyu/temp/env2/lib/python2.7/site-packages/scrapy/utils/datatypes.py", line 229, in update
    super(CaselessDict, self).update(iseq)
  File "/home/pengyu/temp/env2/lib/python2.7/site-packages/scrapy/utils/datatypes.py", line 228, in <genexpr>
    iseq = ((self.normkey(k), self.normvalue(v)) for k, v in seq)
ValueError: too many values to unpack
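
The failure comes from treating the dict-like Headers object as a plain sequence in hcf_backend/utils/__init__.py: iterating a mapping yields keys only, so rebuilding the Headers from that iterator fails inside CaselessDict.update. Below is a hedged sketch of what a mapping-aware converter could look like; it is an illustration, not the project's actual code, and it assumes the converter only needs to handle bytes, mappings, and sequences.

# Sketch of a mapping-aware convert_from_bytes. The function name follows
# the traceback above, but the body is an assumption about a possible fix.
def convert_from_bytes(data):
    data_type = type(data)
    if isinstance(data, bytes):
        return data.decode("utf8")
    if isinstance(data, dict):
        # Covers scrapy.http.headers.Headers too (a CaselessDict subclass):
        # rebuild from key/value pairs instead of iterating keys only.
        return data_type(
            (convert_from_bytes(key), convert_from_bytes(value))
            for key, value in data.items()
        )
    if isinstance(data, (list, tuple)):
        return data_type(map(convert_from_bytes, data))
    return data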

Potential problem with reading batches when batches are deleted only on consumer close?

Good day. Let's say we have a million requests inside a slot, and the consumer defines either HCF_CONSUMER_MAX_REQUESTS = 15000 or HCF_CONSUMER_MAX_BATCHES = 150, or it just closes itself after N hours. It also defines HCF_CONSUMER_DELETE_BATCHES_ON_STOP = True, so it only purges batches upon exiting.

In this case, since as far as I can tell there is no pagination for scrapycloud_frontier_slot.queue.iter(mincount), won't the consumer iterate over only the initial MAX_NEXT_REQUESTS, reading them over and over until it reaches either the max requests, the max batches, or the self-enforced time limit?
