forum-informationsfreiheit / offenesparlament
OffenesParlament.at
License: Other
Steps to reproduce:
Search for current LLP, Suchtyp: Gesetze, Kategorie: "Regierungsvorlage: Bundes(verfassungs)gesetz"
The server seems to understand the query correctly, as it logs this to the console:
INFO:offenesparlament.views.search:Searching <class 'op_scraper.models.Law'> with arguments [{'limit': 50, 'facet_filters': {'category': u'Regierungsvorlage: Bundes(verfassungs)gesetz', 'llps': u'XXV'}, 'offset': 0}]
The same happens for "Vorl. ü. Initiative/Beschluss des Europ. Rates und des Rates" and "Regierungsvorlage: Staatsvertrag", while /search?llps=XXV and /gesetze/search?only_facets=1&llps=XXV both return results as expected (so there should be data in ES that the more generic /search should find).
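A plausible cause (an assumption, not confirmed against the code): the category value contains characters such as `(`, `)` and `:`, which are reserved in Lucene/Elasticsearch query-string syntax, so an unescaped facet filter matches nothing. A minimal escaping sketch; `escape_query` is a hypothetical helper, and the real fix may belong in the search view instead:

```python
import re

# Characters (and the && / || operators) that Lucene's query-string
# syntax treats as special and that therefore need a leading backslash.
LUCENE_SPECIAL = re.compile(r'([+\-!(){}\[\]^"~*?:\\/]|&&|\|\|)')

def escape_query(value):
    """Backslash-escape Lucene special characters in a facet value."""
    return LUCENE_SPECIAL.sub(r'\\\1', value)

# The category from the log line above, escaped:
assert escape_query("Regierungsvorlage: Bundes(verfassungs)gesetz") == \
    "Regierungsvorlage\\: Bundes\\(verfassungs\\)gesetz"
```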
When trying to set up an email alert, I get the message:
"Benachrichtigungen abonnieren
Ihr Abo für Periode XXV: amtsgeheimnis konnte nicht eingerichtet werden. Bitte versuchen Sie es erneut."
I searched for "amtsgeheimnis" on the front page, then tried to set up an alert for that search.
The 'unique_together' meta info for the committees doesn't work like that. For BR committees it might make sense, but since the meta attribute applies to NR committees as well, we get duplicates that are identical except for the status. This needs to be fixed: subsequent runs of the person scraper produce errors when trying to update-or-create committees, since the lookup then returns two results for the same committee.
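A minimal sketch of the direction a fix could take, assuming the scraper uses Django's update-or-create pattern: keep volatile fields such as `status` out of the lookup key and pass them as defaults. All field names here are assumptions, and a plain-Python stand-in replaces the real ORM so the example is self-contained:

```python
def update_or_create(store, lookup, defaults):
    """Minimal stand-in for Django's QuerySet.update_or_create():
    the lookup dict identifies the row, defaults carry volatile data."""
    key = tuple(sorted(lookup.items()))
    created = key not in store
    store[key] = dict(lookup, **defaults)
    return store[key], created

store = {}
# First scrape: committee exists with status "aktiv".
update_or_create(store,
                 {"parl_id": "A-AU_00001", "llp": "XXV", "nrbr": "NR"},
                 defaults={"status": "aktiv"})
# Second scrape: status changed, same lookup key -> row updated, no duplicate.
obj, created = update_or_create(store,
                                {"parl_id": "A-AU_00001", "llp": "XXV", "nrbr": "NR"},
                                defaults={"status": "aufgelöst"})
assert created is False
assert len(store) == 1
```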
bootstrap.sh fails, apparently in one of the "pip install" stages, with:
No module named pkg_resources
Adding links to Personen or Gesetze within statements made during debates in the Bundesrat and Nationalrat would be great.
For example in
http://offenesparlament.at/debatten/XXV/BR/840 and
http://offenesparlament.at/debatten/XXV/NR/67
Scraper: € 200
Frontend: € 200
Scrape the Lobbyregister and check (via fuzzy matching) whether Stellungnahmen on draft laws and BürgerInneninitiativen originate from organisations registered in the Lobbyregister.
Looks like offenesparlament/offenesparlament/assets was moved to /clients.
If this is right, we should fix it in
https://github.com/fin/OffenesParlament/blob/master/offenesparlament/docs/source/frontend.rst
Data presentation and data consolidation
Frontend: € 400
A free-form code bounty!
Do you have exciting ideas for data visualisations that could be integrated into OffenesParlament.at?
Can different areas of parliamentary activity that aren't yet connected be brought together?
Here's your chance to try!
Acceptance of this code bounty is quite subjective; we're happy to give feedback on design ideas!
Perhaps pin a specific version of django-reversion; it changes some module paths between 1.8 and the latest release: reversion.admin.VersionAdmin (latest) vs. reversion.VersionAdmin (1.8).
http://django-reversion.readthedocs.org/en/latest/admin.html
http://django-reversion.readthedocs.org/en/release-1.8.1/admin.html
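A hedged requirements.txt fragment for the pin; the assumption here is that the codebase still targets the 1.8-era `reversion.VersionAdmin` import path:

```
# pin the 1.8 line until imports are migrated to reversion.admin.VersionAdmin
django-reversion>=1.8,<1.9
```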
There appears to be an issue with changing the Legislaturperiode in the search.
=> I go to http://offenesparlament.at/debatten/XXV/ and try to search in a previous Gesetzgebungsperiode, but there is no option to remove or change the current Gesetzgebungsperiode in the search.
The status of schriftliche Anfragen appears to always show "offen", even for old ones for which there is a response. Probably connected to the broken-links issue #64.
To replicate the issue, go to http://offenesparlament.at/schlagworte/Wirtschaftspolitik/
=> you see that all Schriftliche Anfragen have the status "offen"
I searched for "hypo" and got several search results dated 01.01.1970, which does not appear to be correct. Maybe we can fix this, or, if no date is available, say "nicht verfügbar" or leave it blank?
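The 01.01.1970 dates look like a Unix-epoch default leaking through to the result list. A minimal sketch of the suggested fallback rendering; the function name and fallback text are illustrative:

```python
from datetime import date

EPOCH = date(1970, 1, 1)  # common "no data" default leaking into results

def format_result_date(d):
    """Render a result date; treat None and the Unix epoch as missing."""
    if d is None or d == EPOCH:
        return "nicht verfügbar"
    return d.strftime("%d.%m.%Y")

assert format_result_date(None) == "nicht verfügbar"
assert format_result_date(date(1970, 1, 1)) == "nicht verfügbar"
assert format_result_date(date(2015, 3, 24)) == "24.03.2015"
```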
Is that possible?
Subscribing to a detail page (like the Person detail page for Rosa Ecker) doesn't produce a proper single-request link based on the parl_id (or parl_id and llp for laws); instead it uses the generic 'personen' search, which results in this link:
http://offenesparlament.vm:8000/personen/search?llps=XXV&type=Personen&limit=-1&fieldset=all
That of course isn't correct and causes problems when trying to subscribe and/or collect the changes.
This should enable us to create test/dev/deploy configs without having to maintain a completely separate local_settings outside the repo.
further reading:
examples:
thoughts?
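A sketch of the idea, assuming the class-based settings style of django-configurations (which is already pulled in via requirements.txt): one base class, with per-environment subclasses overriding only what differs. Plain classes are used here so the example is self-contained, and all setting names are illustrative:

```python
# Illustrative class-based settings in the django-configurations style:
# Dev/Production inherit everything from Base and override only deltas.

class Base:
    DEBUG = False
    ELASTICSEARCH_URL = "http://localhost:9200"

class Dev(Base):
    DEBUG = True

class Production(Base):
    ALLOWED_HOSTS = ["offenesparlament.at"]

assert Dev.DEBUG is True
assert Production.DEBUG is False
assert Production.ELASTICSEARCH_URL == Base.ELASTICSEARCH_URL
```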
When I'm on a detail page (for instance, this one) and change/edit the search bar in any way, I get the search results again (so far, I think that's intended behaviour). But then these things happen:
This shouldn't happen, I think.
ERROR:root:Error updating op_scraper using default
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/haystack/management/commands/update_index.py", line 188, in handle_label
self.update_backend(label, using)
File "/usr/local/lib/python2.7/dist-packages/haystack/management/commands/update_index.py", line 233, in update_backend
do_update(backend, index, qs, start, end, total, verbosity=self.verbosity, commit=self.commit)
File "/usr/local/lib/python2.7/dist-packages/haystack/management/commands/update_index.py", line 96, in do_update
backend.update(index, current_qs, commit=commit)
File "/usr/local/lib/python2.7/dist-packages/haystack/backends/elasticsearch_backend.py", line 166, in update
prepped_data = index.full_prepare(obj)
File "/usr/local/lib/python2.7/dist-packages/haystack/indexes.py", line 212, in full_prepare
self.prepared_data = self.prepare(obj)
File "/usr/local/lib/python2.7/dist-packages/haystack/indexes.py", line 203, in prepare
self.prepared_data[field.index_fieldname] = field.prepare(obj)
File "/usr/local/lib/python2.7/dist-packages/haystack/fields.py", line 103, in prepare
raise SearchFieldError("The model '%s' combined with model_attr '%s' returned None, but doesn't allow a default or null value." % (repr(obj), self.model_attr))
SearchFieldError: The model '<Person: Hafenecker Christian, MA>' combined with model_attr 'ts' returned None, but doesn't allow a default or null value.
Traceback (most recent call last):
File "manage.py", line 21, in <module>
run()
File "manage.py", line 14, in run
execute_from_command_line(sys.argv)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py", line 338, in execute_from_command_line
utility.execute()
File "/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py", line 330, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/base.py", line 390, in run_from_argv
self.execute(*args, **cmd_options)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/base.py", line 441, in execute
output = self.handle(*args, **options)
File "/usr/local/lib/python2.7/dist-packages/haystack/management/commands/rebuild_index.py", line 26, in handle
call_command('update_index', **options)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py", line 120, in call_command
return command.execute(*args, **defaults)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/base.py", line 441, in execute
output = self.handle(*args, **options)
File "/usr/local/lib/python2.7/dist-packages/haystack/management/commands/update_index.py", line 183, in handle
return super(Command, self).handle(*items, **options)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/base.py", line 619, in handle
label_output = self.handle_label(label, **options)
File "/usr/local/lib/python2.7/dist-packages/haystack/management/commands/update_index.py", line 188, in handle_label
self.update_backend(label, using)
File "/usr/local/lib/python2.7/dist-packages/haystack/management/commands/update_index.py", line 233, in update_backend
do_update(backend, index, qs, start, end, total, verbosity=self.verbosity, commit=self.commit)
File "/usr/local/lib/python2.7/dist-packages/haystack/management/commands/update_index.py", line 96, in do_update
backend.update(index, current_qs, commit=commit)
File "/usr/local/lib/python2.7/dist-packages/haystack/backends/elasticsearch_backend.py", line 166, in update
prepped_data = index.full_prepare(obj)
File "/usr/local/lib/python2.7/dist-packages/haystack/indexes.py", line 212, in full_prepare
self.prepared_data = self.prepare(obj)
File "/usr/local/lib/python2.7/dist-packages/haystack/indexes.py", line 203, in prepare
self.prepared_data[field.index_fieldname] = field.prepare(obj)
File "/usr/local/lib/python2.7/dist-packages/haystack/fields.py", line 103, in prepare
raise SearchFieldError("The model '%s' combined with model_attr '%s' returned None, but doesn't allow a default or null value." % (repr(obj), self.model_attr))
haystack.exceptions.SearchFieldError: The model '<Person: Hafenecker Christian, MA>' combined with model_attr 'ts' returned None, but doesn't allow a default or null value.
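The simplest fix is probably to declare the `ts` field on the haystack index with `null=True` (haystack search fields accept this), or alternatively to supply a `prepare_ts()` fallback on the index. A plain-Python sketch of the fallback logic only, so it stands alone; the sentinel value is an assumption, and `null=True` is likely the cleaner option:

```python
from datetime import datetime

def prepare_ts(person_ts):
    """Mirror of what a prepare_ts() method on the haystack index could do:
    return the model's ts, or a sentinel when the scraper left it None,
    so one bad record doesn't abort the whole index rebuild."""
    return person_ts if person_ts is not None else datetime.min

assert prepare_ts(None) == datetime.min
assert prepare_ts(datetime(2015, 6, 1)) == datetime(2015, 6, 1)
```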
The current petitions scraper seems to only scan the last 1000 Zustimmungen; apparently the URL options aren't set properly. Compare this petition:
http://www.parlament.gv.at/PAKT/VHG/XXV/PET/PET_00009/index.shtml#tab-Zustimmungserklaerungen
The button 'Alle Zustimmungen' has to be clicked; otherwise only the last 1000 are shown.
A search only appears to show results related to Gesetze, Anträge or Schriftliche Anfragen; it doesn't appear to include results from debates of the Nationalrat and Bundesrat.
For example, I searched for "hypo" and did not get any results
http://offenesparlament.at/suche/debatten?llps=XXV&q=hypo
I searched for "hypo"; all the links to the search results appear to be broken.
Under this issue, I would like to collect various suggestions that we could make to the administration of Parliament on how they could improve the quality of their data, and thus facilitate re-use.
=> Standardized formatting for dates, especially on the profile pages of MPs (birthday, day of death, Funktionen)
Results should contain an internal_link that links to the detail page.
A default install gives me
ProgrammingError: relation "celery_taskmeta" does not exist
LINE 1: ...taskmeta"."hidden", "celery_taskmeta"."meta" FROM "celery_ta...
when running scrapers from the admin console
django-celery still seems to be required for using the Django ORM as the celery results backend.
It would be great if we could display the most current party affiliation of an Abgeordnete(r) at http://offenesparlament.at/personen/XXV/ and at the top of their profile.
For example: Marcus Franz has been a member of Team Stronach, then of ÖVP, and is now OK, but the top of his profile still lists him as Team Stronach:
http://offenesparlament.at/personen/PAD_83141/Franz-Marcus-Dr-/
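A sketch of picking the most recent affiliation from a person's mandate history: an open end date counts as current, otherwise the latest start date wins. The tuple layout and the dates below are illustrative, not the project's actual model:

```python
from datetime import date

def current_party(mandates):
    """mandates: iterable of (party, start_date, end_date_or_None)."""
    def sort_key(m):
        party, start, end = m
        # Open-ended mandates rank above closed ones, then latest start wins.
        return (end is None, start)
    return max(mandates, key=sort_key)[0]

# Illustrative history in the spirit of the Marcus Franz example above:
mandates = [
    ("STRONACH", date(2013, 10, 29), date(2015, 7, 7)),
    ("ÖVP", date(2015, 7, 8), date(2016, 8, 31)),
    ("OK", date(2016, 9, 1), None),
]
assert current_party(mandates) == "OK"
```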
I just ran into the following bug: https://code.djangoproject.com/ticket/24513
Upgrading to 1.8.7 fixed it.
I suggest changing the Django version in requirements.txt to: Django>=1.8.0,<1.9
Thanks
When I open the person search page and then select the search bar, one request is made to fetch the available facets, as expected. But as soon as I then select the 'party' facet to add a party filter, an AJAX request like this is made:
http://offenesparlament.vm:8000/personen/search?llps=XXV&type=Personen&party=
This is not only unnecessary but also quite costly, since it returns a large amount of data that's not necessarily relevant to the list (with the new addition of debates, this can be a request of up to 30 MB!). If this behaviour also exists for laws, the amount of data to transfer might be even bigger.
In my opinion, we shouldn't make a new AJAX request to the search view before the user has selected a value for the facet in question (i.e. not before they actually select the party they want to filter by).
CommandError: Conflicting migrations detected (0004_auto_20150814_1941, 0005_keyword__title_urlsafe in op_scraper).
To fix them run 'python manage.py makemigrations --merge'
Can you reproduce this?
Given that we have to split the changes we collect for different subscriptions into one of the four categories person, law, debate or search, I added a new field named category to the SubscribedContent model, which I need to set when creating a new subscription. For this I need to know what's being subscribed to, so I need a POST parameter named 'category' containing one of those four categories:
The first three obviously refer to a single-result search; the last one can contain search results of any type (I have to figure this out in the subscription-changes code myself anyway).
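A minimal sketch of validating that POST parameter server-side; the category names come from the issue, but the helper itself is hypothetical:

```python
# The four subscription categories named in the issue.
CATEGORIES = {"person", "law", "debate", "search"}

def validate_category(post_data):
    """Return the 'category' POST parameter, or raise if missing/unknown."""
    category = post_data.get("category")
    if category not in CATEGORIES:
        raise ValueError("category must be one of %s" % sorted(CATEGORIES))
    return category

assert validate_category({"category": "person"}) == "person"
```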
Infrastruktur: € 500
When parsing texts, we want to automatically link certain entities.
Infrastructure that enables this is part of this bounty.
Scraper: € 200
Wherever parliamentarians are mentioned by name but not linked, they should be linked.
Scraper: € 200
Rewrite parlament.gv.at URLs to OffenesParlament.at URLs
Scraper: 500 €
Frontend: 200 €
When a Gesetz is mentioned, it should be recognised as such.
Currently, the petitions scraper still throws the occasional exception, for instance:
ERROR:scrapy.core.scraper:Spider error processing <GET http://www.parlament.gv.at/PAKT/VHG/XXV/BI/BI_00058/index.shtml> (referer: None)
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/twisted/internet/defer.py", line 588, in _runCallbacks
current.result = callback(current.result, *args, **kw)
File "/vagrant/offenesparlament/op_scraper/scraper/parlament/spiders/petitions.py", line 174, in parse
petition_creators = self.parse_creators(response)
File "/vagrant/offenesparlament/op_scraper/scraper/parlament/spiders/petitions.py", line 442, in parse_creators
creators = PETITION.CREATORS.xt(response)
File "/vagrant/offenesparlament/op_scraper/scraper/parlament/resources/extractors/petition.py", line 54, in xt
parl_id = creator_sel.xpath("//a/@href").extract()[0].split("/")[2]
IndexError: list index out of range
While it's OK that some things fail during scraping, we need to catch all exceptions; otherwise Django Reversion stops the database commits and nothing that was scraped ends up saved.
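One way to sketch this, assuming the spider's extractor calls can be funneled through a wrapper that logs instead of raising; `safe_extract` and the stand-in extractor below are hypothetical, not the project's actual code:

```python
import logging

logger = logging.getLogger(__name__)

def safe_extract(extractor, response, default=None):
    """Run an extractor; on failure, log with traceback and return a default
    so one malformed page doesn't abort the reversion-wrapped scrape."""
    try:
        return extractor(response)
    except Exception:
        logger.warning("extractor %s failed for %s",
                       getattr(extractor, "__name__", extractor),
                       response, exc_info=True)
        return default

def broken(response):
    # Stand-in for PETITION.CREATORS.xt() hitting an empty xpath result.
    raise IndexError("list index out of range")

assert safe_extract(broken, "http://example/petition", default=[]) == []
```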
When reindexing via python manage.py rebuild_index, the following error occurs after a few minutes:
WARNING:elasticsearch:POST http://localhost:9200/haystack/modelresult/_bulk [status:N/A request:1.792s]
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/elasticsearch/connection/http_urllib3.py", line 74, in perform_request
response = self.pool.urlopen(method, url, body, retries=False, headers=self.headers, **kw)
File "/usr/local/lib/python2.7/dist-packages/urllib3/connectionpool.py", line 608, in urlopen
_stacktrace=sys.exc_info()[2])
File "/usr/local/lib/python2.7/dist-packages/urllib3/util/retry.py", line 224, in increment
raise six.reraise(type(error), error, _stacktrace)
File "/usr/local/lib/python2.7/dist-packages/urllib3/connectionpool.py", line 558, in urlopen
body=body, headers=headers)
File "/usr/local/lib/python2.7/dist-packages/urllib3/connectionpool.py", line 353, in _make_request
conn.request(method, url, **httplib_request_kw)
File "/usr/lib/python2.7/httplib.py", line 966, in request
self._send_request(method, url, body, headers)
File "/usr/lib/python2.7/httplib.py", line 1000, in _send_request
self.endheaders(body)
File "/usr/lib/python2.7/httplib.py", line 962, in endheaders
self._send_output(message_body)
File "/usr/lib/python2.7/httplib.py", line 822, in _send_output
self.send(msg)
File "/usr/lib/python2.7/httplib.py", line 798, in send
self.sock.sendall(data)
File "/usr/lib/python2.7/socket.py", line 224, in meth
return getattr(self._sock,name)(*args)
ProtocolError: ('Connection aborted.', error(104, 'Connection reset by peer'))
INFO:urllib3.connectionpool:Starting new HTTP connection (2): localhost
This might be linked to the chunk size; the error happens around the indexing of debate_statements.
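If the installed haystack version supports it, `rebuild_index --batch-size` may already shrink the bulk requests; failing that, the idea is simply to index in smaller chunks so a single huge `_bulk` request doesn't get the connection reset. A generic chunking sketch; the size of 100 is a guess to tune, not a known-good value:

```python
def chunked(items, size):
    """Yield successive slices of at most `size` items."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

# With ~1050 debate-statement docs and chunks of 100, we'd issue 11 smaller
# bulk requests instead of one giant one.
docs = list(range(1050))
chunks = list(chunked(docs, 100))
assert len(chunks) == 11
assert sum(len(c) for c in chunks) == 1050
```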
The logging setup for production should be adapted so that we (an email address of our choice, or a mailing list) receive an email with the error and the link whenever a stacktrace occurs. This would make debugging errors that occur in production much easier.
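Django ships this out of the box: `django.utils.log.AdminEmailHandler` mails unhandled-exception stacktraces (including the request URL) to everyone listed in `ADMINS`. A hedged settings sketch; the logger names and options would need adapting to the project:

```python
# Sketch of a production LOGGING config: mail ERROR-level request
# stacktraces to the addresses in the ADMINS setting.
LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "handlers": {
        "mail_admins": {
            "level": "ERROR",
            "class": "django.utils.log.AdminEmailHandler",
            "include_html": True,  # attach the full debug page
        },
    },
    "loggers": {
        "django.request": {
            "handlers": ["mail_admins"],
            "level": "ERROR",
            "propagate": False,
        },
    },
}

assert LOGGING["handlers"]["mail_admins"]["level"] == "ERROR"
```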
Scraper: € 600
Frontend: € 200
Petitionen are introduced by at least five parliamentarians.
Bürgerinitiativen are introduced by at least 500 citizens (signed by hand).
Once introduced, Petitionen & BIs can be "signed" via the parliament website.
Entry page: http://www.parlament.gv.at/PAKT/BB/
Example page (BI): http://www.parlament.gv.at/PAKT/VHG/XXV/BI/BI_00083/index.shtml
Example page (Pet): http://www.parlament.gv.at/PAKT/VHG/XXV/PET/PET_00032/index.shtml
To scrape:
Number of Zustimmungen; which parties were for/against; once concluded:
http://www.parlament.gv.at/PAKT/VHG/XXV/PET/PET_00032/index.shtml#tab-ParlamentarischesVerfahren
Scraper: 2200
Frontend: 1000
Entry page: http://www.parlament.gv.at/PAKT/STPROT/
Detail page (example): http://www.parlament.gv.at/PAKT/VHG/XXV/NRSITZ/NRSITZ_00072/fnameorig_447647.html
To do:
Note: because of the data volume, special attention must be paid to performance; if scraping takes too long, a solution for asynchronously triggering the scraping of individual protocols must be developed (to avoid overloading the server).
Feeding the scraped statements into ElasticSearch will be handled by the OffenesParlament team!
Tasks:
Frontend: € 500
The legislative process is not without pitfalls. We want a meaningful way of showing how far along a law is in this process, and what still lies ahead of it.
Challenges:
Acceptance of this code bounty is quite subjective; we're happy to give feedback on design ideas!
The debate pages of Bundesrat sessions and Nationalrat sessions need improved formatting:
– pictures of speakers have various sizes
– I guess the text of spoken statements should be next to the picture and name of the speaker, not below
For debates of the Bundesrat there is a list of all speakers at the top; that is not the case for debates of the Nationalrat.
http://offenesparlament.at/debatten/XXV/NR/67
http://offenesparlament.at/debatten/XXV/BR/840
In many cases (Mandate, Ausschüsse, Sitzungsprotokolle) it would be good to have a central model for the chambers (Nationalrat, Bundesrat), so it can be linked to other models.
The Function model is, in my opinion, currently a bit ugly (and therefore hard to reuse in other parts of the code, e.g. for Ausschüsse).
A few ideas:
(actually an array of strings that contain a JSON array)
eg "['1986-12-17 - 1990-11-04 (XVII)', '1983-05-19 - 1986-12-16 (XVI)', '1979-06-05 - 1983-05-18 (XV)']" instead of simply "1986-12-17 - 1990-11-04 (XVII)"
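Until the scraper is fixed, the stringified list can at least be unpacked before display. A sketch using `ast.literal_eval`, which tolerates the single-quoted form shown above; the field layout is taken directly from the example:

```python
import ast

def split_mandates(value):
    """Unpack a scraped field that holds a list literal as a string."""
    parsed = ast.literal_eval(value)
    return [str(item) for item in parsed]

raw = "['1986-12-17 - 1990-11-04 (XVII)', '1983-05-19 - 1986-12-16 (XVI)']"
assert split_mandates(raw) == [
    "1986-12-17 - 1990-11-04 (XVII)",
    "1983-05-19 - 1986-12-16 (XVI)",
]
```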
So, this is a tricky one.
This code in settings.py is problematic:
# Import scrapy settings
c = os.getcwd()
os.chdir(str(c) + '/op_scraper/scraper')
d = os.getcwd()
path.append(d)
os.chdir(c)
d = os.getcwd()
os.environ['SCRAPY_SETTINGS_MODULE'] = 'parlament.settings'
Especially problematic is the os.chdir, since this affects every import made after the execution of settings.py.
What's currently happening, I think, is that every "import celery" after this imports offenesparlament/celery.py instead of the global celery module.
However, that's not the cause of the problem I'm currently debugging, just an annoying side effect. Can we fix this anyway?
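A sketch of getting the same `sys.path` effect without `os.chdir`, by deriving the scraper directory from the settings module's own location; the helper function is illustrative, and in settings.py one would pass `__file__`:

```python
import os
import sys

def scraper_path(settings_file):
    """Absolute path of op_scraper/scraper relative to settings.py,
    computed without ever changing the process working directory."""
    base_dir = os.path.dirname(os.path.abspath(settings_file))
    return os.path.join(base_dir, "op_scraper", "scraper")

# In settings.py: p = scraper_path(__file__)
p = scraper_path("offenesparlament/settings.py")
assert p.endswith(os.path.join("op_scraper", "scraper"))

if p not in sys.path:
    sys.path.append(p)
os.environ["SCRAPY_SETTINGS_MODULE"] = "parlament.settings"
```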
I followed the instructions up to vagrant up. The system packages installation looks good, but during the pip install steps I get the error:
==> offenesparlament: Obtaining django-configurations-head from git+https://github.com/jezdez/django-configurations.git@5ece107044#egg=django-configurations-head (from -r requirements.txt (line 30))
==> offenesparlament: Directory /vagrant/src/django-configurations-head already exists, and is not a git clone.
==> offenesparlament: The plan is to install the git repository https://github.com/jezdez/django-configurations.git
==> offenesparlament: Exception:
==> offenesparlament: Traceback (most recent call last):
==> offenesparlament: File "/usr/local/lib/python2.7/dist-packages/pip/basecommand.py", line 211, in main
==> offenesparlament: status = self.run(options, args)
==> offenesparlament: File "/usr/local/lib/python2.7/dist-packages/pip/commands/install.py", line 294, in run
==> offenesparlament: requirement_set.prepare_files(finder)
==> offenesparlament: File "/usr/local/lib/python2.7/dist-packages/pip/req/req_set.py", line 334, in prepare_files
==> offenesparlament: functools.partial(self._prepare_file, finder))
==> offenesparlament: File "/usr/local/lib/python2.7/dist-packages/pip/req/req_set.py", line 321, in _walk_req_to_install
==> offenesparlament: more_reqs = handler(req_to_install)
==> offenesparlament: File "/usr/local/lib/python2.7/dist-packages/pip/req/req_set.py", line 433, in _prepare_file
==> offenesparlament: req_to_install.update_editable(not self.is_download)
==> offenesparlament: File "/usr/local/lib/python2.7/dist-packages/pip/req/req_install.py", line 573, in update_editable
==> offenesparlament: vcs_backend.obtain(self.source_dir)
==> offenesparlament: File "/usr/local/lib/python2.7/dist-packages/pip/vcs/git.py", line 109, in obtain
==> offenesparlament: if self.check_destination(dest, url, rev_options, rev_display):
==> offenesparlament: File "/usr/local/lib/python2.7/dist-packages/pip/vcs/__init__.py", line 241, in check_destination
==> offenesparlament: prompt[1])
==> offenesparlament: File "/usr/local/lib/python2.7/dist-packages/pip/utils/__init__.py", line 135, in ask_path_exists
==> offenesparlament: return ask(message, options)
==> offenesparlament: File "/usr/local/lib/python2.7/dist-packages/pip/utils/__init__.py", line 146, in ask
==> offenesparlament: response = input(message)
==> offenesparlament: EOFError: EOF when reading a line
==> offenesparlament: What to do? (i)gnore, (w)ipe, (b)ackup Your response ('exit') was not one of the expected responses: i, w, b
which later leads to this
==> offenesparlament: Successfully installed Django-1.8.6 Jinja2-2.8 MarkupSafe-0.23 Pygments-2.0.2 alabaster-0.7.6 babel-2.1.1 django-debug-toolbar-1.4 django-debug-toolbar-template-timings-0.6.4 docutils-0.12 pytz-2015.7 six-1.10.0 snowballstemmer-1.2.0 sphinx-1.3.1 sphinx-rtd-theme-0.1.9 sqlparse-0.1.18
==> offenesparlament: Traceback (most recent call last):
==> offenesparlament: File "manage.py", line 21, in <module>
==> offenesparlament:
==> offenesparlament: run()
==> offenesparlament: File "manage.py", line 13, in run
==> offenesparlament:
==> offenesparlament: from configurations.management import execute_from_command_line
==> offenesparlament: ImportError
==> offenesparlament: : No module named configurations.management
==> offenesparlament: Traceback (most recent call last):
==> offenesparlament: File "manage.py", line 21, in <module>
==> offenesparlament:
==> offenesparlament: run()
==> offenesparlament: File "manage.py", line 13, in run
==> offenesparlament:
==> offenesparlament: from configurations.management import execute_from_command_line
==> offenesparlament: ImportError
==> offenesparlament: :
==> offenesparlament: No module named configurations.management
==> offenesparlament: Traceback (most recent call last):
==> offenesparlament: File "./manage.py", line 21, in <module>
==> offenesparlament:
==> offenesparlament: run()
==> offenesparlament: File "./manage.py", line 13, in run
==> offenesparlament:
==> offenesparlament: from configurations.management import execute_from_command_line
==> offenesparlament: ImportError
==> offenesparlament: :
==> offenesparlament: No module named configurations.management
The SSH command responded with a non-zero exit status. Vagrant
assumes that this means the command failed. The output for this command
should be in the log above. Please read the output to determine what
went wrong.
Any ideas on what could be going wrong?
When trying to access schriftliche Anfragen, for example those listed under a specific Schlagwort, the links to the individual Anfragen appear to be broken:
=> http://offenesparlament.at/schlagworte/Wirtschaftspolitik/
=> click on any "Schriftliche Anfrage"
Point 8 of the vagrant setup instructions is to run grunt in /vagrant, however:
(1) it complains that grunt-contrib-watch is not installed
(2) after npm install, it complains that Ruby and Sass are needed for things to work:
Warning:
You need to have Ruby and Sass installed and in your PATH for this task to work.
More info: https://github.com/gruntjs/grunt-contrib-sass
Used --force, continuing.
Warning: spawn ENOENT Used --force, continuing.
Warning: spawn ENOENT Used --force, continuing.
Could you shed some light on this?
This is not important, but would it make sense to bundle our current ES search endpoints and more formally make them the first part of our API?
So instead of /search, /gesetze/search, /personen/search we would have something like /api/v1/search/[searchtype]?
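A minimal sketch of the mapping such a move would imply; the `/api/v1/` prefix and endpoint names are just the proposal from above, nothing implemented:

```python
# Proposed bundling of the existing ES search endpoints under one prefix.
API_PREFIX = "/api/v1/search"

LEGACY_TO_API = {
    "/search": API_PREFIX + "/all",
    "/gesetze/search": API_PREFIX + "/gesetze",
    "/personen/search": API_PREFIX + "/personen",
}

assert LEGACY_TO_API["/gesetze/search"] == "/api/v1/search/gesetze"
```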
Scraper: 400
Frontend: 400
Create the Ausschuss list and the link between the Ausschuss model and the parliamentarian model.
Entry page: http://www.parlament.gv.at/PAKT/AUS/
Detail page: http://www.parlament.gv.at/PAKT/VHG/XXV/A-AU/A-AU_00001_00361/index.shtml
Tips:
Note:
Example page for committee members: http://www.parlament.gv.at/PAKT/VHG/XXV/A-AU/A-AU_00001_00361/MIT_00361.html
or
http://www.parlament.gv.at/WWER/PAD_51564/#tab-Ausschuesse
Better to start from the parliamentarian side. Note: positions can lie in the past; please parse those anyway (: In general it will make sense to create a dedicated model for committee membership (e.g. Membership), since that also allows a start and end date per person/committee.
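A sketch of such a Membership model's shape, reduced to a namedtuple so it stands alone; all field names are assumptions:

```python
from collections import namedtuple
from datetime import date

# Through-model between a person and a committee, with start/end dates;
# an open end date means the membership is current.
Membership = namedtuple("Membership", ["person_id", "committee_id", "start", "end"])

def is_current(membership):
    return membership.end is None

m = Membership("PAD_51564", "A-AU_00001", date(2013, 12, 17), None)
assert is_current(m)

past = m._replace(end=date(2015, 10, 28))
assert not is_current(past)
```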
Tasks:
Scraper: 500
Frontend: 100
Tasks:
Example page, Tagesordnungen:
http://www.parlament.gv.at/PAKT/VHG/XXV/A-AU/A-AU_00001_00361/index.shtml#tab-Sitzungsueberblick
Example page, Verhandlungsgegenstände:
http://www.parlament.gv.at/PAKT/VHG/XXV/A-AU/A-AU_00001_00361/index.shtml#tab-Verhandlungsgegenstaende
Example page, Veröffentlichungen:
http://www.parlament.gv.at/PAKT/VHG/XXV/A-AU/A-AU_00001_00361/index.shtml#tab-VeroeffentlichungenBerichte
Simply output the lists for the three categories on the Ausschuss page.
Scraper: € 600
Frontend: € 400
Entry page: http://www.parlament.gv.at/PAKT/JMAB/
Detail page (example): http://www.parlament.gv.at/PAKT/VHG/XXV/J/J_06430/index.shtml
Detail page (example, answered): http://www.parlament.gv.at/PAKT/VHG/XXV/J/J_05835/index.shtml
Detail page 2 (answer): http://www.parlament.gv.at/PAKT/VHG/XXV/AB/AB_05632/index.shtml
Note:
Tips:
Challenge:
Scraper: € 1.800 (€ 600 without OCR, € 1.800 with OCR)
Frontend: € 600