
data_cube_ui's People

Contributors

aarifsk, alfredo-ama, ceos-seo, jcrattz, johnrattz, kfox-ama, otto-ama


data_cube_ui's Issues

Issue in adding product

I received a deprecation error (related to numpy.array with dtype=datetime64[ns]) while ingesting, so I updated all the packages. Then I started from the beginning by adding the product type again, but I am unable to add it. I get the following error:
(cubeenv) D:\datacube-core-develop>datacube product add D:\datacube-core-develop\docs\config_samples\dataset_types\ls5LITP.yaml
Traceback (most recent call last):
  File "D:\DATA_CUBE\envs\cubeenv\lib\site-packages\pkg_resources\__init__.py", line 570, in _build_master
    ws.require(__requires__)
  File "D:\DATA_CUBE\envs\cubeenv\lib\site-packages\pkg_resources\__init__.py", line 888, in require
    needed = self.resolve(parse_requirements(requirements))
  File "D:\DATA_CUBE\envs\cubeenv\lib\site-packages\pkg_resources\__init__.py", line 779, in resolve
    raise VersionConflict(dist, req).with_context(dependent_req)
pkg_resources.VersionConflict: (datacube 1.5.4 (d:\data_cube\envs\cubeenv\lib\site-packages), Requirement.parse('datacube==0+unknown'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\DATA_CUBE\envs\cubeenv\Scripts\datacube-script.py", line 6, in <module>
    from pkg_resources import load_entry_point
  File "D:\DATA_CUBE\envs\cubeenv\lib\site-packages\pkg_resources\__init__.py", line 3095, in <module>
    @_call_aside
  File "D:\DATA_CUBE\envs\cubeenv\lib\site-packages\pkg_resources\__init__.py", line 3079, in _call_aside
    f(*args, **kwargs)
  File "D:\DATA_CUBE\envs\cubeenv\lib\site-packages\pkg_resources\__init__.py", line 3108, in _initialize_master_working_set
    working_set = WorkingSet._build_master()
  File "D:\DATA_CUBE\envs\cubeenv\lib\site-packages\pkg_resources\__init__.py", line 572, in _build_master
    return cls._build_from_requirements(__requires__)
  File "D:\DATA_CUBE\envs\cubeenv\lib\site-packages\pkg_resources\__init__.py", line 585, in _build_from_requirements
    dists = ws.resolve(reqs, Environment())
  File "D:\DATA_CUBE\envs\cubeenv\lib\site-packages\pkg_resources\__init__.py", line 774, in resolve
    raise DistributionNotFound(req, requirers)
pkg_resources.DistributionNotFound: The 'affine' distribution was not found and is required by datacube, rasterio

Installation guide badly formatted

The current version of the UI install guide is hard to follow due to incorrect formatting and the inclusion of git diff output.

I am referring to this document:

https://github.com/ceos-seo/data_cube_ui/blob/master/docs/ui/ui_install.md

These are a few of the problems with the document:

  • broken hyperlinks on the table of contents
  • git conflict outputs are present in the document
  • badly formatted sections - everything after the sample apache conf file shows up as a code block

No module named setuptools error

I have been following the installation documentation carefully, and when I ran setup.py I got the error "No module named setuptools". How do I move forward from here?
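A hedged sketch of a fix, assuming the error means setuptools is missing from (or broken in) the environment that runs setup.py; `python3` here stands in for whichever interpreter your datacube environment uses:

```shell
# Reinstall setuptools for the same interpreter that runs setup.py
# (requires network access to PyPI).
python3 -m pip install --upgrade setuptools

# Verify the module is importable before re-running setup.py:
python3 -c "import setuptools; print(setuptools.__version__)"
```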

Apache2 - 500 Internal Error

On every visit to the site I get the error "500 Internal Error".

The following entries appear in /var/log/apache2/error.log (every line carries a prefix like "[Mon Feb 11 12:40:50.022136 2019] [wsgi:error] [pid 4225:tid 139682675848960] [remote 132.187.202.59:53302]", trimmed below for readability):

    django.setup(set_prefix=False)
  File "/home/datacube/Datacube/datacube_env/lib/python3.6/site-packages/django/__init__.py", line 22, in setup
    configure_logging(settings.LOGGING_CONFIG, settings.LOGGING)
  File "/home/datacube/Datacube/datacube_env/lib/python3.6/site-packages/django/conf/__init__.py", line 56, in __getattr__
    self._setup(name)
  File "/home/datacube/Datacube/datacube_env/lib/python3.6/site-packages/django/conf/__init__.py", line 41, in _setup
    self._wrapped = Settings(settings_module)
  File "/home/datacube/Datacube/datacube_env/lib/python3.6/site-packages/django/conf/__init__.py", line 110, in __init__
    mod = importlib.import_module(self.SETTINGS_MODULE)
  File "/home/datacube/Datacube/datacube_env/lib/python3.6/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 994, in _gcd_import
  File "<frozen importlib._bootstrap>", line 971, in _find_and_load
  File "<frozen importlib._bootstrap>", line 941, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "<frozen importlib._bootstrap>", line 994, in _gcd_import
  File "<frozen importlib._bootstrap>", line 971, in _find_and_load
  File "<frozen importlib._bootstrap>", line 953, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'data_cube_ui'

Could someone help me to figure out the problem?

Help needed: contact me at [email protected]

Dear All,

If anybody is facing issues with the installation of the Datacube core and Datacube UI, you can contact me at [email protected]. I recently installed both on an independent PC for testing purposes. It took me around three weeks of work to get it done; I faced many errors and rectified them. I used Ubuntu 20.04 LTS, datacube 1.8.7 dev, and the latest datacube UI from GitHub, with GDAL 3.4.1.

regards,
Raju Devender

tool creation

Is it possible to create a tool with only one date as input?

No ingested areas load in the datacube visualization page

There is plenty of data in the PostgreSQL database, and it is retrieved and listed on the dataset viewer page.
However, nothing is returned on the visualization page: the result of the load_ingested_areas() function in the visualization file is null.
Can you help me solve this problem?

Problem with final step of installation Open Data Cube Web UI

I would like to install the Open Data Cube Web UI following the CEOS repository guide (https://github.com/ceos-seo/data_cube_ui/blob/master/docs/ui_guide.md#install_ssh). I followed all the steps, but the system gave me an error when I tried to execute:

source datacube_env/bin/activate

The error is "no such file or directory". I also went to localhost:8000 in my browser during the installation steps, but I got an error (I think because I had not run the server).

How can I use Open Data Cube Web UI?
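A hedged sketch of a likely remedy: a "no such file or directory" from `source datacube_env/bin/activate` usually means the virtual environment was never created, or the command is being run from the wrong directory. The path below is illustrative:

```shell
# Create the virtual environment the guide expects, then activate it.
python3 -m venv datacube_env
source datacube_env/bin/activate

# Sanity check: pip should now run inside the environment.
python -m pip --version
```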

Bump django to 3.2

To begin with, I appreciate the maintainers of this cool project.
Although I have opened this as an issue, you may consider it a discussion or a question! Some time back I was experimenting with datacube ui, and in the process had to refactor a lot to bump Django from 1.9 to 3.1, so I understand my request could be an uphill task. I'm aware PRs #49 and #22 are working towards Django 2, which is a great thing. However, it seems wasteful to have to go through Django 2 on the way to Django 3.

UI not pulling back acquisitions

I have a set of 3 Landsat-8 images in my Data Cube, and I am trying to process them as a custom mosaic through the Data Cube UI. However, when I use either the built-in satellite platform or try to define my own, I get the alert 'There are no acquistions for this parameter set.' I've double-checked that the min and max latitude and longitude are in range, as well as the start and end dates. How should I next diagnose the issue?

list index out of range

I followed the UI installation guide and everything seemed to be OK, but when I submit a task there is never any progress under Running tasks. So I ran "sudo /etc/init.d/data_cube_ui stop" to stop the two celery processes, then ran 'celery -A data_cube_ui worker -l info' to view the error in the console. It shows this error:

[2018-09-11 06:12:31,272: ERROR/ForkPoolWorker-4] Chord callback for '175f8597-3cab-43e4-adc2-40356e29ed37' raised: ChordError("Dependency e38cc2a7-938c-458b-b9c5-f5938818baa2 raised IndexError('list index out of range',)",)
Traceback (most recent call last):
  File "/home/lolo/Datacube/datacube_env/lib/python3.5/site-packages/celery/app/trace.py", line 382, in trace_task
    R = retval = fun(*args, **kwargs)
  File "/home/lolo/Datacube/datacube_env/lib/python3.5/site-packages/celery/app/trace.py", line 641, in __protected_call__
    return self.run(*args, **kwargs)
  File "/home/lolo/Datacube/data_cube_ui/apps/water_detection/tasks.py", line 332, in recombine_geographic_chunks
    geo_chunk_id = total_chunks[0][2]['geo_chunk_id']
IndexError: list index out of range

I used the "LE071950542015121201T1-SC20170427222707" dataset as shown in the guide.
I tried to google this error but, unfortunately, found no clues about it.
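A hedged sketch of what the traceback suggests (the function name and indexing step come from the traceback; the guard is my proposed fix, not the project's actual code): if every geographic chunk returned None, e.g. because no valid acquisitions survived, then total_chunks is empty and total_chunks[0] raises IndexError.

```python
# Simplified stand-in for recombine_geographic_chunks (tasks.py line 332).
def recombine_geographic_chunks(chunks):
    # Drop chunks that produced no result before indexing.
    total_chunks = [chunk for chunk in chunks if chunk is not None]
    if not total_chunks:
        return None  # nothing to recombine; signal "no data" instead of crashing
    geo_chunk_id = total_chunks[0][2]['geo_chunk_id']
    return geo_chunk_id

print(recombine_geographic_chunks([None, None]))                   # None
print(recombine_geographic_chunks([(0, 1, {'geo_chunk_id': 7})]))  # 7
```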

Indexing Error

I'm performing indexing on the Landsat dataset provided on the Ingestion page.
After running:
datacube dataset add /datacube/original_data/LE071950542015121201T1-SC20170427222707/datacube-metadata.yaml
it results in an error: "Unable to create dataset".

Indexing datasets [####################################] 100%
2018-09-10 01:13:54,120 2159 datacube-dataset ERROR Unable to create Dataset for file:///datacube/original_data/LE071950542015121201T1-SC20170427222707/datacube-metadata.yaml: No matching Product found for {
  "id": "d490c56a-bb47-4a1a-b58b-e5b45332bbb8",
  "grid_spatial": {
    "projection": {
      "geo_ref_points": {
        "ur": {
          "x": 678315.0,
          "y": 1064415.0
        },

Will you please suggest a solution? Thanks in advance.
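"No matching Product found" generally means the fields in datacube-metadata.yaml do not match the `metadata` section of any registered product. A quick way to spot the mismatch is to compare the two documents field by field; the dicts below are illustrative stand-ins for the parsed YAML files, with a deliberate mismatch:

```python
product_match = {          # from the product definition's `metadata` section
    'platform': {'code': 'LANDSAT_7'},
    'instrument': {'name': 'ETM'},
    'product_type': 'LEDAPS',
}
dataset_meta = {           # from datacube-metadata.yaml
    'platform': {'code': 'LANDSAT_7'},
    'instrument': {'name': 'ETM'},
    'product_type': 'lasrc',   # differs from the product definition
}

def mismatches(match, meta):
    """Return the keys whose values differ between the two documents."""
    return [key for key in match if meta.get(key) != match[key]]

print(mismatches(product_match, dataset_meta))  # ['product_type']
```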

Reprojection Error with Test Data (LS7-SR Data)

During Step: Testing the Indexed Data
On Python console:
data_full = dc.load(product='ls7_collections_sr_scene', output_crs='EPSG:4326', resolution=(-0.00027,0.00027))
gives error
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/opt/anaconda3/lib/python3.6/site-packages/datacube/api/core.py", line 317, in load
    geobox = geometry.GeoBox.from_geopolygon(query_geopolygon(**query) or get_bounds(observations, crs),
  File "/opt/anaconda3/lib/python3.6/site-packages/datacube/api/core.py", line 582, in get_bounds
    left = min([d.extent.to_crs(crs).boundingbox.left for d in datasets])
  File "/opt/anaconda3/lib/python3.6/site-packages/datacube/api/core.py", line 582, in <listcomp>
    left = min([d.extent.to_crs(crs).boundingbox.left for d in datasets])
  File "/opt/anaconda3/lib/python3.6/site-packages/datacube/utils/geometry.py", line 465, in to_crs
    clone.Transform(transform)
  File "/opt/anaconda3/lib/python3.6/site-packages/osgeo/ogr.py", line 7310, in Transform
    return _ogr.Geometry_Transform(self, *args)
TypeError: in method 'Geometry_Transform', argument 2 of type 'OSRCoordinateTransformationShadow *'
All earlier steps were successful with expected output as per your documents.
Kindly suggest a solution/workaround.

If you have both Python 3.5 and 3.6 and mod_wsgi does not work, this will help.

https://stackoverflow.com/questions/44914961/install-mod-wsgi-on-ubuntu-with-python-3-6-apache-2-4-and-django-1-11

All of the other configurations and installs involved in setting my system up seem to be fine (Running in daemon mode) though mod_wsgi doesn't seem to be using my Python 3.6.1 virtual environment (though it is trying to use it for Django according to the error log)...

Solution:
sudo apt-get remove libapache2-mod-wsgi-py3
Install mod_wsgi using pip, preferably into a Python virtual environment. Ensure pip is for the version of Python you want to use.

pip install mod_wsgi
Display the config to add to Apache configuration file to load this mod_wsgi by running:

mod_wsgi-express module-config
Add the output of the above command to the Apache configuration.
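For reference, `mod_wsgi-express module-config` prints a couple of directives shaped like the following (the paths are illustrative and depend on your Python version and virtual environment location), which is what gets pasted into the Apache configuration:

```apache
LoadModule wsgi_module "/home/user/venv/lib/python3.6/site-packages/mod_wsgi/server/mod_wsgi-py36.cpython-36m-x86_64-linux-gnu.so"
WSGIPythonHome "/home/user/venv"
```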

Small Problem with data_cube_ui/apps/dc_algorithm/models/application_models.py

When trying to get the mosaic tool to not apply a cloud-based clean mask, I came across a problem in get_clean_mask_func() in apps/dc_algorithm/models/application_models.py: if you do not specify bit_mask or cf_mask, it crashes because return_all_true() calls .shape on a NumPy array as if it were a function:

def return_all_true(ds):
     return np.full(ds[self.get_measurements()[0]].shape(), True)

Should be:

def return_all_true(ds):
     return np.full(ds[self.get_measurements()[0]].shape, True)

without the parentheses after .shape. This isn't a bug that the mosaic tool would run into normally, but I thought I would post it in case anyone else hits it.
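A minimal NumPy demonstration of the difference (the array and variable names here are illustrative, not the UI's actual code):

```python
import numpy as np

arr = np.zeros((2, 3))

# Buggy form: .shape is a tuple attribute, not a method, so calling it
# raises TypeError ('tuple' object is not callable).
try:
    np.full(arr.shape(), True)
except TypeError as exc:
    print('TypeError:', exc)

# Fixed form: pass the tuple itself.
mask = np.full(arr.shape, True)
print(mask.shape, mask.dtype)  # (2, 3) bool
```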

I got this error when installing the GDAL library

Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-install-nyka27g1/gdal/

I ran

pip install --upgrade setuptools

and setuptools is up to date, but that did not solve the problem.

Can't get the UI to run on apache

I'm pretty new to all of the tools used here. I am trying to run the UI in a conda environment, but without luck. The only way to get it running (and even then there are some errors) is with:

python manage.py runserver 0.0.0.0:8000

Restarting Apache and running curl localhost:8000 gives "connection refused" when the above isn't running.

The dc_ui.conf is below. I've tried changing the order of these directives. I've checked the paths and everything seems to be in order. I've also tried without the <Files wsgi.py> block and without the quotation marks. I've also left BASE_HOST in settings.py as localhost, since changing it doesn't make a difference.

<VirtualHost *:80>
	# The ServerName directive sets the request scheme, hostname and port that
	# the server uses to identify itself. This is used when creating
	# redirection URLs. In the context of virtual hosts, the ServerName
	# specifies what hostname must appear in the request's Host: header to
	# match this virtual host. For the default virtual host (this file) this
	# value is not decisive as it is used as a last resort host regardless.
	# However, you must set it for any further virtual host explicitly.
	#ServerName www.example.com

	ServerAdmin webmaster@localhost
	DocumentRoot /var/www/html

	# Available loglevels: trace8, ..., trace1, debug, info, notice, warn,
	# error, crit, alert, emerg.
	# It is also possible to configure the loglevel for particular
	# modules, e.g.
	#LogLevel info ssl:warn

	ErrorLog ${APACHE_LOG_DIR}/error.log
	CustomLog ${APACHE_LOG_DIR}/access.log combined

	# For most configuration files from conf-available/, which are
	# enabled or disabled at a global level, it is possible to
	# include a line for only one particular virtual host. For example the
	# following line enables the CGI configuration for this host only
	# after it has been globally disabled with "a2disconf".
	#Include conf-available/serve-cgi-bin.conf

	#django static
	Alias /static/ "/home/ubuntu/Datacube/data_cube_ui/static/"

	#results.
	Alias /datacube/ui_results/ "/home/ubuntu/Datacube/ui_results/"


	# django wsgi
	WSGIScriptAlias / /home/ubuntu/Datacube/data_cube_ui/data_cube_ui/wsgi.py
	WSGIDaemonProcess dc_ui python-home=/home/ubuntu/miniconda/envs/cubeenv2/lib/python3.6/site-packages python-path=/home/ubuntu/Datacube/data_cube_ui

	WSGIProcessGroup dc_ui
	WSGIApplicationGroup %{GLOBAL}

	<Directory "/home/ubuntu/Datacube/data_cube_ui/static">
		AllowOverride All
		Require all granted
	</Directory>

	<Directory "/home/ubuntu/Datacube/ui_results/">
		AllowOverride All
		Require all granted
	</Directory>

	<Directory "/home/ubuntu/Datacube/data_cube_ui/data_cube_ui/">
		<Files wsgi.py>
		    AllowOverride All
		    Require all granted
		</Files>
	</Directory>

</VirtualHost>

# vim: syntax=apache ts=4 sw=4 sts=4 sr noet

I'll try to provide more info if needed.

'int' object is not iterable : TypeError at /data_cube_manager/dataset_types/view/1

Hi, I have set up data_cube_ui and ingested Landsat images. When I run "view definition", I get the following error message:
TypeError at /data_cube_manager/dataset_types/view/1
'int' object is not iterable
Request Method: GET
Request URL: http://data_cube_manager/dataset_types/view/1
Django Version: 1.11.13
Exception Type: TypeError
Exception Value:
'int' object is not iterable
Exception Location: /home/localuser/Datacube/data_cube_ui/apps/data_cube_manager/utils.py in forms_from_definition, line 83
Python Executable: /usr/bin/python3
Python Version: 3.6.6
Python Path:
['/home/localuser/Datacube/data_cube_ui',
'/home/localuser/miniconda3/envs/cubeenv/lib/python36.zip',
'/home/localuser/miniconda3/envs/cubeenv/lib/python3.6',
'/home/localuser/miniconda3/envs/cubeenv/lib/python3.6/lib-dynload',
'/home/localuser/miniconda3/envs/cubeenv/lib/python3.6/site-packages',
'/home/localuser/miniconda3/envs/cubeenv/lib/python3.6/site-packages/datacube-1.6.1+121.ge40ff90.dirty-py3.6.egg',
'/home/localuser/miniconda3/envs/cubeenv/lib/python3.6/site-packages/toolz-0.9.0-py3.6.egg',
'/home/localuser/miniconda3/envs/cubeenv/lib/python3.6/site-packages/python_dateutil-2.7.4-py3.6.egg',
'/home/localuser/miniconda3/envs/cubeenv/lib/python3.6/site-packages/Click-7.0-py3.6.egg',
'/home/localuser/miniconda3/envs/cubeenv/lib/python3.6/site-packages/affine-2.2.1-py3.6.egg',
'/home/localuser/miniconda3/envs/cubeenv/lib/python3.6/site-packages/snuggs-1.4.2-py3.6.egg',
'/home/localuser/miniconda3/envs/cubeenv/lib/python3.6/site-packages/cligj-0.5.0-py3.6.egg',
'/home/localuser/miniconda3/envs/cubeenv/lib/python3.6/site-packages/attrs-18.2.0-py3.6.egg',
'/home/localuser/miniconda3/envs/cubeenv/lib/python3.6/site-packages/pyparsing-2.2.2-py3.6.egg',
'/']
Server time: Fri, 9 Nov 2018 17:58:41 -0500

there was an unhandled exception during the processing of your task

Hello,
When I submit, I get no returned results, and an alert saying 'There was an unhandled exception during the processing of your task.' is shown. I have checked my logs and all is good: it returns true and I have no error. I wanted to see SIMPLE NDVI results over the map.

Thank you

Using Landsat 8 from AWS s3

I am trying to use Landsat 8 data ingested from AWS via the S3 interface in the ODC UI. However, processing of the data stopped with the following exception:

[2019-12-20 01:14:00,944: ERROR/ForkPoolWorker-7] Chord callback for '6b18fe19-a5ba-4ca1-8aae-4e8f1f1aece4' raised: ChordError("Dependency 1da5ae41-cede-4ef7-9476-25fb31004acb raised IndexError('only integers, slices (`:`), ellipsis (`...`), numpy.newaxis (`None`) and integer or boolean arrays are valid indices',)",)
Traceback (most recent call last):
  File "/home/localuser/Datacube/datacube_env/lib/python3.6/site-packages/celery/app/trace.py", line 385, in trace_task
    R = retval = fun(*args, **kwargs)
  File "/home/localuser/Datacube/datacube_env/lib/python3.6/site-packages/celery/app/trace.py", line 648, in __protected_call__
    return self.run(*args, **kwargs)
  File "/home/localuser/Datacube/data_cube_ui/apps/water_detection/tasks.py", line 299, in processing_task
    no_data=task.satellite.no_data_value)
  File "/home/localuser/Datacube/data_cube_ui/utils/data_cube_utilities/dc_water_classifier.py", line 294, in wofs_classify
    classified_clean[clean_mask] = classified[clean_mask]  # Contains data for clear pixels
IndexError: only integers, slices (`:`), ellipsis (`...`), numpy.newaxis (`None`) and integer or boolean arrays are valid indices

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/localuser/Datacube/datacube_env/lib/python3.6/site-packages/celery/backends/redis.py", line 399, in on_chord_part_return
    callback.delay([unpack(tup, decode) for tup in resl])
  File "/home/localuser/Datacube/datacube_env/lib/python3.6/site-packages/celery/backends/redis.py", line 399, in <listcomp>
    callback.delay([unpack(tup, decode) for tup in resl])
  File "/home/localuser/Datacube/datacube_env/lib/python3.6/site-packages/celery/backends/redis.py", line 352, in _unpack_chord_result
    raise ChordError('Dependency {0} raised {1!r}'.format(tid, retval))
celery.exceptions.ChordError: Dependency 1da5ae41-cede-4ef7-9476-25fb31004acb raised IndexError('only integers, slices (`:`), ellipsis (`...`), numpy.newaxis (`None`) and integer or boolean arrays are valid indices',)
[2019-12-20 01:14:00,948: ERROR/ForkPoolWorker-7] Task water_detection.processing_task[1da5ae41-cede-4ef7-9476-25fb31004acb] raised unexpected: IndexError('only integers, slices (`:`), ellipsis (`...`), numpy.newaxis (`None`) and integer or boolean arrays are valid indices',)
Traceback (most recent call last):
  File "/home/localuser/Datacube/datacube_env/lib/python3.6/site-packages/celery/app/trace.py", line 385, in trace_task
    R = retval = fun(*args, **kwargs)
  File "/home/localuser/Datacube/datacube_env/lib/python3.6/site-packages/celery/app/trace.py", line 648, in __protected_call__
    return self.run(*args, **kwargs)
  File "/home/localuser/Datacube/data_cube_ui/apps/water_detection/tasks.py", line 299, in processing_task
    no_data=task.satellite.no_data_value)
  File "/home/localuser/Datacube/data_cube_ui/utils/data_cube_utilities/dc_water_classifier.py", line 294, in wofs_classify
    classified_clean[clean_mask] = classified[clean_mask]  # Contains data for clear pixels
IndexError: only integers, slices (`:`), ellipsis (`...`), numpy.newaxis (`None`) and integer or boolean arrays are valid indices
[2019-12-20 01:14:01,588: INFO/ForkPoolWorker-7] Task water_detection.recombine_geographic_chunks[e4170071-931b-401c-be6b-3afc06e3fc8b] succeeded in 0.005194195080548525s: None

Does anyone have any advice on how to deal with this error, or can anyone provide working samples or settings for L8/AWS-S3?
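A minimal reproduction of that IndexError, suggesting one possible cause (this is my guess at the failure mode, not a confirmed diagnosis): the line `classified_clean[clean_mask] = classified[clean_mask]` requires `clean_mask` to be a boolean array, and NumPy rejects a float or generic bitmask array as an index. Data from a different source, such as the AWS S3 Landsat 8 scenes, could plausibly yield a non-boolean mask.

```python
import numpy as np

classified = np.array([0, 1, 1, 0])
clean_mask = np.array([1.0, 0.0, 1.0, 1.0])  # float mask: invalid as an index

try:
    classified[clean_mask]
except IndexError as exc:
    print('IndexError:', exc)

# Casting the mask to bool restores valid boolean indexing:
print(classified[clean_mask.astype(bool)])  # [0 1 0]
```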

'add satellite' page has Date_min twice

The 'add satellite' page on the Django administration site has 'Date_min' twice - the second should read 'Date_max' as this is the parameter it is setting (looking at the page source). I think it's probably just a typo but I wasn't sure where it should be changed in the code - there may be more than one reference.

Support for EO3 Product Format

When using an ODC Product definition in EO format like this:
metadata_type: eo
metadata:
  platform:
    code: LANDSAT8
  instrument:
    name: OLI
  product_type: level2
  format:
    name: GeoTIFF
The Web-UI Dataset Types table populates the 'Platform', 'Instrument' & 'Product Type' columns correctly.
However, when using an EO3 format like this:
metadata_type: eo3
metadata:
  product:
    name: landsat8
  properties:
    eo:platform: LANDSAT8
    eo:instrument: OLI
    odc:product_family: level2
    odc:file_format: GeoTIFF
The above-mentioned columns are empty, and the subsequent 'View Definitions' page shows errors for those columns.
Is our product definition format wrong, or does the Web-UI just not support EO3 yet?
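An illustration of why those columns could come up empty (the accessor functions below are hypothetical stand-ins, not the UI's actual source): a reader written for the EO layout looks for `metadata['platform']['code']`, which simply does not exist in an EO3 document, where the same information lives under `properties['eo:platform']`.

```python
eo_meta = {
    'platform': {'code': 'LANDSAT8'},
    'instrument': {'name': 'OLI'},
    'product_type': 'level2',
}
eo3_meta = {
    'product': {'name': 'landsat8'},
    'properties': {
        'eo:platform': 'LANDSAT8',
        'eo:instrument': 'OLI',
        'odc:product_family': 'level2',
    },
}

def platform_eo_style(meta):
    # EO-only lookup, as an EO-era UI might do it.
    return meta.get('platform', {}).get('code')

print(platform_eo_style(eo_meta))   # LANDSAT8
print(platform_eo_style(eo3_meta))  # None -> shows up as an empty column

def platform_any(meta):
    # An EO3-aware reader needs a fallback into `properties`.
    return platform_eo_style(meta) or meta.get('properties', {}).get('eo:platform')

print(platform_any(eo3_meta))       # LANDSAT8
```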

Not getting UI on apache localhost

I have done the full Data Cube UI installation,
but the UI page does not open on localhost; only the Ubuntu default page is there.
And localhost/admin shows NOT FOUND.

datacube version: 1.6.1
Is there something else I should enter in the address bar?

There are no acquistions for this parameter set.

Trying to use the tools from the data_cube_ui Django app. I added the area under the admin page, but I always receive the "There are no acquisitions for this parameter set" message. I tried mosaic and cloud coverage; the Data Cube consists of 2 Landsat 8 images that are a week apart over the same area. I'm very certain that the params have the correct extents. Thanks!

The docker/.env file is not available

Hi, I followed the installation process and got stuck when I did not find the docker/.env file. I tried to create it myself, but I don't know what variables I am missing. Any solution to this?

make create-odc-network

Hello, when I try the command "make create-odc-network" I get this error message: make: *** No rule to make target 'create-odc-network'. Stop.
Can you please help me solve this problem? Thanks.

Log file timezone

Hi - I'm trying to diagnose an issue with my analysis tasks, and it's hampered slightly because the processing task log files are not in my local timezone (or UTC). Is this coming from Celery or Django, and do you know how best to change it?
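If the timestamps come from the standard Django and Celery settings, a hedged sketch of the relevant knobs would look like the following (where exactly these belong in this project's settings.py is my assumption, and the zone name is only an example; Celery defaults to UTC unless told otherwise):

```python
# settings.py (illustrative placement)
TIME_ZONE = 'Australia/Sydney'   # Django's active timezone (example zone)
USE_TZ = True                    # store datetimes as UTC, display in TIME_ZONE

# Celery: worker/task timestamps follow these settings.
CELERY_TIMEZONE = 'Australia/Sydney'
CELERY_ENABLE_UTC = False
```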

Issue installing notebooks

I am running into problems installing these notebooks. I have followed the instructions here:
https://github.com/ceos-seo/data_cube_notebooks/blob/master/docs/notebook_install.md
However, when I run the command make dev-up I get the error stated below. Can anyone kindly help me here?

abbas@abbas-ThinkPad-T490:~/myProjects/data_cube_notebooks$ sudo make dev-up 
(export UID=; docker-compose --project-directory build/docker/dev -f build/docker/dev/docker-compose.yml up -d --build) 
/bin/bash: UID: readonly variable 
WARNING: The AWS_ACCESS_KEY_ID variable is not set. Defaulting to a blank string. 
WARNING: The AWS_SECRET_ACCESS_KEY variable is not set. Defaulting to a blank string. 
Building jupyter 
Sending build context to Docker daemon 103.9MB 
Step 1/56 : FROM jcrattzama/cube-in-a-box:odc1.8.3 
 ---> 2ca88e7b5810 
Step 2/56 : ARG BUILD_DIR=/build 
 ---> Using cache 
 ---> f5531de1b0bf 
Step 3/56 : ENV BUILD_DIR=${BUILD_DIR} 
 ---> Using cache 
 ---> 51b41b939d07 
Step 4/56 : WORKDIR ${BUILD_DIR} 
 ---> Using cache 
 ---> 979cf47ebd7e 
Step 5/56 : USER root 
 ---> Using cache 
 ---> e27cf2f62bb4 
Step 6/56 : RUN echo "jovyan ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers 
 ---> Using cache 
 ---> 61fbac0e2188 
Step 7/56 : ARG UID 
 ---> Using cache 
 ---> caef9d05f914 
Step 8/56 : ENV UID=${UID} 
 ---> Using cache 
 ---> 97a9ac72a35b 
Step 9/56 : RUN usermod -u ${UID} jovyan 
 ---> Running in b129096808ac 
usermod: UID '0' already exists 
The command '/bin/sh -c usermod -u ${UID} jovyan' returned a non-zero code: 4                                                                                                                                                                                                        
ERROR: Service 'jupyter' failed to build : Build failed 
make: *** [Makefile:19: dev-up] Error

prepare dataset error

I downloaded the LE071950542015121201T1-SC20170427222707 scene linked from http://ec2-52-201-154-0.compute-1.amazonaws.com/datacube/data/LE071950542015121201T1-SC20170427222707.tar.gz

I believe this is a collection 1 dataset. I added the ls7_collections_sr_scene yaml file to the dataset types in the datacube as per the instructions. I then ran the usgs_ls_ard_prepare.py script:

'Collection 1 Higher level data uses usgs_ls_ard_prepare.py'

and it comes up with the error:

  File "usgs_ls_ard_prepare.py", line 105, in valid_region
    geom = shapely.affinity.affine_transform(geom, (transform.a, transform.b, transform.d, transform.e, transform.xoff,
AttributeError: 'list' object has no attribute 'a'

Someone on Slack suggested this was because the usgs_ls_ard_prepare.py script is only for pre-collection data. Is that right, or do I have some other bug?

Thanks
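One reading of that traceback (a guess, not a confirmed diagnosis): the script accesses `transform.a`, `transform.b`, ..., which exist on an `affine.Affine` object, but here `transform` arrived as a plain list of coefficients, which has no such attributes. In the real script the fix would presumably be wrapping it, e.g. `affine.Affine(*transform[:6])`; the namedtuple below just stands in for `Affine` so the sketch is dependency-free, and the coefficient values are made up:

```python
from collections import namedtuple

# Stand-in for affine.Affine: an object exposing the six coefficients
# as attributes rather than list indices.
Affine = namedtuple('Affine', ['a', 'b', 'c', 'd', 'e', 'f'])

transform = [30.0, 0.0, 600000.0, 0.0, -30.0, 7000000.0]  # plain list: no .a
wrapped = Affine(*transform[:6])

print(wrapped.a, wrapped.e)  # 30.0 -30.0
```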
