unidata / netcdf-java

The Unidata netcdf-java library

Home Page: https://docs.unidata.ucar.edu/netcdf-java/current/userguide/index.html

License: BSD 3-Clause "New" or "Revised" License

Topics: netcdf-java, geoscience, geodata, netcdf, grib, cdm, ncml, thredds-catalogs, unidata, netcdf-markup-language

netcdf-java's Introduction


netCDF-Java/CDM

The netCDF Java library provides an interface for scientific data access. It can be used to read scientific data from a variety of file formats including netCDF, HDF, GRIB, BUFR, and many others. By itself, the netCDF-Java library can only write netCDF-3 files. It can write netCDF-4 files by using JNI to call the netCDF-C library. It also implements Unidata's Common Data Model (CDM) to provide data geolocation capabilities.
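
As a quick orientation, here is a minimal sketch of opening a file and reading a variable with the library (the file name "example.nc" and variable name "temperature" are placeholders, not from this project):

import ucar.ma2.Array;
import ucar.nc2.NetcdfFile;
import ucar.nc2.NetcdfFiles;
import ucar.nc2.Variable;

public class ReadExample {
  public static void main(String[] args) throws Exception {
    // try-with-resources closes the underlying file when done
    try (NetcdfFile ncfile = NetcdfFiles.open("example.nc")) {
      Variable v = ncfile.findVariable("temperature"); // returns null if not present
      if (v != null) {
        Array data = v.read(); // reads the whole variable into memory
        System.out.println("Read " + data.getSize() + " values");
      }
    }
  }
}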

For more information about netCDF-Java/CDM, see the netCDF-Java web page (linked above) and the CDM web page at

https://docs.unidata.ucar.edu/netcdf-java/current/userguide/common_data_model_overview.html

The latest released version of the netCDF-Java software, along with additional documentation, can be obtained from the netCDF-Java web page.

A mailing list, netcdf-java@unidata.ucar.edu, exists for discussion of all things netCDF-Java/CDM, including announcements about netCDF-Java/CDM bugs, fixes, enhancements, and releases. For information about how to subscribe, see the "Subscribe" link on the Unidata mailing-lists page.

For more general netCDF discussion, see the netcdfgroup@unidata.ucar.edu email list.

We appreciate feedback from users of this package. Please send comments and suggestions to support-netcdf-java@unidata.ucar.edu. For bug reports, feel free to open an issue on this repository, or contact us at the address above. Please identify the version of the package as well as the version/vendor of Java you are using. For potential security issues, please contact security@unidata.ucar.edu directly.

Contributors

Are you looking to contribute to the netCDF-Java efforts? That's great! Please see our contributors guide for more information!

NetCDF Markup Language (NcML)

NcML is an XML representation of netCDF metadata; it approximates the header information one gets from a netCDF file with the "ncdump -h" command. NcML is similar to the netCDF CDL (network Common data form Description Language), except, of course, it uses XML syntax.

Beyond simply describing a netCDF file, it can also be used to describe changes to existing netCDF files. A limited number of tools, mainly netCDF-Java based tools, support these features of NcML.
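
For example, a hedged sketch of opening an NcML document with the library (the file name is illustrative; NetcdfDatasets is the factory in recent 5.x releases, while older code used NetcdfDataset.openDataset). The dataset a caller sees is the one described or modified by the NcML:

import ucar.nc2.dataset.NetcdfDataset;
import ucar.nc2.dataset.NetcdfDatasets;

public class OpenNcmlExample {
  public static void main(String[] args) throws Exception {
    // An NcML file can be opened like any other dataset location
    try (NetcdfDataset ds = NetcdfDatasets.openDataset("wrapped.ncml")) {
      ds.getVariables().forEach(v -> System.out.println(v.getFullName()));
    }
  }
}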

For more information about NcML, see the NcML web page at

https://docs.unidata.ucar.edu/netcdf-java/current/userguide/ncml_overview.html

THREDDS Catalogs

THREDDS Catalogs can be thought of as representing logical directories of on-line data resources. They are encoded as XML and provide a place for annotations and other metadata about the data resources. While the THREDDS Data Server (TDS) generates THREDDS Catalogs, THREDDS Catalogs are not limited to those produced by the TDS. These XML documents are how THREDDS-enabled data consumers find out what data is available from data providers.

THREDDS Catalog documentation (including the specification) is available from the THREDDS documentation pages.

Licensing

netCDF-Java is released under the BSD-3 license, which can be found in the LICENSE file in this repository.

Furthermore, this project includes code from third-party open-source software components:

  • ERDDAP: for details, see waterml/README.md
  • JUnit: for details, see cdm-test-utils/README.md
  • Edal (The University of Reading): The CDM calendars are implemented using classes from Jon Blower's uk.ac.rdg.resc.edal.time package.

Each of these software components has its own license. Please see third-party-licenses/.

How to use

The latest released and snapshot software artifacts (e.g. .jar files) are available from Unidata's Nexus repository.

To build netCDF-java from this repository, follow this tutorial.

To use the netCDF-Java library as a dependency using maven or gradle, follow these instructions.

Previous releases

Prior to v5.0.0, the netCDF-Java/CDM library and the THREDDS Data Server (TDS) were built and released together. Starting with version 5, these two packages have been decoupled, allowing new features and bug fixes to be implemented in each package separately and released independently. Releases prior to v5.0.0 were managed at https://github.com/unidata/thredds, which holds the combined code base used by v4.6 and earlier.

netcdf-java's People

Contributors

barronh, bencaradocdavies, cofinoa, cssjessica, cwardgar, danfruehauf, ddirks, dennisheimbigner, donmurray, dopplershift, ennawilson, ethanrd, haileyajohnson, hvandam2, irpfander, jlcaron, johnlcaron, julienchastang, lesserwhirls, luca009, madry, mhermida, michaeldiener, mnlerman, oxelson, rkambic, rschmunk, tdrwenski, weathergod, yuanho

netcdf-java's Issues

Issue regarding the use of "dods:" in DODSNetcdfFile

TL;DR;

Making an HTTP GET request to http://www.ncei.noaa.gov/thredds/dodsC/cdr/gridsat/GridSat-Aggregation.ncml.dods?time works and https://www.ncei.noaa.gov/thredds/dodsC/cdr/gridsat/GridSat-Aggregation.ncml.dods?time[0:1:1] works, but http://www.ncei.noaa.gov/thredds/dodsC/cdr/gridsat/GridSat-Aggregation.ncml.dods?time[0:1:1] fails with a 403 (request too big).

Perhaps this is a server-side issue, but netCDF-Java could make things work by doing the right thing and using the proper protocol (https in this case).

Details

In ucar.nc2.dods.DODSNetcdfFile, any dataset url that starts with dods: is changed to use http:

// canonicalize name
String urlName = datasetURL; // actual URL uses http:
this.location = datasetURL; // canonical name uses "dods:"
if (datasetURL.startsWith("dods:")) {
  urlName = "http:" + datasetURL.substring(5);
} else if (datasetURL.startsWith("http:")) {
  this.location = "dods:" + datasetURL.substring(5);
} else if (datasetURL.startsWith("https:")) {
  this.location = "dods:" + datasetURL.substring(6);
} else if (datasetURL.startsWith("file:")) {
  this.location = datasetURL;
} else {
  throw new java.net.MalformedURLException(datasetURL + " must start with dods: or http: or file:");
}

Of course, that's not always the correct thing to do, but if redirects are handled properly, and the server responds properly, it should all just work. For certain code paths, everything does work. For example, if we look at the following dataset url:

dods://www.ncei.noaa.gov/thredds/dodsC/cdr/gridsat/GridSat-Aggregation.ncml

We can open the file using NetcdfDataset.acquireFile(), and we can successfully read the dds and das because redirects work and the server behaves well. However, if we try to open with NetcdfDataset.openDataset(), we fail because the OPeNDAP server returns a 403 when reading a slice (in this case, trying to get http://www.ncei.noaa.gov/thredds/dodsC/cdr/gridsat/GridSat-Aggregation.ncml.dods?time[0:1:108082]). It's the "reading a slice" part that seems to be the key.
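
For reference, a minimal sketch of the failing path described above (this simply reproduces the report; the behavior depends on the server):

import ucar.nc2.dataset.NetcdfDataset;

public class DodsOpenRepro {
  public static void main(String[] args) throws Exception {
    String url = "dods://www.ncei.noaa.gov/thredds/dodsC/cdr/gridsat/GridSat-Aggregation.ncml";
    // openDataset() ends up reading a slice of the time coordinate,
    // which triggers the 403 "Request too big" error described above
    try (NetcdfDataset ds = NetcdfDataset.openDataset(url)) {
      System.out.println(ds.getDetailInfo());
    }
  }
}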

Doing a GET request on http://www.ncei.noaa.gov/thredds/dodsC/cdr/gridsat/GridSat-Aggregation.ncml.dods?time works, but once I introduce the constraint, I run into problems. For example, if I try to HTTP Get http://www.ncei.noaa.gov/thredds/dodsC/cdr/gridsat/GridSat-Aggregation.ncml.dods?time[0:1:1], I get:

Status = 403 HTTP/1.1 403 Forbidden
Status Line = HTTP/1.1 403 Forbidden
Response Headers = 
  Date: Thu, 05 Dec 2019 19:45:27 GMT
  Server: Apache-Coyote/1.1
  Strict-Transport-Security: max-age=31536000
  XDODS-Server: opendap/3.7
  Content-Description: dods-error
  Content-Type: text/plain
  Access-Control-Allow-Origin: *
  Access-Control-Allow-Headers: X-Requested-With, Content-Type
  Connection: close
  Transfer-Encoding: chunked

ResponseBody---------------
Error {
    code = 403;
    message = "Request too big=1.1117421067232E7 Mbytes, max=500.0";
};

If I change the same request to use https:, it works. It's almost like the entire query (after the ?) is being dropped after a redirect when requesting a slice of data from a variable.

This behavior is also seen in the latest netCDF-Java 4.6.x code (current master branch over at https://github.com/unidata/thredds). The ability to handle dods: as a dataset url through NetcdfDataset used to work, at least as recently as 4.6.12-SNAPSHOT (from February of this year), so it's a somewhat recent change affecting both 4.6.x and 5.0.x.

It seems to me that, regardless of whether this is a server-side issue (it likely is), netCDF-Java could handle this by making the right choice when mapping dods: in DODSNetcdfFile.

cdmremote fails on https: URL

works:
http://thredds.ucar.edu/thredds/cdmremote/grib/NCEP/GFS/Pacific_20km/GFS_Pacific_20km_20190918_1200.grib2

fails:
https://thredds.ucar.edu/thredds/cdmremote/grib/NCEP/GFS/Pacific_20km/GFS_Pacific_20km_20190918_1200.grib2

error message:
ucar.httpservices.HTTPException: java.net.URISyntaxException: Expected scheme-specific part at index 5: HTTP:
at ucar.httpservices.HTTPAuthUtil.authscopeToURI(HTTPAuthUtil.java:112)
at ucar.httpservices.HTTPSession.init(HTTPSession.java:811)
at ucar.httpservices.HTTPSession.(HTTPSession.java:797)
at ucar.httpservices.HTTPFactory.newSession(HTTPFactory.java:38)
at ucar.nc2.stream.CdmRemote.(CdmRemote.java:79)
at ucar.nc2.stream.CdmRemoteNetcdfFileProvider.open(CdmRemoteNetcdfFileProvider.java:19)
at ucar.nc2.dataset.NetcdfDataset.openOrAcquireFile(NetcdfDataset.java:712)
at ucar.nc2.dataset.NetcdfDataset.openDataset(NetcdfDataset.java:430)
at ucar.nc2.dataset.NetcdfDataset.acquireDataset(NetcdfDataset.java:576)
at ucar.nc2.dataset.NetcdfDataset.acquireDataset(NetcdfDataset.java:536)
at ucar.nc2.ui.ToolsUI.openFile(ToolsUI.java:1271)
at ucar.nc2.ui.op.DatasetViewerPanel.process(DatasetViewerPanel.java:98)
at ucar.nc2.ui.OpPanel.doit(OpPanel.java:173)
at ucar.nc2.ui.OpPanel.lambda$new$0(OpPanel.java:83)
at javax.swing.JComboBox.fireActionEvent(JComboBox.java:1258)
at ucar.ui.prefs.ComboBox.fireActionEvent(ComboBox.java:160)
at javax.swing.JComboBox.setSelectedItem(JComboBox.java:586)
at javax.swing.plaf.basic.BasicComboBoxUI$Handler.actionPerformed(BasicComboBoxUI.java:1943)
at javax.swing.JTextField.fireActionPerformed(JTextField.java:508)
at javax.swing.JTextField.postActionEvent(JTextField.java:721)
at javax.swing.JTextField$NotifyAction.actionPerformed(JTextField.java:836)
at javax.swing.SwingUtilities.notifyAction(SwingUtilities.java:1668)
at javax.swing.JComponent.processKeyBinding(JComponent.java:2882)
at javax.swing.JComponent.processKeyBindings(JComponent.java:2929)
at javax.swing.JComponent.processKeyEvent(JComponent.java:2845)
at java.awt.Component.processEvent(Component.java:6316)
at java.awt.Container.processEvent(Container.java:2239)
at java.awt.Component.dispatchEventImpl(Component.java:4889)
at java.awt.Container.dispatchEventImpl(Container.java:2297)
at java.awt.Component.dispatchEvent(Component.java:4711)
at java.awt.KeyboardFocusManager.redispatchEvent(KeyboardFocusManager.java:1954)
at java.awt.DefaultKeyboardFocusManager.dispatchKeyEvent(DefaultKeyboardFocusManager.java:835)
at java.awt.DefaultKeyboardFocusManager.preDispatchKeyEvent(DefaultKeyboardFocusManager.java:1103)
at java.awt.DefaultKeyboardFocusManager.typeAheadAssertions(DefaultKeyboardFocusManager.java:974)
at java.awt.DefaultKeyboardFocusManager.dispatchEvent(DefaultKeyboardFocusManager.java:800)
at java.awt.Component.dispatchEventImpl(Component.java:4760)
at java.awt.Container.dispatchEventImpl(Container.java:2297)
at java.awt.Window.dispatchEventImpl(Window.java:2746)
at java.awt.Component.dispatchEvent(Component.java:4711)
at java.awt.EventQueue.dispatchEventImpl(EventQueue.java:760)
at java.awt.EventQueue.access$500(EventQueue.java:97)
at java.awt.EventQueue$3.run(EventQueue.java:709)
at java.awt.EventQueue$3.run(EventQueue.java:703)
at java.security.AccessController.doPrivileged(Native Method)
at java.security.ProtectionDomain$JavaSecurityAccessImpl.doIntersectionPrivilege(ProtectionDomain.java:74)
at java.security.ProtectionDomain$JavaSecurityAccessImpl.doIntersectionPrivilege(ProtectionDomain.java:84)
at java.awt.EventQueue$4.run(EventQueue.java:733)
at java.awt.EventQueue$4.run(EventQueue.java:731)
at java.security.AccessController.doPrivileged(Native Method)
at java.security.ProtectionDomain$JavaSecurityAccessImpl.doIntersectionPrivilege(ProtectionDomain.java:74)
at java.awt.EventQueue.dispatchEvent(EventQueue.java:730)
at java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:205)
at java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:116)
at java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:105)
at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:101)
at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:93)
at java.awt.EventDispatchThread.run(EventDispatchThread.java:82)

HDF5 not handling enum types correctly

The following line in H5header has been commented out (without further explanation):

    if (dt.isEnum()) {
      Group ncGroup = v.getParentGroup();
      EnumTypedef enumTypedef = ncGroup.findEnumeration(mdt.enumTypeName);
      if (enumTypedef == null) { // if shared object, wont have a name, shared version gets added later
        enumTypedef = new EnumTypedef(mdt.enumTypeName, mdt.map);
        // LOOK ncGroup.addEnumeration(enumTypedef);
      }
      v.setEnumTypedef(enumTypedef);
    }

This means that the typedef is not added to the group.

Been there at least since thredds repo branch 4.x.

Will try to fix it in this repo, branch 5.3

Nexrad bz2 compressed files failing to uncompress

To report a non-security related issue, please provide:

  • the version of the software with which you are encountering an issue
  • environmental information (i.e. Operating System, compiler info, java version, python version, etc.)
  • a description of the issue with the steps needed to reproduce it

If you have a general question about the software, please view our Suggested Support Process.

Problems with CF "Simple Geometry"

cdm/core/src/test/data/dataset/SimpleGeos/hru_soil_moist_vlen_3hru_5timestep.nc. (also in outflow_3seg_5timesteps_vlen.nc)

This is a netcdf-4 file with a variable length dimension, eg:
double catchments_x(hruid=3, *);
:axis = "X";

Open the enhanced dataset so coordinate systems are added, then try to read the "catchments_x" coordinate; you get:

java.lang.ClassCastException: ucar.ma2.ArrayDouble$D1 cannot be cast to java.lang.Number

at ucar.nc2.dataset.EnhanceScaleMissingUnsignedImpl.convert(EnhanceScaleMissingUnsignedImpl.java:600)
at ucar.nc2.dataset.VariableDS.convert(VariableDS.java:246)
at ucar.nc2.dataset.VariableDS.convert(VariableDS.java:237)
at ucar.nc2.dataset.VariableDS._read(VariableDS.java:413)
at ucar.nc2.Variable.read(Variable.java:609)
at ucar.nc2.dataset.VariableDS.reallyRead(VariableDS.java:422)
at ucar.nc2.dataset.VariableDS._read(VariableDS.java:411)
at ucar.nc2.Variable.read(Variable.java:609)
at ucar.nc2.util.CompareNetcdf2.compareVariableData(CompareNetcdf2.java:508)
at ucar.nc2.util.CompareNetcdf2.compareVariables(CompareNetcdf2.java:296)
at ucar.nc2.util.CompareNetcdf2.compareVariable(CompareNetcdf2.java:268)
at ucar.nc2.util.CompareNetcdf2.compareCoordinateAxis(CompareNetcdf2.java:373)
at ucar.nc2.util.CompareNetcdf2.compareCoordinateSystem(CompareNetcdf2.java:354)
at ucar.nc2.util.CompareNetcdf2.compareVariables(CompareNetcdf2.java:336)
at ucar.nc2.util.CompareNetcdf2.compareGroups(CompareNetcdf2.java:241)
at ucar.nc2.util.CompareNetcdf2.compare(CompareNetcdf2.java:145)
at 

This happens at 5.0; it would be interesting to know if it happens in 4.x.

I'm guessing the coordsys logic never tried to deal with a variable-length coordinate?

Support GRIB GDT 140 and PDT 73

A user sent me a GRIB2 file of ECMWF flood data that netCDF-Java will not open.

The first problem I encountered in trying to figure out why e.g. Panoply and IDV would not open the file is that it specified Grid Definition 140, which is the Lambert Azimuthal Equal Area projection. Some Googling indicates that this projection was first proposed for addition to GRIB in 2012, which I expect is after most/all of the grids that NJ understands were coded.

After an attempt at hacking Grib2Gds to accept template 140, I then ran into the problem that the file uses Product Definition 73, which is missing from Grib2Pds.

Perhaps there are further problems, but that was where I quit.

Enable the new Slow test category where appropriate

PR Unidata/netcdf-java#57 added a new test category: ucar.unidata.util.test.category.Slow. Currently, tests annotated with that category will always be ignored. At a minimum, we should still run these tests on Jenkins. The bigger question, however, is whether we want to be able to turn these on locally with ease (say, with a java option?). More info at #57 (comment).
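
For reference, this is roughly how a test opts into a JUnit 4 category (the test class and method here are made up):

import org.junit.Test;
import org.junit.experimental.categories.Category;
import ucar.unidata.util.test.category.Slow;

public class ExampleSlowTest {
  @Test
  @Category(Slow.class) // skipped unless the build enables the Slow category
  public void readsVeryLargeDataset() {
    // ... long-running assertions ...
  }
}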

Issue with WMS loading >2GB variables

I am attempting to load a large dataset (4200x4100) with 21 timesteps into WMS using THREDDS. When I do so, it fails and the page returns...

<ServiceExceptionReport xmlns="http://www.opengis.net/ogc" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" version="1.3.0" xsi:schemaLocation="http://www.opengis.net/ogc http://schemas.opengis.net/wms/1.3.0/exceptions_1_3_0.xsd">
<ServiceException>
Unexpected error of type java.lang.IllegalStateException
</ServiceException>
<StackTrace>
<![CDATA[
uk.ac.rdg.resc.edal.cdm.LookUpTable.<init>(LookUpTable.java:109)uk.ac.rdg.resc.edal.cdm.LookUpTableGrid.generate(LookUpTableGrid.java:93)uk.ac.rdg.resc.edal.cdm.CdmUtils.createHorizontalGrid(CdmUtils.java:279)uk.ac.rdg.resc.edal.cdm.CdmUtils.readCoverageMetadata(CdmUtils.java:174)uk.ac.rdg.resc.edal.cdm.CdmUtils.readCoverageMetadata(CdmUtils.java:127)thredds.server.wms.ThreddsDataset.<init>(ThreddsDataset.java:95)thredds.server.wms.ThreddsDataset.getThreddsDatasetForRequest(ThreddsDataset.java:270)thredds.server.wms.ThreddsWmsController.dispatchWmsRequest(ThreddsWmsController.java:165)uk.ac.rdg.resc.ncwms.controller.AbstractWmsController.handleRequestInternal(AbstractWmsController.java:207)org.springframework.web.servlet.mvc.AbstractController.handleRequest(AbstractController.java:174)org.springframework.web.servlet.mvc.SimpleControllerHandlerAdapter.handle(SimpleControllerHandlerAdapter.java:50)org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:967)org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:901)org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:970)org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:861)javax.servlet.http.HttpServlet.service(HttpServlet.java:634)org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:846)javax.servlet.http.HttpServlet.service(HttpServlet.java:741)org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231)org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)thredds.servlet.filter.RequestQueryFilter.doFilter(RequestQueryFilter.java:118)org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)thredds.servlet.filter.RequestCORSFilter.doFilterInternal(RequestCORSFilter.java:49)org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:347)org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:263)org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)thredds.servlet.filter.RequestPathFilter.doFilter(RequestPathFilter.java:94)org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)thredds.server.RequestBracketingLogMessageFilter.doFilter(RequestBracketingLogMessageFilter.java:81)org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)org.apache.logging.log4j.web.Log4jServletFilter.doFilter(Log4jServletFilter.java:71)org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)org.apache.catalina
.core.StandardWrapperValve.invoke(StandardWrapperValve.java:199)org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96)org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:493)org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:137)org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:81)org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:660)org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:87)org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:343)org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:798)org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66)org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:808)org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1498)org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)java.lang.Thread.run(Thread.java:748)
]]>
</StackTrace>
</ServiceExceptionReport>

and threddsServlet.log returns...

2019-07-30T01:56:09.069 +0000 [ 30415][ 5] ERROR - thredds.server.wms.ThreddsWmsController - dispatchWmsRequest(): Exception: java.lang.IllegalStateException: nLon (=0) and nLat (=2147483647) must be positive and > 0

I notice that 2147483647 is 2^31 - 1 (the maximum signed 32-bit integer). Does WMS handle array sizes as 32-bit integers in some locations, causing us to run into this limit? Note that OPeNDAP handles this dataset without issue, and if I subset the dataset with a stride of 4 (1050x1025), WMS is also able to run without issue.

I have put the datasets on AWS for inspection.
OPeNDAP endpoint... http://54.158.195.139:8080/thredds/dodsC/nwa/v1/NWA_v1_best.ncd.html

WMS endpoint...
http://54.158.195.139:8080/thredds/wms/nwa/v1/NWA_v1_best.ncd?service=WMS&version=1.3.0&request=GetCapabilities

Thanks!

-Joe

GridCoverage failing (probably) on MRUTC

TestCoordinatesMatchGbx.readGrib1Files() fails on:

/usr/local/google/home/jlcaron/thredds/cdmUnitTest/formats/grib1/QPE.20101005.009.157 Total_precipitation_surface_Accumulation
expected: 2010-10-05T18:00:00Z
but was : 2010-10-05T12:00:00Z
at ucar.nc2.grib.GribCoordsMatchGbx.readAndTestGrib1(GribCoordsMatchGbx.java:389)
at ucar.nc2.grib.GribCoordsMatchGbx.readCoverageData(GribCoordsMatchGbx.java:179)
at ucar.nc2.grib.GribCoordsMatchGbx.readCoverage(GribCoordsMatchGbx.java:147)
at ucar.nc2.grib.GribCoordsMatchGbx.readCoverageDataset(GribCoordsMatchGbx.java:108)
at ucar.nc2.grib.TestCoordinatesMatchGbx$GribAct.doAct(TestCoordinatesMatchGbx.java:194)
at ucar.unidata.util.test.TestDir.actOnAll(TestDir.java:263)
at ucar.nc2.grib.TestCoordinatesMatchGbx.readAllDir(TestCoordinatesMatchGbx.java:169)
at ucar.nc2.grib.TestCoordinatesMatchGbx.readGrib1Files(TestCoordinatesMatchGbx.java:53)

Open questions related to udunits package

Open questions to address, or get into a milestone, related to the udunits module:

  • Do we support the same grammar as udunits-2?
    • Bison grammar - we have javacc .jj grammar files; udunits-2 (C) uses bison .y grammar files
  • Do we support the same database of units? What's different?
    • Good place to start - test against this

GribCollection fails in general case of CoordinateTime2D

This issue has been there forever, but the recent change of the default "ignore zero intervals" (was true, now false) has exposed the problem in one or more of our test datasets.

Reproduce by creating an ncx4 from cdmUnitTest/datasets/NDFD-CONUS-5km/.*grib2.

e.g. put the above expression into ToolsUI (IOSP/Grib2/Grib2Collection tab), then choose "Write Index" (rightmost icon); you will get:

GribCoverageDataset.open failed
java.lang.IllegalStateException: Time2D with type= MRC
at ucar.nc2.grib.coverage.GribCoverageDataset.makeTime2DCoordinates(GribCoverageDataset.java:512)
at ucar.nc2.grib.coverage.GribCoverageDataset.createCoverageCollection(GribCoverageDataset.java:180)
at ucar.nc2.grib.coverage.GribCoverageDataset.open(GribCoverageDataset.java:79)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at ucar.nc2.ft2.coverage.CoverageDatasetFactory.openGrib(CoverageDatasetFactory.java:113)
at ucar.nc2.ft2.coverage.CoverageDatasetFactory.openCoverageDataset(CoverageDatasetFactory.java:61)

For NDFD THREDDS serving, we can put "ignore zero intervals" back to true, but the general case should be fixed somehow.

NEXRAD Message 31 Changes

NEXRAD will be beta testing some Message 31 adjustments this spring as part of their normal beta testing of RPG/RDA build 19.0. These data are currently available from the FOP1 testbed, and I've attached a sample file that contains the format adjustments. This file does not completely parse with the current code:

  1. ZDR now contains 11-bit data, which is stored as shorts in the file rather than just bytes. This will break the assumptions made here:

Variable v = new Variable(ncfile, null, null, shortName);
if (datatype == DIFF_PHASE) {
  v.setDataType(DataType.USHORT);
} else {
  v.setDataType(DataType.UBYTE);
}

and

int dataCount = getGateCount(datatype);
if (datatype == DIFF_PHASE) {
  short[] data = new short[dataCount];
  raf.readShort(data, 0, dataCount);
  for (int gateIdx : gateRange) {
    if (gateIdx >= dataCount)
      ii.setShortNext(MISSING_DATA);
    else
      ii.setShortNext(data[gateIdx]);
  }
} else {
  byte[] data = new byte[dataCount];
  raf.readFully(data);
  // short [] ds = convertunsignedByte2Short(data);
  for (int gateIdx : gateRange) {
    if (gateIdx >= dataCount)
      ii.setByteNext(MISSING_DATA);
    else
      ii.setByteNext(data[gateIdx]);
  }
}

Really, this code should not be hard-coded based on the moment; instead it should look at the data word size (8 or 16 bits) encoded in the data file.
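
As a rough sketch of that idea (wordSizeBits stands in for whatever field actually carries the per-moment data-word size; this is not the existing IOSP code):

import ucar.ma2.DataType;

public class WordSizeSketch {
  // Choose the CDM data type from the moment's data-word size (8 or 16 bits)
  // instead of hard-coding it per moment.
  static DataType momentDataType(int wordSizeBits) {
    return wordSizeBits == 16 ? DataType.USHORT : DataType.UBYTE;
  }
}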

  2. There is a new moment, CFP (Clutter Filter Power Removed). There's a lot of code that's based around an explicit set of options. The sample below is representative:
    dbp1 = din.readInt();
    dbp2 = din.readInt();
    dbp3 = din.readInt();
    dbp4 = din.readInt();
    dbp5 = din.readInt();
    dbp6 = din.readInt();
    dbp7 = din.readInt();
    dbp8 = din.readInt();
    dbp9 = din.readInt();
    vcp = getDataBlockValue(din, (short) dbp1, 40);
    int dbpp4 = 0;
    int dbpp5 = 0;
    int dbpp6 = 0;
    int dbpp7 = 0;
    int dbpp8 = 0;
    int dbpp9 = 0;
    if (dbp4 > 0) {
    String tname = getDataBlockStringValue(din, (short) dbp4, 1, 3);
    if (tname.startsWith("REF")) {
    hasHighResREFData = true;
    dbpp4 = dbp4;
    } else if (tname.startsWith("VEL")) {
    hasHighResVELData = true;
    dbpp5 = dbp4;
    } else if (tname.startsWith("SW")) {
    hasHighResSWData = true;
    dbpp6 = dbp4;
    } else if (tname.startsWith("ZDR")) {
    hasHighResZDRData = true;
    dbpp7 = dbp4;
    } else if (tname.startsWith("PHI")) {
    hasHighResPHIData = true;
    dbpp8 = dbp4;
    } else if (tname.startsWith("RHO")) {
    hasHighResRHOData = true;
    dbpp9 = dbp4;
    } else {
    logger.warn("Missing radial product dbp4={} tname={}", dbp4, tname);
    }
    }
    if (dbp5 > 0) {
    String tname = getDataBlockStringValue(din, (short) dbp5, 1, 3);
    if (tname.startsWith("REF")) {
    hasHighResREFData = true;
    dbpp4 = dbp5;
    } else if (tname.startsWith("VEL")) {
    hasHighResVELData = true;
    dbpp5 = dbp5;
    } else if (tname.startsWith("SW")) {
    hasHighResSWData = true;
    dbpp6 = dbp5;
    } else if (tname.startsWith("ZDR")) {
    hasHighResZDRData = true;
    dbpp7 = dbp5;
    } else if (tname.startsWith("PHI")) {
    hasHighResPHIData = true;
    dbpp8 = dbp5;
    } else if (tname.startsWith("RHO")) {
    hasHighResRHOData = true;
    dbpp9 = dbp5;
    } else {
    logger.warn("Missing radial product dbp5={} tname={}", dbp5, tname);
    }
    }
    if (dbp6 > 0) {
    String tname = getDataBlockStringValue(din, (short) dbp6, 1, 3);
    if (tname.startsWith("REF")) {
    hasHighResREFData = true;
    dbpp4 = dbp6;
    } else if (tname.startsWith("VEL")) {
    hasHighResVELData = true;
    dbpp5 = dbp6;
    } else if (tname.startsWith("SW")) {
    hasHighResSWData = true;
    dbpp6 = dbp6;
    } else if (tname.startsWith("ZDR")) {
    hasHighResZDRData = true;
    dbpp7 = dbp6;
    } else if (tname.startsWith("PHI")) {
    hasHighResPHIData = true;
    dbpp8 = dbp6;
    } else if (tname.startsWith("RHO")) {
    hasHighResRHOData = true;
    dbpp9 = dbp6;
    } else {
    logger.warn("Missing radial product dbp6={} tname={}", dbp6, tname);
    }
    }
    if (dbp7 > 0) {
    String tname = getDataBlockStringValue(din, (short) dbp7, 1, 3);
    if (tname.startsWith("REF")) {
    hasHighResREFData = true;
    dbpp4 = dbp7;
    } else if (tname.startsWith("VEL")) {
    hasHighResVELData = true;
    dbpp5 = dbp7;
    } else if (tname.startsWith("SW")) {
    hasHighResSWData = true;
    dbpp6 = dbp7;
    } else if (tname.startsWith("ZDR")) {
    hasHighResZDRData = true;
    dbpp7 = dbp7;
    } else if (tname.startsWith("PHI")) {
    hasHighResPHIData = true;
    dbpp8 = dbp7;
    } else if (tname.startsWith("RHO")) {
    hasHighResRHOData = true;
    dbpp9 = dbp7;
    } else {
    logger.warn("Missing radial product dbp7={} tname={}", dbp7, tname);
    }
    }
    if (dbp8 > 0) {
    String tname = getDataBlockStringValue(din, (short) dbp8, 1, 3);
    if (tname.startsWith("REF")) {
    hasHighResREFData = true;
    dbpp4 = dbp8;
    } else if (tname.startsWith("VEL")) {
    hasHighResVELData = true;
    dbpp5 = dbp8;
    } else if (tname.startsWith("SW")) {
    hasHighResSWData = true;
    dbpp6 = dbp8;
    } else if (tname.startsWith("ZDR")) {
    hasHighResZDRData = true;
    dbpp7 = dbp8;
    } else if (tname.startsWith("PHI")) {
    hasHighResPHIData = true;
    dbpp8 = dbp8;
    } else if (tname.startsWith("RHO")) {
    hasHighResRHOData = true;
    dbpp9 = dbp8;
    } else {
    logger.warn("Missing radial product dbp8={} tname={}", dbp8, tname);
    }
    }
    if (dbp9 > 0) {
    String tname = getDataBlockStringValue(din, (short) dbp9, 1, 3);
    if (tname.startsWith("REF")) {
    hasHighResREFData = true;
    dbpp4 = dbp9;
    } else if (tname.startsWith("VEL")) {
    hasHighResVELData = true;
    dbpp5 = dbp9;
    } else if (tname.startsWith("SW")) {
    hasHighResSWData = true;
    dbpp6 = dbp9;
    } else if (tname.startsWith("ZDR")) {
    hasHighResZDRData = true;
    dbpp7 = dbp9;
    } else if (tname.startsWith("PHI")) {
    hasHighResPHIData = true;
    dbpp8 = dbp9;
    } else if (tname.startsWith("RHO")) {
    hasHighResRHOData = true;
    dbpp9 = dbp9;
    } else {
    logger.warn("Missing radial product dbp9={} tname={}", dbp9, tname);
    }
    }
    // hasHighResREFData = (dbp4 > 0);
    if (hasHighResREFData) {
    reflectHR_gate_count = getDataBlockValue(din, (short) dbpp4, 8);
    reflectHR_first_gate = getDataBlockValue(din, (short) dbpp4, 10);
    reflectHR_gate_size = getDataBlockValue(din, (short) dbpp4, 12);
    ref_rf_threshold = getDataBlockValue(din, (short) dbpp4, 14);
    ref_snr_threshold = getDataBlockValue(din, (short) dbpp4, 16);
    reflectHR_scale = getDataBlockValue1(din, (short) dbpp4, 20);
    reflectHR_addoffset = getDataBlockValue1(din, (short) dbpp4, 24);
    reflectHR_offset = (short) (dbpp4 + 28);
    }
    // hasHighResVELData = (dbp5 > 0);
    if (hasHighResVELData) {
    velocityHR_gate_count = getDataBlockValue(din, (short) dbpp5, 8);
    velocityHR_first_gate = getDataBlockValue(din, (short) dbpp5, 10);
    velocityHR_gate_size = getDataBlockValue(din, (short) dbpp5, 12);
    vel_rf_threshold = getDataBlockValue(din, (short) dbpp5, 14);
    vel_snr_threshold = getDataBlockValue(din, (short) dbpp5, 16);
    velocityHR_scale = getDataBlockValue1(din, (short) dbpp5, 20);
    velocityHR_addoffset = getDataBlockValue1(din, (short) dbpp5, 24);
    velocityHR_offset = (short) (dbpp5 + 28);
    }
    // hasHighResSWData = (dbp6 > 0);
    if (hasHighResSWData) {
    spectrumHR_gate_count = getDataBlockValue(din, (short) dbpp6, 8);
    spectrumHR_first_gate = getDataBlockValue(din, (short) dbpp6, 10);
    spectrumHR_gate_size = getDataBlockValue(din, (short) dbpp6, 12);
    sw_rf_threshold = getDataBlockValue(din, (short) dbpp6, 14);
    sw_snr_threshold = getDataBlockValue(din, (short) dbpp6, 16);
    spectrumHR_scale = getDataBlockValue1(din, (short) dbpp6, 20);
    spectrumHR_addoffset = getDataBlockValue1(din, (short) dbpp6, 24);
    spectrumHR_offset = (short) (dbpp6 + 28);
    }
    // hasHighResZDRData = (dbp7 > 0);
    if (hasHighResZDRData) {
    zdrHR_gate_count = getDataBlockValue(din, (short) dbpp7, 8);
    zdrHR_first_gate = getDataBlockValue(din, (short) dbpp7, 10);
    zdrHR_gate_size = getDataBlockValue(din, (short) dbpp7, 12);
    zdrHR_rf_threshold = getDataBlockValue(din, (short) dbpp7, 14);
    zdrHR_snr_threshold = getDataBlockValue(din, (short) dbpp7, 16);
    zdrHR_scale = getDataBlockValue1(din, (short) dbpp7, 20);
    zdrHR_addoffset = getDataBlockValue1(din, (short) dbpp7, 24);
    zdrHR_offset = (short) (dbpp7 + 28);
    }
    // hasHighResPHIData = (dbp8 > 0);
    if (hasHighResPHIData) {
    phiHR_gate_count = getDataBlockValue(din, (short) dbpp8, 8);
    phiHR_first_gate = getDataBlockValue(din, (short) dbpp8, 10);
    phiHR_gate_size = getDataBlockValue(din, (short) dbpp8, 12);
    phiHR_rf_threshold = getDataBlockValue(din, (short) dbpp8, 14);
    phiHR_snr_threshold = getDataBlockValue(din, (short) dbpp8, 16);
    phiHR_scale = getDataBlockValue1(din, (short) dbpp8, 20);
    phiHR_addoffset = getDataBlockValue1(din, (short) dbpp8, 24);
    phiHR_offset = (short) (dbpp8 + 28);
    }
    // hasHighResRHOData = (dbp9 > 0);
    if (hasHighResRHOData) {
    rhoHR_gate_count = getDataBlockValue(din, (short) dbpp9, 8);
    rhoHR_first_gate = getDataBlockValue(din, (short) dbpp9, 10);
    rhoHR_gate_size = getDataBlockValue(din, (short) dbpp9, 12);
    rhoHR_rf_threshold = getDataBlockValue(din, (short) dbpp9, 14);
    rhoHR_snr_threshold = getDataBlockValue(din, (short) dbpp9, 16);
    rhoHR_scale = getDataBlockValue1(din, (short) dbpp9, 20);
    rhoHR_addoffset = getDataBlockValue1(din, (short) dbpp9, 24);
    rhoHR_offset = (short) (dbpp9 + 28);
    }

I...uh...don't know where to begin.

We could probably live without CFP (well, probably forever). The change to ZDR means the data are incorrect for any site shipping the new version. This is why we use what's in the file and don't hard-code decisions unless it's absolutely necessary, boys and girls.

find a function like python version

I'm using netcdf-java, but I couldn't find a function like the Python version. For example, in Python:

data = NetcdfFile.open(filePath).variables['temp'][rows]

where rows is an array of many X,Y points, like [[1,2],[4,5]].

This call can read the values at the two points [1,2] and [4,5], but the Java functions I have found only support reading contiguous (serial) array sections. Is there a function like the Python version in the Java library?
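
One way to get the same values with the Java API is to read each point individually; a hedged sketch (the variable name and index pairs come from the question, the file name is illustrative):

import ucar.ma2.Array;
import ucar.nc2.NetcdfFile;
import ucar.nc2.NetcdfFiles;
import ucar.nc2.Variable;

public class ReadPointsExample {
  public static void main(String[] args) throws Exception {
    int[][] rows = {{1, 2}, {4, 5}}; // the index pairs from the question
    try (NetcdfFile ncfile = NetcdfFiles.open("example.nc")) {
      Variable temp = ncfile.findVariable("temp");
      for (int[] point : rows) {
        // read a single element at the given indices (origin = point, shape = 1x1)
        Array value = temp.read(point, new int[] {1, 1});
        System.out.println(java.util.Arrays.toString(point) + " -> " + value.getDouble(0));
      }
    }
  }
}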

NcMLWriter does not write integer variable values as integers

In NetCDF-Java version 5.0.0:

In the method makeValuesElement(Variable, boolean) of class NcMLWriter there is clearly an intention to separate the writing of floating-point values from integer values (via the boolean isRealType), but the append to the string builder (buff) always goes through StringBuilder.append(double), because the ternary operator used always has a result of type double. A statement like if (isRealType) buff.append(iter.getDoubleNext()); else buff.append(iter.getIntNext()); should be used instead.
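
A small standalone illustration of that promotion behavior (not the actual NcMLWriter source):

public class TernaryPromotionDemo {
  public static void main(String[] args) {
    StringBuilder buff = new StringBuilder();
    boolean isRealType = false;
    double realValue = 1.5;
    int intValue = 42;

    // The conditional operator promotes both branches to double, so
    // StringBuilder.append(double) is selected and "42.0" is appended.
    buff.append(isRealType ? realValue : intValue);

    buff.append(' ');

    // Separate statements preserve the integer formatting ("42").
    if (isRealType) {
      buff.append(realValue);
    } else {
      buff.append(intValue);
    }

    System.out.println(buff); // prints: 42.0 42
  }
}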

Split visad module

Related to #105

Split the visad module into the following new modules:

  • mcidas/gempak
  • visad5

The dividing line here is that the mcidas and gempak IOSPs depend very little on visad.jar and can pull in the necessary code to keep them working. The vis5d IOSP depends heavily on visad.jar, so it can be split off into its own module, allowing users to pull in that dependency or not.

NetcdfFile.openInMemory(URI) leaks open input stream (which keeps files from being deleted)

NetcdfFile.openInMemory(URI) converts the URI to a URL and opens an input stream from it to copy the contents into a byte array. However, the input stream is never closed, which can leak a connection. In the case that the URI was obtained from a file-based Path, this leaks a handle to the file, preventing the file from being deleted later.

I believe this could be fixed by opening the URL stream in a try-with-resources block in NetcdfFile.java, as in the following diff. This applies to version 5.2.0, but the master branch has the same issue in both NetcdfFile and the utility class NetcdfFiles.

$ git diff
diff --git a/cdm/core/src/main/java/ucar/nc2/NetcdfFile.java b/cdm/core/src/main/java/ucar/nc2/NetcdfFile.java
index b4e14bc9b7..5976d864f1 100644
--- a/cdm/core/src/main/java/ucar/nc2/NetcdfFile.java
+++ b/cdm/core/src/main/java/ucar/nc2/NetcdfFile.java
@@ -725,7 +725,10 @@ public class NetcdfFile implements ucar.nc2.util.cache.FileCacheable, Closeable
   @Deprecated
   public static NetcdfFile openInMemory(URI uri) throws IOException {
     URL url = uri.toURL();
-    byte[] contents = IO.readContentsToByteArray(url.openStream());
+    byte[] contents;
+    try (InputStream in = url.openStream()) {
+      contents = IO.readContentsToByteArray(in);
+    }
     return openInMemory(uri.toString(), contents);
   }

Here's a MWE to reproduce the problem:

import ucar.nc2.NetcdfFile;

import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class OpenInMemoryDoesNotCloseInputStream {

  public static void main(String[] args) throws IOException {
    /*
     * Create a temp file and download a sample netCDF file. This input stream has nothing to do
     * with the problem; this is just so that there's an input file to load and that can safely be
     * deleted afterward.
     */
    Path temp = Files.createTempFile("file", ".nc");
    String spec =
        "https://www.unidata.ucar.edu/software/netcdf/examples/sresa1b_ncar_ccsm3-example.nc";
    URL url = new URL(spec);
    try (InputStream in = url.openStream()) {
      Files.copy(in, temp, StandardCopyOption.REPLACE_EXISTING);
    }

    /*
     * Read the file into a NetcdfFile. try-with-resources ensures that the NetcdfFile's close()
     * method is called, so all resources with it are released.
     */
    try (NetcdfFile file = NetcdfFile.openInMemory(temp.toUri())) {
      // do stuff with file...
    }

    /*
     * Try to delete the temp file with Files.delete(). This fails with an exception:
     * 
     * java.nio.file.FileSystemException:
     * C:\Users\\username\AppData\Local\Temp\file8726442302596323190.nc: The process cannot access
     * the file because it is being used by another process.
     */
    Files.delete(temp);
  }
}

New NEXRAD related failure

New failure in ucar.nc2.iosp.nexrad2.TestNexrad2.testRead on Jenkins (possibly related to #37).

java.io.IOException: java.nio.channels.ClosedChannelException

	at ucar.nc2.NetcdfFile.open(NetcdfFile.java:500)
	at ucar.nc2.dataset.NetcdfDataset.openOrAcquireFile(NetcdfDataset.java:713)
	at ucar.nc2.dataset.NetcdfDataset.openFile(NetcdfDataset.java:580)
	at ucar.nc2.iosp.nexrad2.TestNexrad2$MyAct.doAct(TestNexrad2.java:56)
	at ucar.unidata.util.test.TestDir.actOnAll(TestDir.java:263)
	at ucar.unidata.util.test.TestDir.actOnAll(TestDir.java:213)
	at ucar.nc2.iosp.nexrad2.TestNexrad2.testRead(TestNexrad2.java:34)

It does not consistently fail on a particular file or after processing a particular number of files.

Module Reorganization

With an eye towards supporting the Java Platform Module System, as well as clearly identifying a public API, this is a first pass at reorganizing our modules. For v5.2, the reorg will try not to break the API. Uber artifacts, like toolsUI.jar and netcdfAll.jar, should remain the same content-wise.

Items:

Possible race condition in cache

We've been seeing a few situations on Travis (with both Oracle and AdoptOpenJDK builds) that smell an awful lot like a race condition somewhere related to caching. What happens is that the Travis build times out with the following output:

ucar.nc2.util.cache.TestFileCacheConcurrent > testConcurrentAccess STANDARD_OUT
    TestFileCacheConcurrent
     loaded 65 files
     submit 100 queue size 50 cache: {  hits= 0 miss= 33 nfiles= 31 elems= 16
    }
     done 100
     submit 200 queue size 53 cache: {  hits= 20 miss= 127 nfiles= 127 elems= 40
    }
     done 200
     submit 300 queue size 54 cache: {  hits= 80 miss= 166 nfiles= 50 elems= 19
    }
     done 300
     submit 400 queue size 44 cache: {  hits= 113 miss= 243 nfiles= 126 elems= 38
    }
     done 400
     submit 500 queue size 47 cache: {  hits= 160 miss= 293 nfiles= 50 elems= 21
    }
     done 500
     submit 600 queue size 48 cache: {  hits= 190 miss= 362 nfiles= 119 elems= 41
    }
     done 600
     submit 700 queue size 49 cache: {  hits= 233 miss= 418 nfiles= 50 elems= 19
    }
     done 700
     submit 800 queue size 49 cache: {  hits= 250 miss= 500 nfiles= 132 elems= 39
    }
     done 800
     submit 900 queue size 49 cache: {  hits= 290 miss= 561 nfiles= 70 elems= 24
    }
     done 900
     submit 1000 queue size 15 cache: {  hits= 334 miss= 651 nfiles= 158 elems= 47
    }
     InterruptedException=sleep interrupted
     InterruptedException=sleep interrupted
     InterruptedException=sleep interrupted
     InterruptedException=sleep interrupted
     InterruptedException=sleep interrupted
     InterruptedException=sleep interrupted
     InterruptedException=sleep interrupted
     InterruptedException=sleep interrupted
     InterruptedException=sleep interrupted
     InterruptedException=sleep interrupted
     InterruptedException=sleep interrupted
     InterruptedException=sleep interrupted
     InterruptedException=sleep interrupted
     InterruptedException=sleep interrupted
     InterruptedException=sleep interrupted
     InterruptedException=sleep interrupted
     InterruptedException=sleep interrupted
     InterruptedException=sleep interrupted
     InterruptedException=sleep interrupted
     InterruptedException=sleep interrupted
     InterruptedException=sleep interrupted
     InterruptedException=sleep interrupted
     InterruptedException=sleep interrupted
     InterruptedException=sleep interrupted
     InterruptedException=sleep interrupted
     InterruptedException=sleep interrupted
     InterruptedException=sleep interrupted
     InterruptedException=sleep interrupted
     InterruptedException=sleep interrupted
     InterruptedException=sleep interrupted
     InterruptedException=sleep interrupted
     InterruptedException=sleep interrupted
     InterruptedException=sleep interrupted
     InterruptedException=sleep interrupted
     InterruptedException=sleep interrupted
     InterruptedException=sleep interrupted
     InterruptedException=sleep interrupted
     InterruptedException=sleep interrupted

ucar.nc2.util.cache.TestNetcdfFileCache STANDARD_OUT
     InterruptedException=sleep interrupted
     InterruptedException=sleep interrupted
     InterruptedException=sleep interrupted
     InterruptedException=sleep interrupted
     InterruptedException=sleep interrupted
     InterruptedException=sleep interrupted
     InterruptedException=sleep interrupted

ucar.nc2.util.cache.TestNetcdfFileCache > testPeriodicClear STANDARD_OUT
     InterruptedException=sleep interrupted

Unfortunately, it's not easily reproducible. It reminds me a lot of what we were seeing on Jenkins, which had the exact same symptoms and output as the ucar.nc2.util.cache.TestNetcdfFileCache > testPeriodicClear test (but not the others shown above...at least that I can remember), and stopped happening after PR #61.

Reading web netCDF data: Connection timed out when looking for https resource (but it timed out looking for http resource)

From the mailing list


When using cdm-core-5.2.0, attempting to access a netCDF resource on a server that serves https on port 443 but does not serve http on port 80 fails, with a message indicating that port 80 was tried.

See redacted stack trace below.

I wonder if it is related to the diff at v5.1.0...v5.2.0#diff-89b73930910f1f42a6af87d6d282a372 but this is speculation.

Failed to open netCDF file https://fqdn-of-server/path/to/netcdf_resource.nc
...
Caused by: ucar.httpservices.HTTPException: org.apache.http.conn.HttpHostConnectException: Connect to fqdn-of-server:80 [fqdn-of-server/x.x.x.x] failed: Connection timed out: connect
        at ucar.httpservices.HTTPMethod.executeRaw(HTTPMethod.java:373)
        at ucar.httpservices.HTTPMethod.execute(HTTPMethod.java:314)
        at ucar.unidata.io.http.HTTPRandomAccessFile.doConnect(HTTPRandomAccessFile.java:136)
        at ucar.unidata.io.http.HTTPRandomAccessFile.<init>(HTTPRandomAccessFile.java:60)
        at ucar.unidata.io.http.HTTPRandomAccessFile.<init>(HTTPRandomAccessFile.java:40)
        at ucar.nc2.NetcdfFile.getRaf(NetcdfFile.java:448)
        at ucar.nc2.NetcdfFile.open(NetcdfFile.java:338)
        at ucar.nc2.NetcdfFile.open(NetcdfFile.java:305)
        at ucar.nc2.NetcdfFile.open(NetcdfFile.java:290)
        at ucar.nc2.NetcdfFile.open(NetcdfFile.java:278)
        at wres.io.reading.nwm.NWMTimeSeries.openFile(NWMTimeSeries.java:194)
        ... 52 more
Caused by: org.apache.http.conn.HttpHostConnectException: Connect to fqdn-of-server:80 [fqdn-of-server/x.x.x.x] failed: Connection timed out: connect
        at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:156)
        at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:374)
        at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:393)
        at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236)
        at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186)
        at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89)
        at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
        at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185)
        at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:72)
        at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:56)
        at ucar.httpservices.HTTPMethod.executeRaw(HTTPMethod.java:366)
        ... 62 more
Caused by: java.net.ConnectException: Connection timed out: connect
        at java.base/java.net.PlainSocketImpl.waitForConnect(Native Method)
        at java.base/java.net.PlainSocketImpl.socketConnect(PlainSocketImpl.java:107)
        at java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:399)
        at java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:242)
        at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:224)
        at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:403)
        at java.base/java.net.Socket.connect(Socket.java:591)
        at org.apache.http.conn.socket.PlainConnectionSocketFactory.connectSocket(PlainConnectionSocketFactory.java:75)
        at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:142)
        ... 72 more

Creating Groups with NetcdfFileWriter.addGroup not working

I use netcdf-java 5.2.0 on Windows with JDK 1.8.0. The file is in netCDF-4 format (HDF5-based).

It fails when I try to create a netCDF group with:

NetcdfFileWriter n = NetcdfFileWriter.openExisting(filePath);
n.setRedefineMode(true);
Group rootGroup = n.addGroup(null, "");
n.addGroup(rootGroup, "test");
n.setRedefineMode(false);

I get :

java.lang.NullPointerException
	at ucar.nc2.jni.netcdf.Nc4Iosp.updateDimensions(Nc4Iosp.java:445)
	at ucar.nc2.jni.netcdf.Nc4Iosp.updateDimensions(Nc4Iosp.java:496)
	at ucar.nc2.jni.netcdf.Nc4Iosp.flush(Nc4Iosp.java:3502)
	at ucar.nc2.NetcdfFileWriter.rewrite(NetcdfFileWriter.java:938)
	at ucar.nc2.NetcdfFileWriter.setRedefineMode(NetcdfFileWriter.java:928)
....

Cannot open HDF-EOS swath file with unlimited dimension

A user reported that Panoply could not open an HDF-EOS file and there was a "Conflicting Dimensions" exception message. The same occurs using toolsUI and IDV. Although the copy of Panoply was using netCDF-Java 5.1, I encountered the same problem when investigating using 5.3-SNAPSHOT.

The exception is thrown at line 262 of ucar.nc2.iosp.hdf4.HdfEos. After adding a lot of additional logging messages, I eventually found that the particular swath file has metadata stating that the time dimension nTime has length 1, like so:

	GROUP=SWATH_1
		SwathName="MOP02"
		GROUP=Dimension
			OBJECT=Dimension_1
				DimensionName="Unlim"
				Size=-1
			END_OBJECT=Dimension_1
			OBJECT=Dimension_2
				DimensionName="nTime"
				Size=1
			END_OBJECT=Dimension_2

However, the time dimension actually has 221967 steps, and the various variables that use that dimension specify a MaxdimList of unlimited, e.g.,

			OBJECT=DataField_2
				DataFieldName="SolarZenithAngle"
				DataType=H5T_NATIVE_FLOAT
				DimList=("nTime")
				MaxdimList=("Unlim")
			END_OBJECT=DataField_2

So it seems that on trying to acquire the dataset, the netCDF-Java library is figuring out that there are 221967 timesteps in the example file, but when it circles around and does some testing to verify that this is an HDF-EOS file, it's running into that bit of metadata that says the nTime dimension has size 1. And thus the conflicting dimensions exception gets thrown.

I commented out that exception throw and was then able to open the file and plot the data within without problem. However, I don't know that that's the best solution for this problem.

For an example dataset, I ran my tests using ftp://l5ftl01.larc.nasa.gov/pub/MOPITT/MOP02J.008/2017.08.17/MOP02J-20170807-L2V18.0.3.he5

CF-Radial not recognized as radial feature type.

Add a Convention parser that handles CF-Radial Convention.
Currently there is none and those files are not recognized as radial data types.

The test files are in ~cdmUnitTest/conventions/cfradial. Are these current or are there more recent examples? Has the spec evolved? Is it being used?

To do this we need a medium/long range strategy for radial feature types. What should the API look like? Is there feedback from MetPy/Siphon and the radar experts?

Deprecate ucar.nc2.dt

Deprecate classes in ucar.nc2.dt.

  • ucar.nc2.dt.grid deprecated in favor of ucar.nc2.ft2.coverage
  • ucar.nc2.dt.radial deprecated in favor of ucar.nc2.ft.radial

Actual removal of ucar.nc2.dt will not occur until netcdf-java v7.

cdm_data_type confusion

I don't think this is actually a problem with the NetCDF Java code, but wanted to get some clarification before I ask others to update the ACDD and NODC netCDF feature template documentation.

There seems to be some incorrect info on those pages on the cdm_data_type global attribute. As I understand it this attribute is specifically for NetCDF Java, and is used as an explicit way to designate the intended FeatureType.

Specifically, the problem applies to timeSeriesProfile (although I haven't experimented with the trajectory types). According to the CF class, the appropriate cdm_data_type (FeatureType) for timeSeriesProfile is STATION_PROFILE. However, the ACDD and NODC documents and example data sets don't seem to be aware of this cdm_data_type, suggesting STATION instead.

This causes problems with NetCDF Java. FeatureTypes/PointFeature seems to work fine with timeSeriesProfile data sets using cdm_data_type STATION, but FeatureTypes/FeatureScan reports an error:

Table TopScalars/PsuedoStructure(time)/MultidimPseudo(time,z) featureType STATION_PROFILE doesnt match desired type STATION
**Failed to find FeatureDatasetFactory for= /media/store/dl/problem.nc datatype=STATION

Language from the NODC template page:

These data types do not map equally to the CF feature types. If the CF feature type = Trajectory Time Series, use "Trajectory"; if Point, Profile, or Time Series Profile, use "Station".

The example NODC timeSeriesProfile cdl uses cdm_data_type STATION and produces above error.

The ACDD wiki page lists an incorrect set of possible values for cdm_data_type:

Current values: vector, grid, textTable, tin, stereoModel, video.

It also links to the THREDDS InvCatalogSpec page (http://www.unidata.ucar.edu/software/thredds/current/tds/catalog/InvCatalogSpec.html#dataType), which lists an incomplete set of possible data type values and doesn't include STATION_PROFILE.

I'm guessing this is just a documentation problem. If so, it seems like three changes need to happen:

  • Contact NODC to update their example data sets and suggested mapping documentation with the correct cdm_data_type mappings.
  • Contact ACDD to update their wiki with the correct list of possible values.
  • Update the THREDDS InvCatalogSpec page to list all possible values for dataType.

Does that sound right? Based on CF.FeatureType, these seem to be the correct mappings:

CF                  netCDF-Java
point               POINT
profile             PROFILE
timeSeries          STATION
timeSeriesProfile   STATION_PROFILE
trajectory          TRAJECTORY
trajectoryProfile   SECTION

For context, I found these problems after following the NODC template guidelines in the netCDF encoder I wrote for the IOOS 52n SOS project. I have now switched my code to get the cdm_data_type directly from CF.FeatureType.convert (https://github.com/ioos/i52n-sos/blob/master/coding-ioos-netcdf/src/main/java/org/n52/sos/encode/AbstractIoosNetcdfEncoder.java#L241), but I want to get the documentation in the wild corrected.

Presence of libjnidispatch.jnilib triggers Mac notarization failure

I am trying to finish packaging up a copy of Panoply for macOS using netCDF-Java 5.3 SNAPSHOT. Everything is great right up until almost the end when I have to get the disk image DMG notarized by Apple. That fails because of the presence of libjnidispatch.jnilib in the NJ jar, i.e., netcdfAll-5.3.0-SNAPSHOT.jar/com/sun/jna/darwin/libjnidispatch.jnilib. In short, the notarization process is complaining that the libjnidispatch.jnilib binary is not properly signed.

Note that this problem did not occur last Friday (January 31) when I was making a Mac package, but I have a dim recollection that Apple was going to be closing some loophole for notarizing Java-based apps effective February 1. Or perhaps that was for apps built on an old Java, such as the Java 8 that NJ is based on.

I realize that dealing with this is pretty much outside Unidata's purview, but it's something that could completely break my ability to distribute a netCDF-Java based app to macOS users with an up-to-date operating system. But I am wondering what libjnidispatch.jnilib is used for in NJ, and whether it's really necessary?

ETA: I see that JNI is necessary in order to use the library to write NC4 files. A quick and dirty test suggests that removing the offending jnilib from the NJ jar does not cause any breakage when opening and reading datasets, but I will need to test that more thoroughly before relying on it.

Nexrad2IOServiceProvider.isNEXRAD2Format() failing on hi res

https://jenkins-aws.unidata.ucar.edu/job/netcdf-java/lastCompletedBuild/testReport/ucar.nc2.iosp.nexrad2/TestNexrad2HiResolution/testRead/

java.io.IOException: java.lang.NumberFormatException: For input string: "IVE2"
at ucar.nc2.NetcdfFile.open(NetcdfFile.java:366)
at ucar.nc2.dataset.NetcdfDataset.openProtocolOrFile(NetcdfDataset.java:814)
at ucar.nc2.dataset.NetcdfDataset.openFile(NetcdfDataset.java:673)
at ucar.nc2.iosp.nexrad2.TestNexrad2HiResolution$MyAct.doAct(TestNexrad2HiResolution.java:54)
at ucar.unidata.util.test.TestDir.actOnAll(TestDir.java:299)
at ucar.unidata.util.test.TestDir.actOnAll(TestDir.java:243)
at ucar.nc2.iosp.nexrad2.TestNexrad2HiResolution.testRead(TestNexrad2HiResolution.java:30)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

Appears to be from Ryan's change on 11/20 at revision:

54a554c

Possible clarification of GRIB 2D Datasets with regards to CF

The 2D GRIB collection datasets currently do something that CF recommends against. It is not a violation of a requirement, only of a recommendation.

As an easy-to-see example (not a TDS issue, since the code lives in the grib module of netCDF-Java, but it is easier to see through the TDS), if we look at the CDL representation of the GFS 80km dataset on thredds-test (CDL shown here via cdmremote), we see things like the following:

netcdf grib/NCEP/GFS/CONUS_80km/TwoD {
  dimensions:
    reftime = 124;
    time2 = 41;

  variables:
    double reftime(reftime=124);
      :units = "Hour since 2019-10-20T00:00:00Z";

    double time2(reftime=124, time2=41);
      :units = "Hour since 2019-10-20T00:00:00Z";

    float Pressure_surface(reftime=124, time2=41, y=65, x=93);
      :units = "Pa";
      :coordinates = "reftime time2 y x ";

The main issue here, I think, is with the use of time2 (and the other multidimensional coordinate variables), specifically that the variable time2 is two-dimensional and there exists a dimension with the same name. The CF spec says:

We recommend that the name of a multidimensional coordinate variable should not match the name of any of its dimensions because that precludes supplying a coordinate variable for the dimension. This practice also avoids potential bugs in applications that determine coordinate variables by only checking for a name match between a dimension and a variable and not checking that the variable is one dimensional.

(see http://cfconventions.org/Data/cf-conventions/cf-conventions-1.7/cf-conventions.html#coordinate-system).

Strictly speaking, what we do here is CF compliant, but it can cause confusion for clients that do not check whether the variable of a matching variable/dimension name pair meets the requirement of being a coordinate variable (that is, the variable is 1D), and so it is not recommended.
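
For clients, the robust check is the one the CF text describes: require both the name match and that the variable is one dimensional. A minimal sketch in Java (the helper itself is illustrative, not part of the library):

import ucar.nc2.Variable;

// Illustrative helper: treat a variable as a classic coordinate variable only if it is 1D
// and its single dimension has the same name, rather than relying on a name match alone.
static boolean isClassicCoordinateVariable(Variable v) {
  return v.getRank() == 1
      && v.getShortName().equals(v.getDimension(0).getShortName());
}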

One thing we could do to help clarify the situation is to simply rename the time* dimensions to something like time*_dim, or rename the time* variables to something like valid_time*.

As a side note, if we did change the name of either the dimensions or the variables, it might be nice if we could introduce a new 1D variable with the same name as the dimension, something simply like "record_number". That's ugly, but what it would do is allow OPeNDAP to stop exposing a 2D Map for the variables in our "Full" GRIB collections, which is very much not allowed by the OPeNDAP spec (the Maps for a Grid must be 1D). I'll open an issue over on https://github.com/Unidata/tds to capture that bug, as it's certainly a bug related to the CDM -> DAP2 data model translation.

dap4.test.TestNc4Iosp failure on Jenkins

We have a new failure on Jenkins for dap4.test.TestNc4Iosp.testNc4Iosp. This started showing up when the way we load IOSPs changed with PR 101 (see https://github.com/Unidata/netcdf-java/pull/101/files for the changes).

My suspicion is that the change caused the dap4 library to use the Hdf5Iosp instead of the Nc4Iosp. The code should handle both cases, but I think what we see is that there is a difference in the way the two IOSPs present variable metadata (in this case, something about the way enum types are handled). Here is the output from Jenkins:

Netcdf-c library version: 4.6.1 of Mar 30 2018 02:30:19 $
Testcase: /home/ubuntu/jenkins/workspace/netcdf-java/dap4/d4tests/src/test/data/resources/testfiles/test_one_var.nc
Testpath: /home/ubuntu/jenkins/workspace/netcdf-java/dap4/d4tests/src/test/data/resources/testfiles/test_one_var.nc
Baseline: /home/ubuntu/jenkins/workspace/netcdf-java/dap4/d4tests/src/test/data/resources/TestIosp/baseline/test_one_var.nc.nc4
DMR Comparison:
Files are Identical
DATA Comparison:
Files are Identical
Testcase: /home/ubuntu/jenkins/workspace/netcdf-java/dap4/d4tests/src/test/data/resources/testfiles/test_one_vararray.nc
Testpath: /home/ubuntu/jenkins/workspace/netcdf-java/dap4/d4tests/src/test/data/resources/testfiles/test_one_vararray.nc
Baseline: /home/ubuntu/jenkins/workspace/netcdf-java/dap4/d4tests/src/test/data/resources/TestIosp/baseline/test_one_vararray.nc.nc4
DMR Comparison:
Files are Identical
DATA Comparison:
Files are Identical
Testcase: /home/ubuntu/jenkins/workspace/netcdf-java/dap4/d4tests/src/test/data/resources/testfiles/test_atomic_types.nc
Testpath: /home/ubuntu/jenkins/workspace/netcdf-java/dap4/d4tests/src/test/data/resources/testfiles/test_atomic_types.nc
Baseline: /home/ubuntu/jenkins/workspace/netcdf-java/dap4/d4tests/src/test/data/resources/TestIosp/baseline/test_atomic_types.nc.nc4
DMR Comparison:
>>>> 18 CHANGED FROM
    enum cloud_class_t primary_cloud;
>>>>     CHANGED TO
    enum primary_cloud primary_cloud;
>>>> 20 CHANGED FROM
    enum cloud_class_t secondary_cloud;
>>>>     CHANGED TO
    enum secondary_cloud secondary_cloud;
>>>> Dap4 Testing: End of differences.
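
One quick way to confirm the suspicion above would be a diagnostic run against one of the failing test files, something like the sketch below. The path is taken from the Jenkins output; the assumption is that the HDF5 and NC4 IOSPs report different file type ids:

import ucar.nc2.NetcdfFile;

// Diagnostic sketch: open the test file directly and report which file type id
// (and therefore, indirectly, which IOSP) claimed it.
public class WhichIosp {
  public static void main(String[] args) throws Exception {
    String path = "/home/ubuntu/jenkins/workspace/netcdf-java/dap4/d4tests/src/test/data/resources/testfiles/test_atomic_types.nc";
    try (NetcdfFile ncfile = NetcdfFile.open(path)) {
      System.out.println("file type = " + ncfile.getFileTypeId());
    }
  }
}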

Split cdm module

Related to #105

Split the cdm module into the following new modules:

  • cdm-base (netcdf3, netcdf4, hdf5, hdf4)
  • cdm-radial
  • cdm-image (was clcommon)
  • cdm-misc
    • geotiff
    • dmsp
    • grads
    • misc
    • noaa
    • shapefile (move from uibase)

ucar.util.prefs and the TDS

The removal of ucar.util.prefs from cdm impacted the TDS (Unidata/tds#23) (oops). I need to set up a Jenkins run (hopefully tomorrow) to check PRs against netCDF-Java and make sure the TDS still at least compiles (not a show stopper for a PR, but at least we'd have a heads-up), but for now, this issue exists. Maybe the TDS does not need to be storing preferences this way, so one option is to stop the TDS from doing that. If it's needed, then we can do one of a few things:

  1. Copy the same code into the TDS codebase (barf)
  2. Have TDS depend on uibase (feels dirty)
  3. Create a new utility focused module (meh...)
  4. Move ucar.util.prefs from uibase to an existing module
    • maybe clcommon, although that does not feel great because clcommon → "Client-side common library" (but the TDS depends on that already and it's not client-side, so maybe it's not so bad?)

Exception opening remote GRIB datasets

@lesserwhirls, I'm trying to determine if an issue related to loading remote GRIB files was ever addressed. Specifically I'm looking at Unidata/thredds#797

I had a user write in today about trying to load remote GRIB-2 files and I found it was reporting the severe "reading/Creating gbx9 index for file" exception in Grib2CollectionBuilder.

I had a dim recollection that I'd encountered trouble in the past with loading remote GRIB-2 files, and der Google led me right to Unidata/thredds#797

Although my prior issue was using NJ 4.6, I used the latest 5.3 SNAPSHOT today. @cofinoa commented previously that the problem did not occur with NJ 4.2 but that things got broken in 4.3.

Confusingly, there were also some GRIB-2 files in the same remote directory that Grib2Iosp wouldn't claim, and so they were instead reported as "not a valid CDM file".
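
For reference, the access pattern in question is just a plain open of a remote GRIB-2 URL, roughly like the sketch below (the URL is a placeholder, not the user's actual dataset):

import ucar.nc2.dataset.NetcdfDataset;

// Sketch of the failing access pattern: open a GRIB-2 file over HTTP and let
// netCDF-Java attempt to build its gbx9 index for it.
public class RemoteGribOpen {
  public static void main(String[] args) throws Exception {
    String url = "https://example.com/data/model_output.grib2"; // placeholder URL
    try (NetcdfDataset ncd = NetcdfDataset.openDataset(url)) {
      System.out.println(ncd.getFileTypeDescription());
    }
  }
}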

5.x Client Catalog API and Docs

API docs need to be added to several of the classes down in thredds.client.catalog. For example, in:

https://github.com/Unidata/netcdf-java/blob/master/cdm/core/src/main/java/thredds/client/catalog/Catalog.java

The docs should at least point out that you need to use the builders:

https://github.com/Unidata/netcdf-java/tree/master/cdm/core/src/main/java/thredds/client/catalog/builder

(specifically CatalogBuilder for those wanting to read a THREDDS Client Catalog).
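
Something like the following sketch would be a good candidate for those docs. Note the exact CatalogBuilder method names (buildFromURI and the error-checking accessors) and the Catalog.getDatasets() accessor are written from memory here and should be checked against the actual API:

import java.net.URI;
import thredds.client.catalog.Catalog;
import thredds.client.catalog.Dataset;
import thredds.client.catalog.builder.CatalogBuilder;

// Hedged sketch: read a THREDDS client catalog and list its top-level datasets.
public class ReadCatalog {
  public static void main(String[] args) throws Exception {
    CatalogBuilder builder = new CatalogBuilder();
    Catalog catalog = builder.buildFromURI(
        new URI("https://thredds.ucar.edu/thredds/catalog/catalog.xml"));
    if (builder.hasFatalError()) {
      System.out.println("Failed to read catalog: " + builder.getErrorMessage());
      return;
    }
    for (Dataset ds : catalog.getDatasets()) {
      System.out.println(ds.getName());
    }
  }
}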

Also need to move https://github.com/Unidata/netcdf-java/blob/master/docs/src/private/website/netcdf-java/reference/ThreddsCatalogs.adoc into the main documentation set, as well as beef it up with some tangible examples (e.g. working with THREDDS Metadata, following catalogRef's).

Need to add thredds.client.catalog to the public API generation so that it is accessible via https://docs.unidata.ucar.edu/netcdf-java/<version>/javadoc/

Upgrade to current gradle release

Upgrade the project to use the latest version of gradle (currently using v3.5.1, latest is v5.6.2). This will allow us to at least attempt to build and test the project with Java 11 (although still only using Java 8 features).

Source code for version 4.6.14

  • version with which you are encountering an issue: 4.6.14
  • environmental information: N/A
  • a description:
    I'm using netcdf-java in a project, but I encountered issues with a conflicting google-guava dependency: it seems to be bundled inside netcdf-java, but not at the same version as the one my project uses.
    Thus, I'm looking for the codebase of 4.6.14 so I can double-check the dependencies and versions. Where is it? Also, could you explain a bit about the build process so I can work out how to overcome the library collision I'm facing?

Note: the release note (here: https://www.unidata.ucar.edu/blogs/news/entry/netcdf-java-library-and-tds9) points to this repo, but there is no tag for 4.6.14.

If you have a general question about the software, please view our Suggested Support Process.
