
schemacrawler / schemacrawler


Free database schema discovery and comprehension tool

Home Page: http://www.schemacrawler.com/

License: Other

Java 65.80% Clean 0.12% HTML 32.32% Batchfile 0.07% Shell 0.08% JavaScript 0.04% Groovy 0.01% Python 0.22% Ruby 0.01% CSS 0.06% FreeMarker 0.01% PLSQL 0.16% SQLPL 0.13% Dockerfile 0.05% Roff 0.14% Mustache 0.01% TSQL 0.76% PLpgSQL 0.01%
schemacrawler jdbc database-diagrams schema java database database-document database-documentation database-schema documentation

schemacrawler's Introduction



SchemaCrawler

About

SchemaCrawler is a free database schema discovery and comprehension tool. SchemaCrawler has a good mix of useful features for data governance. You can search for database schema objects using regular expressions, and output the schema and data in a readable text format. The output serves for database documentation, and is designed to be diff-ed against other database schemas. SchemaCrawler also generates schema diagrams. You can execute scripts in any standard scripting language against your database. You can find potential schema design issues with lint.

SchemaCrawler supports almost any database that has a JDBC driver, but for convenience is bundled with drivers for some commonly used RDBMS systems. SchemaCrawler works with any operating system that supports Java SE 8 or better.

SchemaCrawler is also a Java API that makes working with database metadata as easy as working with plain old Java objects.
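For illustration, here is a minimal sketch of what using the API can look like. SchemaCrawlerUtility, Catalog, Schema and Table are part of the SchemaCrawler API, but the options classes vary between releases, and the connection URL and credentials below are only placeholders; verify against the documentation for your version.

import java.sql.Connection;
import java.sql.DriverManager;

import schemacrawler.schema.Catalog;
import schemacrawler.schema.Schema;
import schemacrawler.schema.Table;
import schemacrawler.schemacrawler.SchemaCrawlerOptions;
import schemacrawler.utility.SchemaCrawlerUtility;

public final class ListTables {
  public static void main(final String[] args) throws Exception {
    // Placeholder connection URL and credentials
    try (Connection connection = DriverManager
        .getConnection("jdbc:hsqldb:hsql://localhost/schemacrawler", "sa", "")) {
      // Crawl the database metadata into plain Java objects
      final SchemaCrawlerOptions options = new SchemaCrawlerOptions();
      final Catalog catalog = SchemaCrawlerUtility.getCatalog(connection, options);
      for (final Schema schema : catalog.getSchemas()) {
        for (final Table table : catalog.getTables(schema)) {
          System.out.println(schema.getFullName() + "." + table.getName());
        }
      }
    }
  }
}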

Licensing

SchemaCrawler is available under a number of licenses.

Distributions and Downloads

Explore the SchemaCrawler command-line with a live online tutorial.

SchemaCrawler is available in a number of formats, including a download with examples, source code examples, plugin starters, a Maven reporting plugin, Docker containers, operating system specific installers, and jars on The Central Repository. For a complete list, see downloads and distributions.

Support

Please get support on Stack Overflow, following the Guidelines for Support.

schemacrawler's People

Contributors

dependabot[bot], schemacrawler, sualeh


schemacrawler's Issues

Exact meaning of surrogate key

Hi @sualeh, sorry for asking the same question once more, but it seems that we do not really understand the exact meaning of the surrogate key lint.

Could you help us by adding additional explanations about this lint on your website?

Thank you in advance for your help.

Adrien & Michèle

How to generate diagram with schemacrawler-maven-plugin?

I am struggling to find the correct configuration between the POM and property files to generate a diagram.
Is it possible to do so, or is the Maven plugin not able to generate diagrams?

I'm using these versions:

<groupId>net.sourceforge.schemacrawler</groupId>
<artifactId>schemacrawler-maven-plugin</artifactId>
<version>9.5</version>
<groupId>us.fatehi</groupId>
<artifactId>schemacrawler</artifactId>
<version>12.06.03</version>        (the latest version I found that works with Java 7)

[Continuous build] Log improvement: adding a lint summary on build failure

Hello Sualeh,

First of all, thanks for the "dispatch" feature you added last week. Continuous DB quality checks are now running on our projects on GitHub/Travis; it's really cool.

Here's the proof: ;)

selection_046

Detailed logs below:

selection_047

We also use Slack (https://slack.com/) for instant messaging and its famous notifications. Each time a build is performed on Travis, the result is sent to Slack and displayed in the project channel.

selection_049

As you can see, we customized the icon for the build based on SchemaCrawler ;)

Enhancement proposal

Now, we realize that some enhancements could be made regarding the logs, and especially the build result.
In fact, when the build fails, the only readable message we get is Too many lints were found, but we don't get details (table, column, lint name...) about what caused the build to fail. Even a summary like the result of the mvn test command could be helpful.


Mar 08, 2016 5:46:11 AM io.github.mbarre.schemacrawler.tool.linter.LinterColumnSize lint
INFO: Checking public.pieceafournir.commentaire...
Mar 08, 2016 5:46:11 AM io.github.mbarre.schemacrawler.tool.linter.LinterColumnSize lint
INFO: Checking public.pieceafournir.libelle...
Mar 08, 2016 5:46:11 AM io.github.mbarre.schemacrawler.tool.linter.LinterColumnSize lint
INFO: Checking public.pieceafournir.justification...
Mar 08, 2016 5:46:11 AM io.github.mbarre.schemacrawler.tool.linter.LinterColumnSize lint
INFO: Checking public.rendez_vous.commentaire...
Mar 08, 2016 5:46:11 AM schemacrawler.tools.iosource.FileOutputResource openNewOutputWriter
INFO: Opened output writer to file, /home/travis/build/DSI-Ville-Noumea/pdc-liquibase/pdc_lints.html
Mar 08, 2016 5:46:11 AM schemacrawler.tools.iosource.OutputWriter close
INFO: Closing output writer
Mar 08, 2016 5:46:12 AM schemacrawler.tools.lint.executable.LintDispatcher lambda$dispatch$2
SEVERE: Too many schema lints were found
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 18.469 s
[INFO] Finished at: 2016-03-08T05:46:12+00:00
[INFO] Final Memory: 6M/84M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.codehaus.mojo:exec-maven-plugin:1.4.0:exec (default-cli) on project pdc-liquibase: Command execution failed. Process exited with an error: 1 (Exit value: 1) -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException

travis_time:end:2c0057b8:start=1457415950401687923,finish=1457415972324604875,duration=21922916952

The command "mvn exec:exec" failed and exited with 1 during .

Your build has been stopped.

So we are thinking about something like this:

Results :

Failed lints: 
io.github.mbarre.schemacrawler.tool.linter.LinterColumnSize : table1.column1, table3.column3 ...
io.github.mbarre.schemacrawler.tool.linter.LinterTableWithNoPrimaryKey : table2

Lints run: 17, Failures: 2

[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 29.800 s
[INFO] Finished at: 2016-03-09T11:24:43+11:00
[INFO] Final Memory: 25M/295M
[INFO] ------------------------------------------------------------------------

Let us know what you think about it.

[Config file] <table-exclusion-pattern> value is lost when the Java configure method is overridden

I tried to exclude some tables from the lints.

I realized that my linter Java classes that override the configure method ignore the <table-exclusion-pattern> value in the config file.

<linter id="io.github.mbarre.schemacrawler.tool.linter.LinterColumnContentNotNormalized">
  <run>true</run>
  <table-exclusion-pattern><![CDATA[pdc_adm\.((databasechangeloglock)|(databasechangelog))]]></table-exclusion-pattern>
  <config>
    <property name="nbRepeatTolerance">2</property>
    <property name="minTextColumnSize">2</property>
  </config>
</linter>

@Override
public void configure(LinterConfig linterConfig) {
    nbRepeatTolerance = linterConfig.getConfig().getIntegerValue(NB_REPEAT_TOLERANCE_CONFIG, NB_REPEAT_TOLERANCE);
    minTextColumnSize = linterConfig.getConfig().getIntegerValue(MIN_TEXT_COLUMN_SIZE_CONFIG, MIN_TEXT_COLUMN_SIZE);
}

If I comment out the configure method, the <table-exclusion-pattern> is taken into account.
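If BaseLinter exposes a concrete configure implementation, one possible workaround, sketched below and not verified against the BaseLinter source, is to call it before reading the linter-specific properties, so that the base handling of <table-exclusion-pattern> is preserved:

@Override
public void configure(LinterConfig linterConfig) {
    // Assumption: the base class applies <table-exclusion-pattern> in its own configure
    super.configure(linterConfig);
    nbRepeatTolerance = linterConfig.getConfig().getIntegerValue(NB_REPEAT_TOLERANCE_CONFIG, NB_REPEAT_TOLERANCE);
    minTextColumnSize = linterConfig.getConfig().getIntegerValue(MIN_TEXT_COLUMN_SIZE_CONFIG, MIN_TEXT_COLUMN_SIZE);
}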

Schema crawling

Hello, Sualeh

As I understand it, SchemaCrawler first searches for catalogs by regular expression, and then the schema should be retrieved by calling Catalog.lookupSchema. I use the include filter "^?" + table_name + "?$", since table_name could be with or without apostrophes. For example, if I try to crawl a database named "new", I retrieve the catalog "new", but lookupSchema returns an empty Optional. Is it possible to use lookupSchema with a regexp?

Thank you.
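As far as I can tell, lookupSchema does an exact-name lookup rather than a regular-expression match. Below is a minimal sketch of matching schemas by regex over Catalog.getSchemas() instead; Catalog and Schema are from the SchemaCrawler API, and the pattern handling here is only an example.

import java.util.Optional;
import java.util.regex.Pattern;

import schemacrawler.schema.Catalog;
import schemacrawler.schema.Schema;

final class SchemaLookup {
  // Find the first schema whose full name matches the given regular expression
  static Optional<Schema> lookupSchemaByPattern(final Catalog catalog, final String regex) {
    final Pattern pattern = Pattern.compile(regex);
    return catalog.getSchemas().stream()
        .filter(schema -> pattern.matcher(schema.getFullName()).matches())
        .findFirst();
  }
}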

Support for optionally retrieving reserved words

We ran into a case where the database generated an exception when attempting to fetch the reserved words. It ended up being an issue related to database permissions. It would be great if this could be an optional feature, possibly exposed through the DatabaseSpecificOverrideOptions class.
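A minimal sketch of the kind of guard being asked for, not SchemaCrawler's actual retrieval code: treat reserved-word retrieval as optional and fall back to an empty set when the driver or database refuses the call.

import java.sql.DatabaseMetaData;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

final class ReservedWords {
  static Set<String> retrieve(final DatabaseMetaData metaData, final boolean retrieveReservedWords) {
    if (!retrieveReservedWords) {
      return Collections.emptySet();
    }
    try {
      // getSQLKeywords returns a comma-separated list of the database's extra keywords
      return new HashSet<>(Arrays.asList(metaData.getSQLKeywords().split(",")));
    } catch (final Exception e) {
      // For example, a permissions problem on the underlying data dictionary
      return Collections.emptySet();
    }
  }
}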

Run graph from command line without arguments (-g=<config-file>)

Goal

Be able to generate a graph (or lint report) from a single, short schemacrawler command line: ideally, just running:

schemacrawler

would use the config file to generate a graph or lint report

Description

Following the command-line docs:

schemacrawler -g=schemacrawler.config.properties

or

schemacrawler -g

should do the trick, with schemacrawler.config.properties containing:

server=postgresql
host=svi-postgres0.site-mairie.noumea.nc
port=5432
database=contratspart
user=contratspart_adm
password=******
infolevel=maximum
schemas=contratspart_adm
c=graph
outputfile=contratspart_adm.png
outputformat=png

Unfortunately, I get this message:

➜  tmp  schemacrawler -g=schemacrawler.config.properties
No database connection URL provided
Re-run SchemaCrawler with the
-?
option for help
Or, re-run SchemaCrawler with an additional
-loglevel=CONFIG
option for details on the error

With more details, I get:

option for details on the error
Feb 09, 2016 10:11:48 AM us.fatehi.commandlineparser.CommandLineUtility logSafeArguments
INFO: SchemaCrawler, v14.05.05
Feb 09, 2016 10:11:48 AM us.fatehi.commandlineparser.CommandLineUtility logSafeArguments
INFO: Command line: 
-g=schemacrawler.config.properties
-loglevel=CONFIG
Feb 09, 2016 10:11:48 AM us.fatehi.commandlineparser.CommandLineUtility logFullStackTrace
SEVERE: No database connection URL provided
schemacrawler.schemacrawler.SchemaCrawlerCommandLineException: No database connection URL provided
        at schemacrawler.tools.commandline.CommandLineConnectionOptionsParser.loadConfig(CommandLineConnectionOptionsParser.java:57)
        at schemacrawler.tools.commandline.SchemaCrawlerCommandLine.parseConnectionOptions(SchemaCrawlerCommandLine.java:188)
        at schemacrawler.tools.commandline.SchemaCrawlerCommandLine.<init>(SchemaCrawlerCommandLine.java:91)
        at schemacrawler.Main.main(Main.java:78)

It really seems to expect a JDBC URL, whereas all the database connection properties have been provided.

Any idea what I'm doing wrong?

Thank you in advance for your help.

Best Regards

PS:

➜  tmp  schemacrawler -version
SchemaCrawler 14.05.05
Database schema discovery and comprehension tool
Copyright (c) 2000-2016, Sualeh Fatehi <[email protected]>.

You can search for database schema objects using regular expressions, 
and output the schema and data in a readable text format. You can find 
potential schema design issues with lint. The output serves for 
database documentation is designed to be diff-ed against other database 
schemas. SchemaCrawler also generates schema diagrams.

Error generating diagram from Oracle Schema in pdf format

Congratulations on your work!!!

Feb 12, 2016 10:46:41 AM sf.util.DatabaseUtility executeSql
WARNING: Error executing SQL, SELECT /*+ PARALLEL(AUTO) */
NULL AS TABLE_CATALOG,
TABLES.OWNER AS TABLE_SCHEMA,
TABLES.TABLE_NAME,
DBMS_METADATA.GET_DDL('TABLE', TABLES.TABLE_NAME, TABLES.OWNER)
AS TABLE_DEFINITION
FROM
ALL_TABLES TABLES
INNER JOIN ALL_USERS USERS
ON TABLES.OWNER = USERS.USERNAME
WHERE
USERS.USERNAME NOT IN
('ANONYMOUS', 'APEX_PUBLIC_USER', 'BI', 'CTXSYS', 'DBSNMP', 'DIP',
'EXFSYS', 'FLOWS_30000', 'FLOWS_FILES', 'HR', 'IX', 'LBACSYS',
'MDDATA', 'MDSYS', 'MGMT_VIEW', 'OE', 'OLAPSYS', 'ORACLE_OCM',
'ORDPLUGINS', 'ORDSYS', 'OUTLN', 'OWBSYS', 'PM', 'SCOTT', 'SH',
'SI_INFORMTN_SCHEMA', 'SPATIAL_CSW_ADMIN_USR', 'SPATIAL_WFS_ADMIN_USR',
'SYS', 'SYSMAN', 'SYSTEM', 'TSMSYS', 'WKPROXY', 'WKSYS', 'WK_TEST',
'WMSYS', 'XDB', 'XS$NULL')
AND NOT REGEXP_LIKE(USERS.USERNAME, '^APEX_[0-9]{6}$')
AND NOT REGEXP_LIKE(USERS.USERNAME, '^FLOWS_[0-9]{5}$')
AND TABLES.TABLE_NAME NOT LIKE 'BIN$%'
ORDER BY
TABLE_SCHEMA,
TABLE_NAME;

java.sql.SQLException: ORA-31603: object "TOAD_PLAN_TABLE" of type TABLE not found in schema "CUA_DATA_PRO"
ORA-06512: at "SYS.DBMS_METADATA", line 5088
ORA-06512: at "SYS.DBMS_METADATA", line 7589
ORA-06512: at line 1

This error occurs because TOAD_PLAN_TABLE is a global temporary table of the user/schema CUA_DATA_PRO.

It would be useful to have a flag for excluding Oracle temporary tables that, when used, adds the clause "AND TEMPORARY != 'Y'" to this query.

Thank you very much

14.01.02 > SchemaInfoLevel.standard() not compatible

Hi Sualeh,

While migrating lint code from 12.06.02 to 14.01.02... and also from JDK 1.7 to JDK 1.8,

I get a compilation issue on the following code:

options.setSchemaInfoLevel(SchemaInfoLevel.standard());

What should I put instead of this, please?

Do you have a default setRetrieve equivalent to the old standard() one?

Thank you in advance for your help.

SchemaCrawler not showing tables on Amazon RDS, only views.

Hi,

I have a problem using your tool with an instance of MySQL 5.6 running on Amazon RDS. It only shows views from some other database, while there are also tables in the django database, which is the one I'm primarily interested in.

This is how I launch SchemaCrawler on Windows:

C:\tools\schemacrawler-14.05.01-main\_schemacrawler>schemacrawler.cmd -server=mysql -host="xxxxx.xxxxxxxxx.eu-west-1.rds.amazonaws.com" 
-port=3306 -database=django 
-user=xxxxx -password=xxXXXXxxxXX 
-c=list -infolevel=standard -loglevel=INFO

It results with:

(sorry for the lack of formatting; it's launched from a Windows command prompt)

lis 27, 2015 1:19:58 AM us.fatehi.commandlineparser.CommandLineUtility logSafeAr
guments
INFO: SchemaCrawler, v14.05.01
lis 27, 2015 1:19:58 AM us.fatehi.commandlineparser.CommandLineUtility logSafeAr
guments
INFO: Command line:
-server=mysql
-host=xxxxx.xxXXXxxx.eu-west-1.rds.amazonaws.com
-port=3306
-database=django
-user=xxxx
-password=*****
-c=list
-infolevel=standard
-loglevel=INFO
lis 27, 2015 1:19:58 AM schemacrawler.tools.commandline.SchemaCrawlerCommandLine
 <init>
INFO: Using database plugin, mysql - MySQL
lis 27, 2015 1:19:59 AM schemacrawler.tools.commandline.SchemaCrawlerOptionsPars
er getOptions
WARNING: Please provide a -schemas option for efficient retrieval of database me
tadata
lis 27, 2015 1:19:59 AM schemacrawler.schemacrawler.BaseDatabaseConnectionOption
s getConnection
INFO: Making connection to jdbc:mysql://xxxxx.xxXXXxxx.eu-west-1.rds.amazo
naws.com:3306/django?logger=Jdk14Logger&dumpQueriesOnException=true&dumpMetadata
OnColumnNotFound=true&profileSQL=true&maxQuerySizeToLog=4096
for user 'xxxx', with properties {}
lis 27, 2015 1:19:59 AM schemacrawler.schemacrawler.BaseDatabaseConnectionOption
s getConnection
INFO: Opened database connection, org.mariadb.jdbc.MySQLConnection@60215eee
lis 27, 2015 1:19:59 AM schemacrawler.schemacrawler.BaseDatabaseConnectionOption
s logConnection
INFO: Connected to
MySQL 5.6.23-log
using JDBC driver
MariaDB connector/J 1.2.3
lis 27, 2015 1:19:59 AM schemacrawler.tools.executable.BaseStagedExecutable exec
ute
INFO: Executing SchemaCrawler command, "list"
lis 27, 2015 1:19:59 AM schemacrawler.crawl.SchemaCrawler crawlSchemas
INFO: Crawling schemas
lis 27, 2015 1:19:59 AM schemacrawler.crawl.SchemaRetriever retrieveAllSchemas
INFO: Retrieving all schemas
lis 27, 2015 1:19:59 AM schemacrawler.crawl.SchemaRetriever retrieveAllCatalogs
INFO: Retrieving all catalogs
lis 27, 2015 1:20:00 AM schemacrawler.crawl.SchemaCrawler crawlSchemas
INFO: Total time taken for "crawlSchemas" - 00:00:00.055 hours
-  0,0% - 00:00:00.054 - "retrieveSchemas"
-  0,0% - 00:00:00.001 - "sortAndFilterSchemas"

lis 27, 2015 1:20:00 AM schemacrawler.crawl.SchemaCrawler crawlDatabaseInfo
INFO: Crawling SchemaCrawler information
lis 27, 2015 1:20:00 AM schemacrawler.crawl.SchemaCrawler crawlDatabaseInfo
INFO: Retrieving database information
lis 27, 2015 1:20:00 AM schemacrawler.crawl.SchemaCrawler lambda$crawlDatabaseIn
fo$3
INFO: Not retrieving additional database information, since this was not request
ed
lis 27, 2015 1:20:00 AM schemacrawler.crawl.SchemaCrawler crawlDatabaseInfo
INFO: Retrieving JDBC driver information
lis 27, 2015 1:20:00 AM schemacrawler.crawl.SchemaCrawler lambda$crawlDatabaseIn
fo$5
INFO: Not retrieving additional JDBC driver information, since this was not requ
ested
lis 27, 2015 1:20:00 AM schemacrawler.crawl.SchemaCrawler crawlDatabaseInfo
INFO: Retrieving SchemaCrawler crawl information
lis 27, 2015 1:20:00 AM schemacrawler.crawl.SchemaCrawler crawlDatabaseInfo
INFO: Total time taken for "crawlDatabaseInfo" - 00:00:00.018 hours
-  0,0% - 00:00:00.000 - "retrieveDatabaseInfo"
-  0,0% - 00:00:00.002 - "retrieveAdditionalDatabaseInfo"
-  0,0% - 00:00:00.001 - "retrieveJdbcDriverInfo"
-  0,0% - 00:00:00.002 - "retrieveAdditionalJdbcDriverInfo"
-  0,0% - 00:00:00.013 - "retrieveCrawlHeaderInfo"

lis 27, 2015 1:20:00 AM schemacrawler.crawl.SchemaCrawler crawlColumnDataTypes
INFO: Crawling column data types
lis 27, 2015 1:20:00 AM schemacrawler.crawl.SchemaCrawler lambda$crawlColumnData
Types$0
INFO: Retrieving system column data types
lis 27, 2015 1:20:00 AM schemacrawler.crawl.SchemaCrawler lambda$crawlColumnData
Types$1
INFO: Not retrieving user column data types, since this was not requested
lis 27, 2015 1:20:00 AM schemacrawler.crawl.SchemaCrawler crawlColumnDataTypes
INFO: Total time taken for "crawlColumnDataTypes" - 00:00:00.031 hours
-  0,0% - 00:00:00.029 - "retrieveSystemColumnDataTypes"
-  0,0% - 00:00:00.002 - "retrieveUserDefinedColumnDataTypes"

lis 27, 2015 1:20:00 AM schemacrawler.crawl.SchemaCrawler crawlTables
INFO: Crawling tables
lis 27, 2015 1:20:00 AM schemacrawler.crawl.TableRetriever retrieveTables
INFO: Retrieving tables
lis 27, 2015 1:20:00 AM schemacrawler.crawl.TableRetriever retrieveTables
INFO: Retrieving tables
lis 27, 2015 1:20:00 AM schemacrawler.crawl.TableRetriever retrieveTables
INFO: Retrieving tables
lis 27, 2015 1:20:00 AM schemacrawler.crawl.TableRetriever retrieveTables
INFO: Retrieving tables
lis 27, 2015 1:20:00 AM schemacrawler.crawl.TableRetriever retrieveTables
INFO: Retrieving tables
lis 27, 2015 1:20:00 AM schemacrawler.crawl.TableRetriever retrieveTables
INFO: Retrieving tables
lis 27, 2015 1:20:00 AM schemacrawler.crawl.TableRetriever retrieveTables
INFO: Retrieving tables
lis 27, 2015 1:20:00 AM schemacrawler.crawl.TableRetriever retrieveTables
INFO: Retrieving tables
lis 27, 2015 1:20:00 AM schemacrawler.crawl.TableRetriever retrieveTables
INFO: Retrieving tables
lis 27, 2015 1:20:00 AM schemacrawler.crawl.TableColumnRetriever retrieveColumns

INFO: Retrieving table columns
lis 27, 2015 1:20:00 AM schemacrawler.crawl.ForeignKeyRetriever retrieveForeignK
eys
INFO: Retrieving foreign keys, using database metadata
lis 27, 2015 1:20:00 AM schemacrawler.crawl.SchemaCrawler lambda$crawlTables$21
INFO: Retrieving primary keys and indexes
lis 27, 2015 1:20:00 AM schemacrawler.crawl.SchemaCrawler crawlTables
INFO: Total time taken for "crawlTables" - 00:00:00.674 hours
-  0,0% - 00:00:00.492 - "retrieveTables"
-  0,0% - 00:00:00.172 - "retrieveColumns"
-  0,0% - 00:00:00.002 - "retrieveForeignKeys"
-  0,0% - 00:00:00.007 - "sortAndFilterTables"
-  0,0% - 00:00:00.001 - "retrieveIndexes"
-  0,0% - 00:00:00.000 - "retrieveTableConstraintInformation"
-  0,0% - 00:00:00.000 - "retrieveTriggerInformation"
-  0,0% - 00:00:00.000 - "retrieveViewInformation"
-  0,0% - 00:00:00.000 - "retrieveTableDefinitions"
-  0,0% - 00:00:00.000 - "retrieveIndexInformation"
-  0,0% - 00:00:00.000 - "retrieveAdditionalTableAttributes"
-  0,0% - 00:00:00.000 - "retrieveTablePrivileges"
-  0,0% - 00:00:00.000 - "retrieveAdditionalColumnAttributes"
-  0,0% - 00:00:00.000 - "retrieveTableColumnPrivileges"

lis 27, 2015 1:20:00 AM schemacrawler.crawl.SchemaCrawler crawlRoutines
INFO: Crawling routines
lis 27, 2015 1:20:00 AM schemacrawler.crawl.RoutineRetriever retrieveProcedures
INFO: Retrieving procedures for, bifrost
lis 27, 2015 1:20:00 AM schemacrawler.crawl.RoutineRetriever retrieveFunctions
INFO: Retrieving functions, bifrost
lis 27, 2015 1:20:00 AM schemacrawler.crawl.RoutineRetriever retrieveProcedures
INFO: Retrieving procedures for, django
lis 27, 2015 1:20:00 AM schemacrawler.crawl.RoutineRetriever retrieveFunctions
INFO: Retrieving functions, django
lis 27, 2015 1:20:01 AM schemacrawler.crawl.RoutineRetriever retrieveProcedures
INFO: Retrieving procedures for, github_selected
lis 27, 2015 1:20:01 AM schemacrawler.crawl.RoutineRetriever retrieveFunctions
INFO: Retrieving functions, github_selected
lis 27, 2015 1:20:01 AM schemacrawler.crawl.RoutineRetriever retrieveProcedures
INFO: Retrieving procedures for, information_schema
lis 27, 2015 1:20:01 AM schemacrawler.crawl.RoutineRetriever retrieveFunctions
INFO: Retrieving functions, information_schema
lis 27, 2015 1:20:01 AM schemacrawler.crawl.RoutineRetriever retrieveProcedures
INFO: Retrieving procedures for, innodb
lis 27, 2015 1:20:01 AM schemacrawler.crawl.RoutineRetriever retrieveFunctions
INFO: Retrieving functions, innodb
lis 27, 2015 1:20:01 AM schemacrawler.crawl.RoutineRetriever retrieveProcedures
INFO: Retrieving procedures for, mysql
lis 27, 2015 1:20:01 AM schemacrawler.crawl.RoutineRetriever retrieveFunctions
INFO: Retrieving functions, mysql
lis 27, 2015 1:20:01 AM schemacrawler.crawl.RoutineRetriever retrieveProcedures
INFO: Retrieving procedures for, performance_schema
lis 27, 2015 1:20:01 AM schemacrawler.crawl.RoutineRetriever retrieveFunctions
INFO: Retrieving functions, performance_schema
lis 27, 2015 1:20:01 AM schemacrawler.crawl.RoutineRetriever retrieveProcedures
INFO: Retrieving procedures for, stack
lis 27, 2015 1:20:01 AM schemacrawler.crawl.RoutineRetriever retrieveFunctions
INFO: Retrieving functions, stack
lis 27, 2015 1:20:01 AM schemacrawler.crawl.RoutineRetriever retrieveProcedures
INFO: Retrieving procedures for, tmp
lis 27, 2015 1:20:01 AM schemacrawler.crawl.RoutineRetriever retrieveFunctions
INFO: Retrieving functions, tmp
lis 27, 2015 1:20:01 AM schemacrawler.crawl.SchemaCrawler lambda$crawlRoutines$8

INFO: Retrieving routine columns
lis 27, 2015 1:20:03 AM schemacrawler.crawl.SchemaCrawler crawlRoutines
INFO: Total time taken for "crawlRoutines" - 00:00:02.238 hours
-  0,0% - 00:00:00.979 - "retrieveRoutines"
- 50,0% - 00:00:01.255 - "retrieveRoutineColumns"
-  0,0% - 00:00:00.004 - "sortAndFilterRoutines"
-  0,0% - 00:00:00.000 - "retrieveRoutineInformation"

lis 27, 2015 1:20:03 AM schemacrawler.crawl.SchemaCrawler crawlSynonyms
INFO: Not retrieving synonyms, since this was not requested
lis 27, 2015 1:20:03 AM schemacrawler.crawl.SchemaCrawler crawlSequences
INFO: Not retrieving sequences, since this was not requested
lis 27, 2015 1:20:03 AM schemacrawler.tools.executable.SchemaCrawlerExecutable e
xecuteOn
INFO: Executing command "list" using executable schemacrawler.tools.text.schema.
SchemaTextExecutable
lis 27, 2015 1:20:03 AM schemacrawler.tools.iosource.ConsoleOutputResource openN
ewOutputWriter
INFO: Opened output writer to console


System Information
========================================================================

generated by                              SchemaCrawler 14.05.01
generated on                              2015-11-27 01:20:00
database version                          MySQL 5.6.23-log
driver version                            MariaDB connector/J 1.2.3



Tables
========================================================================

github_selected.users_passions_view                               [view]
  VIEW
github_selected.users_tags_view                                   [view]
  VIEW
github_selected.users_wants_learn_view                            [view]
  VIEW



Routines
========================================================================

mysql.rds_collect_global_status_history           [procedure, no result]
mysql.rds_disable_gsh_collector                   [procedure, no result]
mysql.rds_disable_gsh_rotation                    [procedure, no result]
mysql.rds_enable_gsh_collector                    [procedure, no result]
mysql.rds_enable_gsh_rotation                     [procedure, no result]
mysql.rds_external_master                         [procedure, no result]
mysql.rds_innodb_buffer_pool_dump_now             [procedure, no result]
mysql.rds_innodb_buffer_pool_load_abort           [procedure, no result]
mysql.rds_innodb_buffer_pool_load_now             [procedure, no result]
mysql.rds_kill                                    [procedure, no result]
mysql.rds_kill_query                              [procedure, no result]
mysql.rds_next_master_log                         [procedure, no result]
mysql.rds_reset_external_master                   [procedure, no result]
mysql.rds_rotate_general_log                      [procedure, no result]
mysql.rds_rotate_global_status_history            [procedure, no result]
mysql.rds_rotate_slow_log                         [procedure, no result]
mysql.rds_set_configuration                       [procedure, no result]
mysql.rds_set_external_master                     [procedure, no result]
mysql.rds_set_gsh_collector                       [procedure, no result]
mysql.rds_set_gsh_rotation                        [procedure, no result]
mysql.rds_show_configuration                      [procedure, no result]
mysql.rds_skip_repl_error                         [procedure, no result]
mysql.rds_start_replication                       [procedure, no result]
mysql.rds_stop_replication                        [procedure, no result]

lis 27, 2015 1:20:03 AM schemacrawler.tools.iosource.OutputWriter close
INFO: Not closing output writer, since output is to an externally provided write
r

C:\tools\schemacrawler-14.05.01-main\_schemacrawler>

It shows internal RDS procedures and views from github_selected (by the way, there are also tables in github_selected!), while I'm interested only in tables from the django database.

Help appreciated,

Best Regards,
Oskar

Is possible to connect with Teiid JDV?

Dear,

I do not know if this is the ideal channel to ask this question, but thank you for understanding.

I have used SchemaCrawler to discover the structure of Postgres and MySQL databases.

I am interested in doing the same job with Teiid (http://teiid.jboss.org/) through its specific JDBC connector.

I wonder if anyone has ever done that? And what steps should be taken?

From the page http://sualeh.github.io/SchemaCrawler/plugins.html, it was not clear whether I will have to customize the generated code with the specifics of Teiid. At the moment I do not have intimate knowledge of how Teiid works.

Driver versions not corresponding to the expected ones

The driver versions in the zip file are not the expected ones:

wget https://github.com/sualeh/SchemaCrawler/releases/download/v14.06.01/schemacrawler-14.06.01-main.zip
md5sum schemacrawler-14.06.01-main.zip
# 6258aa8bebd78ce6e31190685db6557c  schemacrawler-14.06.01-main.zip
unzip schemacrawler-14.06.01-main.zip
ls -la schemacrawler-14.06.01-main/_schemacrawler/lib 
total 10208
drwxr-xr-x 2 salad74 salad74    4096 Feb  8 20:49 .
drwxr-xr-x 3 salad74 salad74    4096 Feb  8 20:49 ..
-rw-r--r-- 1 salad74 salad74 1687688 Feb  8 20:49 h2-1.4.184.jar
-rw-r--r-- 1 salad74 salad74 1493168 Feb  8 20:49 hsqldb-2.3.3.jar
-rw-r--r-- 1 salad74 salad74  317816 Feb  8 20:49 jtds-1.3.1.jar
-rw-r--r-- 1 salad74 salad74  294317 Feb  8 20:49 mariadb-java-client-1.2.3.jar
-rw-r--r-- 1 salad74 salad74  960372 Feb  8 20:49 mysql-connector-java-5.1.34.jar
-rw-r--r-- 1 salad74 salad74  592322 Feb  8 20:49 postgresql-9.3-1102-jdbc41.jar
-rw-r--r-- 1 salad74 salad74  592416 Feb  8 20:49 schemacrawler-14.06.01.jar
-rw-r--r-- 1 salad74 salad74  148772 Feb  8 20:49 schemacrawler-api-14.06.01-tests.jar
-rw-r--r-- 1 salad74 salad74    8847 Feb  8 20:49 schemacrawler-db2-14.06.01.jar
-rw-r--r-- 1 salad74 salad74    6347 Feb  8 20:49 schemacrawler-h2-14.06.01.jar
-rw-r--r-- 1 salad74 salad74    7582 Feb  8 20:49 schemacrawler-hsqldb-14.06.01.jar
-rw-r--r-- 1 salad74 salad74   76566 Feb  8 20:49 schemacrawler-lint-14.06.01.jar
-rw-r--r-- 1 salad74 salad74    6540 Feb  8 20:49 schemacrawler-mariadb-14.06.01.jar
-rw-r--r-- 1 salad74 salad74    6472 Feb  8 20:49 schemacrawler-mysql-14.06.01.jar
-rw-r--r-- 1 salad74 salad74   12639 Feb  8 20:49 schemacrawler-offline-14.06.01.jar
-rw-r--r-- 1 salad74 salad74   17241 Feb  8 20:49 schemacrawler-oracle-14.06.01.jar
-rw-r--r-- 1 salad74 salad74    7346 Feb  8 20:49 schemacrawler-postgresql-14.06.01.jar
-rw-r--r-- 1 salad74 salad74    6205 Feb  8 20:49 schemacrawler-sqlite-14.06.01.jar
-rw-r--r-- 1 salad74 salad74    7815 Feb  8 20:49 schemacrawler-sqlserver-14.06.01.jar
-rw-r--r-- 1 salad74 salad74    5386 Feb  8 20:49 schemacrawler-sybaseiq-14.06.01.jar
-rw-r--r-- 1 salad74 salad74 3534290 Feb  8 20:49 sqlite-jdbc-3.7.8.jar
-rw-r--r-- 1 salad74 salad74    7188 Feb  8 20:49 xmlpull-1.1.3.1.jar
-rw-r--r-- 1 salad74 salad74   24956 Feb  8 20:49 xpp3_min-1.1.4c.jar
-rw-r--r-- 1 salad74 salad74  531571 Feb  8 20:49 xstream-1.4.7.jar

As you can see, the driver versions are not the expected ones, are they?

LintDispatch > Build always succeed

Hi Sualeh,

I've just updated to v14.07.07.
It seems that critical lint hits are not detected any more, and the build never fails.

I have attached my test project, where I set LinterTableWithNoPrimaryKey to critical and created a table without a primary key.

Let me know if I missed something.

Thanks.

GRAVE: Abnormal system termination, since a critical schema lint was found message even if dispatch is not configured

I get the following message in the logs, GRAVE: Abnormal system termination, since a critical schema lint was found, even though I don't configure dispatch.

If I call SchemaCrawler with the Exec Maven Plugin, the build finishes successfully, but I get

....
févr. 29, 2016 4:02:38 PM io.github.mbarre.schemacrawler.tool.linter.LinterColumnSize lint
INFOS: Checking pdc_adm.pieceafournir.justification...
févr. 29, 2016 4:02:38 PM io.github.mbarre.schemacrawler.tool.linter.LinterColumnSize lint
INFOS: Checking pdc_adm.rendez_vous.commentaire...
févr. 29, 2016 4:02:38 PM schemacrawler.tools.iosource.FileOutputResource openNewOutputWriter
INFOS: Opened output writer to file, /media/barmi83/Data/projets/pdc-liquibase/pdc_adm_lints.html
févr. 29, 2016 4:02:39 PM schemacrawler.tools.iosource.OutputWriter close
INFOS: Closing output writer
févr. 29, 2016 4:02:39 PM schemacrawler.tools.lint.executable.LintDispatcher lambda$dispatch$2
GRAVE: Abnormal system termination, since a critical schema lint was found
févr. 29, 2016 4:02:39 PM schemacrawler.tools.lint.executable.LintDispatcher lambda$dispatch$2
GRAVE: Abnormal system termination, since a critical schema lint was found
févr. 29, 2016 4:02:39 PM schemacrawler.tools.lint.executable.LintDispatcher lambda$dispatch$2
GRAVE: Abnormal system termination, since a critical schema lint was found
févr. 29, 2016 4:02:39 PM schemacrawler.tools.lint.executable.LintDispatcher lambda$dispatch$2
GRAVE: Abnormal system termination, since a critical schema lint was found
févr. 29, 2016 4:02:39 PM schemacrawler.tools.lint.executable.LintDispatcher lambda$dispatch$2
GRAVE: Abnormal system termination, since a critical schema lint was found
févr. 29, 2016 4:02:39 PM schemacrawler.tools.lint.executable.LintDispatcher lambda$dispatch$2
GRAVE: Abnormal system termination, since a critical schema lint was found
févr. 29, 2016 4:02:39 PM schemacrawler.tools.lint.executable.LintDispatcher lambda$dispatch$2
GRAVE: Abnormal system termination, since a critical schema lint was found
févr. 29, 2016 4:02:39 PM schemacrawler.tools.lint.executable.LintDispatcher lambda$dispatch$2
GRAVE: Abnormal system termination, since a critical schema lint was found
févr. 29, 2016 4:02:39 PM schemacrawler.tools.lint.executable.LintDispatcher lambda$dispatch$2
GRAVE: Abnormal system termination, since a critical schema lint was found
févr. 29, 2016 4:02:39 PM schemacrawler.tools.lint.executable.LintDispatcher lambda$dispatch$2
GRAVE: Abnormal system termination, since a critical schema lint was found
févr. 29, 2016 4:02:39 PM schemacrawler.tools.lint.executable.LintDispatcher lambda$dispatch$2
GRAVE: Abnormal system termination, since a critical schema lint was found
févr. 29, 2016 4:02:39 PM schemacrawler.tools.lint.executable.LintDispatcher lambda$dispatch$2
GRAVE: Abnormal system termination, since a critical schema lint was found
févr. 29, 2016 4:02:39 PM schemacrawler.tools.lint.executable.LintDispatcher lambda$dispatch$2
GRAVE: Abnormal system termination, since a critical schema lint was found
févr. 29, 2016 4:02:39 PM schemacrawler.tools.lint.executable.LintDispatcher lambda$dispatch$2
GRAVE: Abnormal system termination, since a critical schema lint was found
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 9.305 s
[INFO] Finished at: 2016-02-29T16:02:39+11:00
[INFO] Final Memory: 15M/212M
[INFO] ------------------------------------------------------------------------

[Continuous build] Summary improvement

Hi Sualeh

I tested 14.07.03 today; it works nicely and the result is pretty cool.

So we have some new ideas regarding the summary. Here is the result we have now:

mars 14, 2016 2:06:07 PM io.github.mbarre.schemacrawler.tool.linter.LinterColumnSize lint
mars 14, 2016 2:06:07 PM io.github.mbarre.schemacrawler.tool.linter.LinterColumnSize lint
mars 14, 2016 2:06:07 PM schemacrawler.tools.iosource.FileOutputResource openNewOutputWriter
INFOS: Opened output writer to file, /media/barmi83/Data/projets/pdc-liquibase/pdc_lints.html
mars 14, 2016 2:06:08 PM schemacrawler.tools.iosource.OutputWriter close
INFOS: Closing output writer
mars 14, 2016 2:06:08 PM schemacrawler.tools.lint.Linters dispatch
INFOS: Too many schema lints were found:
[medium] column with same name but different data types - 4
[medium] foreign key with no index - 87
[high] all data columns are nullable - 25
[medium] incrementing columns - 9
[critical] no primary key - 2
[medium] should have remarks - 75
[medium] spaces in name, or reserved word - 42
[medium] useless surrogate key - 3
[high]  should not have so many duplicates. - 14600
[high] Should be boolean type. - 76
[high] column is oversized regarding its content. - 56
mars 14, 2016 2:06:08 PM schemacrawler.tools.lint.LintDispatch$4 dispatch
GRAVE: Too many schema lints were found
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 9.886 s
[INFO] Finished at: 2016-03-14T14:06:08+11:00
[INFO] Final Memory: 9M/150M
[INFO] ------------------------------------------------------------------------

Could it be possible to have this:

[INFO] ------------------------------------------------------------------------ 
INFOS: Too many schema lints were found:
[INFO] ------------------------------------------------------------------------
[medium] 4 - column with same name but different data types
[medium] 87 - foreign key with no index
[medium] 9 - incrementing columns
[medium] 75 - should have remarks
[medium] 42 - spaces in name, or reserved word
[medium] 3 - useless surrogate key
[high] 14600 - should not have so many duplicates.
[high] 76 - Should be boolean type.
[high] 56 - column is oversized regarding its content.
[high] 25 - all data columns are nullable
[critical] 2/1 - no primary key
[critical] : 2, [high] : 14757, [medium] : 220
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 9.886 s
[INFO] Finished at: 2016-03-14T14:06:08+11:00
[INFO] Final Memory: 9M/150M
[INFO] ------------------------------------------------------------------------
  1. Ascending sort on severity, so that the highest severity appears at the end of the summary, near the BUILD FAILURE status
  2. Add a separator at the beginning of the summary
  3. Display the threshold when the severity is critical
  4. Move the hit count after the severity
  5. Add a summary of the summary, displaying the count per severity
  6. Remove some logs like:
mars 14, 2016 2:06:08 PM schemacrawler.tools.lint.LintDispatch$4 dispatch

I also have some other questions:

I was thinking that the summary would be activated for all the linters, but I realized that only linters with

<config>
    <dispatch>terminate_system</dispatch>
    <dispatch-threshold>1</dispatch-threshold>
</config>

appear in the summary.

Is it possible to have it activated for all linters, and only set the severity level to critical and add the <config></config> when we need the build to fail?

If I don't set the loglevel, the summary doesn't appear. I have to set the loglevel to INFO, but then a lot of other logs appear. Is there a loglevel at which only the summary appears?

Thanks for what you've done, it really rocks :) I am looking forward to your responses.

[Question] Is it possible to exclude drawing one relationship?

I am generating a database diagram, and there is one table that is referenced by almost every other table (created-by and edited-by IDs) and pollutes the diagram to such a degree that I'd rather exclude it and just annotate it manually.

The only workaround I was able to find was to exclude the column, which isn't ideal.

Postgres only > Remove the function retrieval from jdbc calls

...as Postgres does not implement it, and hence the JDBC driver will not either. This would avoid a huge log file complaining about things like:

WARNING: JDBC driver does not support retrieving functions
java.sql.SQLFeatureNotSupportedException: Method org.postgresql.jdbc4.Jdbc4DatabaseMetaData.getFunction(String, String, String) is not yet implemented.
        at org.postgresql.Driver.notImplemented(Driver.java:670)
        at org.postgresql.jdbc4.AbstractJdbc4DatabaseMetaData.getFunctions(AbstractJdbc4DatabaseMetaData.java:87)
        at schemacrawler.crawl.RoutineRetriever.retrieveFunctions(RoutineRetriever.java:181)
        at schemacrawler.crawl.SchemaCrawler.lambda$crawlRoutines$8(SchemaCrawler.java:277)
        at schemacrawler.crawl.SchemaCrawler$$Lambda$45/388991153.call(Unknown Source)
        at sf.util.StopWatch.time(StopWatch.java:129)

Support for falling back to ResultSetMetaData

We have seen cases where the underlying database will fail to fetch the database schema information (i.e. schemacrawler.schema.Table.getColumns()). In the case of failures, it would be nice to fall back to the ResultSetMetaData using a simple query. For example, something along the lines of SELECT * FROM 'table' WHERE 1 = 0 would suffice.
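A minimal sketch of the suggested fallback, not SchemaCrawler's implementation: run a query that returns no rows and read column metadata from ResultSetMetaData. The table name is supplied by the caller and is not quoted or validated here.

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;
import java.sql.Statement;

final class ResultSetMetaDataFallback {
  static void describe(final Connection connection, final String tableName) throws SQLException {
    // Returns no rows, but still exposes the column metadata
    final String sql = "SELECT * FROM " + tableName + " WHERE 1 = 0";
    try (Statement statement = connection.createStatement();
        ResultSet results = statement.executeQuery(sql)) {
      final ResultSetMetaData metaData = results.getMetaData();
      for (int i = 1; i <= metaData.getColumnCount(); i++) {
        System.out.printf("%s %s(%d)%n",
            metaData.getColumnName(i),
            metaData.getColumnTypeName(i),
            metaData.getPrecision(i));
      }
    }
  }
}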

Implementing thymeleaf templates for lint reporting

Hi @sualeh, we are beginning to investigate custom lint reporting. Unfortunately, we have no example of how to do this for lints: could you provide a dummy example (a list of table lints should do the trick) so we can start prototyping something this week?

Thank you in advance for your help... we have some really cool ideas but don't know how to start ;-(
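A minimal, self-contained Thymeleaf sketch, independent of SchemaCrawler's own reporting: render a hypothetical list of lints into HTML from an inline template. The lint rows are plain maps here, and Java 9+ List.of/Map.of are used only for brevity.

import java.util.List;
import java.util.Map;

import org.thymeleaf.TemplateEngine;
import org.thymeleaf.context.Context;
import org.thymeleaf.templateresolver.StringTemplateResolver;

public final class LintReportSketch {
  public static void main(final String[] args) {
    final TemplateEngine engine = new TemplateEngine();
    engine.setTemplateResolver(new StringTemplateResolver());

    // Inline template: one list item per lint
    final String template = "<ul><li th:each=\"lint : ${lints}\" "
        + "th:text=\"${lint.severity} + ' - ' + ${lint.object} + ': ' + ${lint.message}\"></li></ul>";

    final Context context = new Context();
    context.setVariable("lints", List.of(
        Map.of("severity", "critical", "object", "public.orders", "message", "no primary key"),
        Map.of("severity", "medium", "object", "public.users.name", "message", "should have remarks")));

    System.out.println(engine.process(template, context));
  }
}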

Question > Schemacrawler Lint and continuous integration

Context

We are using continuous quality checks and integration on an ever-increasing number of projects, but this essentially applies to code (Java code). This is done with Travis.

In my everyday job, I have to run lints on our integration databases, which are generated by Liquibase. Hence we do database reviews on the HTML-exported version of the lints.

The idea

What I have begun to think about, and what would save me a lot of time, is to run lint on each commit event on GitHub, on Travis, and have the lint execution return an error status code when too many high lints are triggered, or based on certain lint-filtering conditions.
This would be a huge advantage for continuous integration of our databases, and could interest a lot of people, as it would save so much time! Lint could even generate JUnit-like XML files so that the "testCase" results could be imported into unit-testing tools, for example, and give a database-coverage-like status indicator.

The approaches

The first idea I had was to trigger a Maven build, as it is a popular build tool and in the same tech stack as SchemaCrawler. We could launch

mvn schemacrawler:lint

... which would return the required exit status code and generate the lint report in a unit-testing format, for example. What do you think about this approach? Does some piece of SchemaCrawler software already do the job?

14.02.02 new lint () in error

If you create the following pgsql table:

CREATE TABLE test_xml
(
  id integer NOT NULL, -- primary key
  content character varying(100), -- column with non xml data
  content_xml xml, -- column with xml
  CONSTRAINT pk_test PRIMARY KEY (id)
)

The new lint will say:

{
    "severity": "medium",
    "description": "table with all nullable columns",
    "id": "schemacrawler.tools.linter.LinterTableAllNullableColumns",
    "value": "test_xml"
  }

... but the first PK column cannot be null... and still it is flagged by this new lint. Does it really have the correct behavior?

I've just discovered this, as it was causing a regression in our additional-lints 😄

NullPointerException: Name is null

Mar 23, 2016 1:08:45 PM us.fatehi.commandlineparser.CommandLineUtility logSafeArguments
INFO: SchemaCrawler, v14.06.05
Mar 23, 2016 1:08:45 PM us.fatehi.commandlineparser.CommandLineUtility logSafeArguments
INFO: Command line: 
-server=postgresql
-database=drank
-user=postgres
-password=*****
-infolevel=standard
-routines=
-command=lint
-linterconfigs=lint.xml
-outputformat=html
-outputfile=lint.html
-loglevel=CONFIG
-linterconfigs=lint.xml
-outputformat=html
-outputfile=lint.html
Mar 23, 2016 1:08:45 PM us.fatehi.commandlineparser.CommandLineUtility logSystemProperties
CONFIG: System properties: 
java.awt.graphicsenv=sun.awt.CGraphicsEnvironment
java.awt.printerjob=sun.lwawt.macosx.CPrinterJob
java.class.version=52.0
java.endorsed.dirs=/Library/Java/JavaVirtualMachines/jdk1.8.0_25.jdk/Contents/Home/jre/lib/endorsed
java.ext.dirs=/Users/Brandon/Library/Java/Extensions:/Library/Java/JavaVirtualMachines/jdk1.8.0_25.jdk/Contents/Home/jre/lib/ext:/Library/Java/Extensions:/Network/Library/Java/Extensions:/System/Library/Java/Extensions:/usr/lib/java
java.home=/Library/Java/JavaVirtualMachines/jdk1.8.0_25.jdk/Contents/Home/jre
java.io.tmpdir=/var/folders/bl/bqwbhwg939b1n19n4znq53f40000gn/T/
java.runtime.name=Java(TM) SE Runtime Environment
java.runtime.version=1.8.0_25-b17
java.specification.name=Java Platform API Specification
java.specification.vendor=Oracle Corporation
java.specification.version=1.8
java.vendor=Oracle Corporation
java.vendor.url=http://java.oracle.com/
java.vendor.url.bug=http://bugreport.sun.com/bugreport/
java.version=1.8.0_25
java.vm.info=mixed mode
java.vm.name=Java HotSpot(TM) 64-Bit Server VM
java.vm.specification.name=Java Virtual Machine Specification
java.vm.specification.vendor=Oracle Corporation
java.vm.specification.version=1.8
java.vm.vendor=Oracle Corporation
java.vm.version=25.25-b02
os.arch=x86_64
os.name=Mac OS X
os.version=10.11.4
Mar 23, 2016 1:08:45 PM us.fatehi.commandlineparser.CommandLineUtility logSystemProperties
CONFIG: Classpath: 
./postgresql.jar
./schemacrawler-postgresql.jar
./schemacrawler.jar
Mar 23, 2016 1:08:45 PM schemacrawler.tools.databaseconnector.DatabaseConnectorRegistry loadDatabaseConnectorRegistry
CONFIG: Loading database connector, postgresql=schemacrawler.server.postgresql.PostgreSQLDatabaseConnector
Mar 23, 2016 1:08:45 PM schemacrawler.tools.databaseconnector.DatabaseConnectorRegistry logRegisteredJdbcDrivers
CONFIG: Registered JDBC drivers, [org.postgresql.Driver 9.4]
Mar 23, 2016 1:08:45 PM schemacrawler.tools.commandline.SchemaCrawlerCommandLine <init>
INFO: Using database plugin, postgresql - PostgreSQL
Mar 23, 2016 1:08:45 PM schemacrawler.schemacrawler.Config loadProperties
CONFIG: Cannot load properties from file, /Users/Brandon/Dropbox/Projects/Pilcrow/Drank/Code/SQL/schemacrawler/schemacrawler.config.properties
Mar 23, 2016 1:08:45 PM schemacrawler.schemacrawler.Config loadProperties
CONFIG: Cannot load properties from file, /Users/Brandon/Dropbox/Projects/Pilcrow/Drank/Code/SQL/schemacrawler/schemacrawler.additional.config.properties
Mar 23, 2016 1:08:45 PM schemacrawler.tools.commandline.SchemaCrawlerOptionsParser getOptions
WARNING: Please provide a -schemas option for efficient retrieval of database metadata
Mar 23, 2016 1:08:45 PM schemacrawler.tools.commandline.SchemaCrawlerOptionsParser logOverride
INFO: Overriding routines inclusion rule from command-line to ExcludeAll
Mar 23, 2016 1:08:45 PM schemacrawler.schemacrawler.BaseDatabaseConnectionOptions getConnection
WARNING: Database password is not provided
Mar 23, 2016 1:08:45 PM schemacrawler.schemacrawler.BaseDatabaseConnectionOptions getConnection
INFO: Making connection to jdbc:postgresql://localhost:5432/drank?ApplicationName=SchemaCrawler
for user 'postgres', with properties {}
Mar 23, 2016 1:08:45 PM schemacrawler.schemacrawler.BaseDatabaseConnectionOptions getConnection
INFO: Opened database connection, org.postgresql.jdbc.PgConnection@7382f612
Mar 23, 2016 1:08:45 PM schemacrawler.schemacrawler.BaseDatabaseConnectionOptions logConnection
INFO: Connected to 
PostgreSQL 9.5.1 
using JDBC driver 
PostgreSQL Native Driver PostgreSQL 9.4.1207
Mar 23, 2016 1:08:45 PM schemacrawler.tools.executable.BaseStagedExecutable execute
INFO: Executing SchemaCrawler command, "lint"
Mar 23, 2016 1:08:45 PM schemacrawler.tools.executable.BaseStagedExecutable execute
CONFIG: schemacrawler.schemacrawler.SchemaCrawlerOptions@5577140b[
  childTableFilterDepth: 0
  columnInclusionRule: IncludeAll
  grepColumnInclusionRule: null
  grepDefinitionInclusionRule: null
  grepInvertMatch: false
  grepOnlyMatching: false
  grepRoutineColumnInclusionRule: null
  hideEmptyTables: false
  parentTableFilterDepth: 0
  routineColumnInclusionRule: IncludeAll
  routineInclusionRule: ExcludeAll
  routineTypes: [procedure, function]
  schemaInclusionRule: IncludeAll
  schemaInfoLevel: standard
  sequenceInclusionRule: RegularExpressionRule@77afea7d-include//-exclude/.*/]
  synonymInclusionRule: RegularExpressionRule@161cd475-include//-exclude/.*/]
  tableInclusionRule: IncludeAll
  tableNamePattern: null
  tableTypes: [BASE TABLE, TABLE, VIEW]
  title: 
]
Mar 23, 2016 1:08:45 PM schemacrawler.tools.executable.BaseStagedExecutable execute
CONFIG: schemacrawler.tools.options.OutputOptions@532760d8[
  inputEncodingCharset: UTF-8
  inputResource: null
  outputEncodingCharset: UTF-8
  outputFormatValue: html
  outputResource: /Users/Brandon/Dropbox/Projects/Pilcrow/Drank/Code/SQL/schemacrawler/lint.html
]
Mar 23, 2016 1:08:45 PM schemacrawler.crawl.RetrieverConnection <init>
CONFIG: Database does not support catalogs
Mar 23, 2016 1:08:45 PM schemacrawler.crawl.RetrieverConnection <init>
CONFIG: Database supports schemas
Mar 23, 2016 1:08:45 PM schemacrawler.crawl.RetrieverConnection <init>
CONFIG: Database identifier quote string is """
Mar 23, 2016 1:08:45 PM schemacrawler.crawl.RetrieverConnection <init>
CONFIG: Supported table types are [temporary sequence, system toast table, foreign table, system toast index, system view, view, temporary table, table, system index, system table, sequence, index, temporary index, materialized view, temporary view, type]
Mar 23, 2016 1:08:45 PM schemacrawler.crawl.SchemaCrawler crawlSchemas
INFO: Crawling schemas
Mar 23, 2016 1:08:45 PM schemacrawler.crawl.SchemaRetriever retrieveAllSchemas
INFO: Retrieving all schemas
Mar 23, 2016 1:08:45 PM schemacrawler.crawl.SchemaRetriever retrieveAllCatalogs
INFO: Retrieving all catalogs
Mar 23, 2016 1:08:45 PM schemacrawler.crawl.MetadataResultSet close
INFO: "retrieveAllSchemas" results had 4 rows
Mar 23, 2016 1:08:45 PM schemacrawler.crawl.SchemaCrawler crawlSchemas
INFO: Total time taken for "crawlSchemas" - 00:00:00.013 hours
-  0.0% - 00:00:00.013 - "retrieveSchemas"
-  0.0% - 00:00:00.000 - "sortAndFilterSchemas"

Mar 23, 2016 1:08:45 PM schemacrawler.crawl.SchemaCrawler crawlDatabaseInfo
INFO: Crawling SchemaCrawler information
Mar 23, 2016 1:08:45 PM schemacrawler.crawl.SchemaCrawler crawlDatabaseInfo
INFO: Retrieving database information
Mar 23, 2016 1:08:45 PM schemacrawler.crawl.SchemaCrawler lambda$crawlDatabaseInfo$4
INFO: Not retrieving additional database information, since this was not requested
Mar 23, 2016 1:08:45 PM schemacrawler.crawl.SchemaCrawler crawlDatabaseInfo
INFO: Retrieving JDBC driver information
Mar 23, 2016 1:08:45 PM schemacrawler.crawl.SchemaCrawler lambda$crawlDatabaseInfo$6
INFO: Not retrieving additional JDBC driver information, since this was not requested
Mar 23, 2016 1:08:45 PM schemacrawler.crawl.SchemaCrawler crawlDatabaseInfo
INFO: Retrieving SchemaCrawler crawl information
Mar 23, 2016 1:08:45 PM schemacrawler.crawl.SchemaCrawler crawlDatabaseInfo
INFO: Total time taken for "crawlDatabaseInfo" - 00:00:00.013 hours
-  0.0% - 00:00:00.000 - "retrieveDatabaseInfo"
-  0.0% - 00:00:00.000 - "retrieveAdditionalDatabaseInfo"
-  0.0% - 00:00:00.000 - "retrieveJdbcDriverInfo"
-  0.0% - 00:00:00.000 - "retrieveAdditionalJdbcDriverInfo"
-  0.0% - 00:00:00.013 - "retrieveCrawlHeaderInfo"

Mar 23, 2016 1:08:45 PM schemacrawler.crawl.SchemaCrawler crawlColumnDataTypes
INFO: Crawling column data types
Mar 23, 2016 1:08:45 PM schemacrawler.crawl.SchemaCrawler lambda$crawlColumnDataTypes$1
INFO: Retrieving system column data types
Mar 23, 2016 1:08:45 PM schemacrawler.crawl.SchemaCrawler lambda$crawlColumnDataTypes$2
INFO: Not retrieving user column data types, since this was not requested
Mar 23, 2016 1:08:45 PM schemacrawler.crawl.SchemaCrawler crawlColumnDataTypes
INFO: Total time taken for "crawlColumnDataTypes" - 00:00:00.187 hours
-  0.0% - 00:00:00.187 - "retrieveSystemColumnDataTypes"
-  0.0% - 00:00:00.000 - "retrieveUserDefinedColumnDataTypes"

Mar 23, 2016 1:08:45 PM schemacrawler.crawl.SchemaCrawler crawlTables
INFO: Crawling tables
Mar 23, 2016 1:08:45 PM schemacrawler.crawl.TableRetriever retrieveTables
INFO: Retrieving tables
Mar 23, 2016 1:08:45 PM schemacrawler.crawl.MetadataResultSet close
INFO: "retrieveTables" results had 1 rows
Mar 23, 2016 1:08:45 PM schemacrawler.crawl.TableRetriever retrieveTables
INFO: Retrieving tables
Mar 23, 2016 1:08:45 PM schemacrawler.crawl.MetadataResultSet close
INFO: "retrieveTables" results had 1 rows
Mar 23, 2016 1:08:45 PM schemacrawler.crawl.TableRetriever retrieveTables
INFO: Retrieving tables
Mar 23, 2016 1:08:45 PM schemacrawler.crawl.MetadataResultSet close
INFO: "retrieveTables" results had 37 rows
Mar 23, 2016 1:08:45 PM schemacrawler.crawl.TableColumnRetriever retrieveColumns
INFO: Retrieving table columns
Mar 23, 2016 1:08:45 PM schemacrawler.crawl.ForeignKeyRetriever retrieveForeignKeys
INFO: Retrieving foreign keys, using database metadata
Mar 23, 2016 1:08:46 PM schemacrawler.crawl.SchemaCrawler lambda$crawlTables$22
INFO: Retrieving primary keys and indexes
Mar 23, 2016 1:08:46 PM schemacrawler.crawl.IndexRetriever retrieveIndexes
INFO: Retrieving indexes, using database metadata
Mar 23, 2016 1:08:46 PM schemacrawler.crawl.SchemaCrawler crawlTables
INFO: Total time taken for "crawlTables" - 00:00:01.088 hours
-  0.0% - 00:00:00.016 - "retrieveTables"
-  0.0% - 00:00:00.122 - "retrieveColumns"
-  0.0% - 00:00:00.704 - "retrieveForeignKeys"
-  0.0% - 00:00:00.026 - "filterAndSortTables"
-  0.0% - 00:00:00.220 - "retrieveIndexes"
-  0.0% - 00:00:00.000 - "retrieveTableConstraintInformation"
-  0.0% - 00:00:00.000 - "retrieveTriggerInformation"
-  0.0% - 00:00:00.000 - "retrieveViewInformation"
-  0.0% - 00:00:00.000 - "retrieveTableDefinitions"
-  0.0% - 00:00:00.000 - "retrieveIndexInformation"
-  0.0% - 00:00:00.000 - "retrieveAdditionalTableAttributes"
-  0.0% - 00:00:00.000 - "retrieveTablePrivileges"
-  0.0% - 00:00:00.000 - "retrieveAdditionalColumnAttributes"
-  0.0% - 00:00:00.000 - "retrieveTableColumnPrivileges"

Mar 23, 2016 1:08:46 PM schemacrawler.crawl.SchemaCrawler crawlRoutines
INFO: Crawling routines
Mar 23, 2016 1:08:46 PM schemacrawler.crawl.RoutineRetriever retrieveProcedures
INFO: Not retrieving procedures, since this was not requested
Mar 23, 2016 1:08:46 PM schemacrawler.crawl.RoutineRetriever retrieveFunctions
INFO: Not retrieving functions, since this was not requested
Mar 23, 2016 1:08:46 PM schemacrawler.crawl.RoutineRetriever retrieveProcedures
INFO: Not retrieving procedures, since this was not requested
Mar 23, 2016 1:08:46 PM schemacrawler.crawl.RoutineRetriever retrieveFunctions
INFO: Not retrieving functions, since this was not requested
Mar 23, 2016 1:08:46 PM schemacrawler.crawl.RoutineRetriever retrieveProcedures
INFO: Not retrieving procedures, since this was not requested
Mar 23, 2016 1:08:46 PM schemacrawler.crawl.RoutineRetriever retrieveFunctions
INFO: Not retrieving functions, since this was not requested
Mar 23, 2016 1:08:46 PM schemacrawler.crawl.SchemaCrawler lambda$crawlRoutines$9
INFO: Retrieving routine columns
Mar 23, 2016 1:08:46 PM schemacrawler.crawl.SchemaCrawler crawlRoutines
INFO: Total time taken for "crawlRoutines" - 00:00:00.004 hours
-  0.0% - 00:00:00.002 - "retrieveRoutines"
-  0.0% - 00:00:00.000 - "retrieveRoutineColumns"
-  0.0% - 00:00:00.002 - "filterRoutines"
-  0.0% - 00:00:00.000 - "retrieveRoutineInformation"

Mar 23, 2016 1:08:46 PM schemacrawler.crawl.SchemaCrawler crawlSynonyms
INFO: Not retrieving synonyms, since this was not requested
Mar 23, 2016 1:08:46 PM schemacrawler.crawl.SchemaCrawler crawlSequences
INFO: Not retrieving sequences, since this was not requested
Mar 23, 2016 1:08:46 PM schemacrawler.tools.executable.SchemaCrawlerExecutable executeOn
INFO: Executing as a query, lint
Name is null
Re-run SchemaCrawler with the
-?
option for help
Or, re-run SchemaCrawler with an additional
-loglevel=CONFIG
option for details on the error
Mar 23, 2016 1:08:46 PM us.fatehi.commandlineparser.CommandLineUtility logSafeArguments
INFO: SchemaCrawler, v14.06.05
Mar 23, 2016 1:08:46 PM us.fatehi.commandlineparser.CommandLineUtility logSafeArguments
INFO: Command line: 
-server=postgresql
-database=drank
-user=postgres
-password=*****
-infolevel=standard
-routines=
-command=lint
-linterconfigs=lint.xml
-outputformat=html
-outputfile=lint.html
-loglevel=CONFIG
-linterconfigs=lint.xml
-outputformat=html
-outputfile=lint.html
Mar 23, 2016 1:08:46 PM us.fatehi.commandlineparser.CommandLineUtility logFullStackTrace
SEVERE: Name is null
java.lang.NullPointerException: Name is null
    at java.lang.Enum.valueOf(Enum.java:236)
    at schemacrawler.tools.options.TextOutputFormat.valueOf(TextOutputFormat.java:27)
    at schemacrawler.tools.options.TextOutputFormat.fromFormat(TextOutputFormat.java:42)
    at schemacrawler.tools.text.operation.OperationExecutable.getDataTraversalHandler(OperationExecutable.java:141)
    at schemacrawler.tools.text.operation.OperationExecutable.executeOn(OperationExecutable.java:72)
    at schemacrawler.tools.executable.SchemaCrawlerExecutable.executeOn(SchemaCrawlerExecutable.java:91)
    at schemacrawler.tools.executable.BaseStagedExecutable.execute(BaseStagedExecutable.java:86)
    at schemacrawler.tools.commandline.SchemaCrawlerCommandLine.execute(SchemaCrawlerCommandLine.java:125)
    at schemacrawler.Main.main(Main.java:80)

The default linter severity should be used when severity is not defined in linter config file

Hi Sualeh,

When we use an XML config file without specifying the linter severity, MEDIUM severity is used instead of the default linter severity.

For example:

public class LinterTableWithNoPrimaryKey extends BaseLinter {

    public LinterTableWithNoPrimaryKey() {
        // The linter's own default severity
        setSeverity(LintSeverity.high);
    }
    ...
}

Here the default severity is high.

If I use it in the config file without specifying the severity:

<linter id="io.github.mbarre.schemacrawler.tool.linter.LinterTableWithNoPrimaryKey">
    <run>true</run>
    <table-exclusion-pattern><![CDATA[.*databasechangelog*]]></table-exclusion-pattern>
</linter>

It will be reported as a lint with MEDIUM severity instead of HIGH.

It would be nice if the default linter severity were used when it is not defined in the config file, as in the sketch below.
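
Below is a minimal, stand-alone Java sketch of the fallback behaviour we have in mind. It is not SchemaCrawler's actual code; the "severity" key and the per-linter config map are assumptions made only for illustration.

import java.util.Collections;
import java.util.Map;

public final class SeverityFallbackSketch {

    // Stand-in for the lint severity enum, kept local so the sketch is self-contained
    enum LintSeverity { low, medium, high, critical }

    // Only override the linter's severity when the config explicitly provides one
    static LintSeverity effectiveSeverity(final Map<String, String> linterConfig,
                                          final LintSeverity constructorDefault) {
        final String configured = linterConfig.get("severity"); // hypothetical config key
        if (configured == null || configured.isEmpty()) {
            // No severity in the config file: keep the default set in the linter's constructor,
            // e.g. high for LinterTableWithNoPrimaryKey above
            return constructorDefault;
        }
        return LintSeverity.valueOf(configured.toLowerCase());
    }

    public static void main(final String[] args) {
        // Config without a severity element: the constructor default should win, so this prints "high"
        System.out.println(effectiveSeverity(Collections.emptyMap(), LintSeverity.high));
    }
}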

What do you think?

SQL Server using -schemas is not finding any tables?

I have a SQL Server instance with a schema named 'foo'.

The user I'm connecting with has its default schema set to 'foo' and has grants set up which restrict its access to the foo schema only.

I'm trying to generate a graphviz diagram of all the tables contained in that schema:

./schemacrawler.sh -server=sqlserver -host=localhost -database=C -schemas=C.foo -user=FooUser -password=FooPassword -infolevel=maximum -command=graph -outputformat=pdf -outputfile=schema.pdf

This ends up empty... am I doing something obviously wrong here?

Website version not exact

The website is showing:

    Version: 14.07.06 | Last Published: 2016-03-29 

whereas the right version is 14.07.07

LintDispatch > Build fails with no critical lint

Hi Sualeh,

I think we've just found a bug with the lintDispatch feature.

The build result is SUCCESS only if no lints are detected. Even a low-severity lint makes the build fail.

Summary of schema lints:
   [low]*     1- should have remarks

mars 21, 2016 11:23:49 AM schemacrawler.tools.lint.LintDispatch$4 dispatch
GRAVE: Too many schema lints were found
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1.210 s
[INFO] Finished at: 2016-03-21T11:23:49+11:00
[INFO] Final Memory: 9M/150M
[INFO] ------------------------------------------------------------------------

I created a test case for you.
Here is the Maven project to execute the build, with our config file and SQL script for database creation:
test_dispatch.zip

Let me know if I did something wrong or if it's really a bug.
Thanks.

Unsupported major.minor version 52.0

I just downloaded the latest package and tried to start it under Lubuntu 14.04.3 LTS.

This is the result.

$ ./schemacrawler.sh
Exception in thread "main" java.lang.UnsupportedClassVersionError: schemacrawler/Main : Unsupported major.minor version 52.0
    at java.lang.ClassLoader.defineClass1(Native Method)
    at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
    at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
    at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
    at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
    at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:482)

SqlServer connection pattern

Hello, Sualeh

I am trying to inspect a Microsoft SQL Server database using a connection URL like "jdbc:sqlserver:". However, SchemaCrawler only resolves SqlServerDatabaseConnector if the connection URL starts with "jdbc:jtds:sqlserver:". Is it possible to change connectionUrlPattern from "jdbc:jtds:sqlserver:.*" to "jdbc:(jtds:)?sqlserver:.*"? (See the sketch below.)
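
A minimal stand-alone sketch (not SchemaCrawler code) showing that the proposed regular expression would match both URL forms; the host, port, and database names are made-up examples:

import java.util.regex.Pattern;

public final class ConnectionUrlPatternSketch {

    public static void main(final String[] args) {
        // Proposed connectionUrlPattern: accept both the jTDS and the Microsoft JDBC URL prefixes
        final Pattern proposed = Pattern.compile("jdbc:(jtds:)?sqlserver:.*");

        // Both of these print "true"
        System.out.println(proposed.matcher("jdbc:jtds:sqlserver://localhost:1433/mydb").matches());
        System.out.println(proposed.matcher("jdbc:sqlserver://localhost:1433;databaseName=mydb").matches());
    }
}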

Thank you

Release binaries straight to gh from travis

Hi Sualeh,

I've just tested this: https://docs.travis-ci.com/user/deployment/releases
on the Debian installer of SchemaCrawler. It is just great!
You push a tag, and from Travis, you upload any produced file to the GitHub release. I'm pretty sure that if you take a look at it you will ❤️ it!
Have a nice weekend.

Personally, I will remove Bintray uploads and let Travis and GitHub handle everything for now: the release process is much lighter that way.

Exception retrieving routine information

Mar 22, 2016 3:04:27 PM us.fatehi.commandlineparser.CommandLineUtility logSafeArguments
INFO: SchemaCrawler, v14.06.05
Mar 22, 2016 3:04:27 PM us.fatehi.commandlineparser.CommandLineUtility logSafeArguments
INFO: Command line: 
-g=./schemacrawler.config.properties
-infolevel=standard
-server=postgresql
-database=drank
-host=::1
-user=postgres
-password=*****
-command=graph
-outputformat=pdf
-outputfile=diagram.pdf
-tabletypes=table
-tables=^(?!.*spatial_ref_sys).*
-Gdpi=300
-loglevel=CONFIG
Mar 22, 2016 3:04:27 PM us.fatehi.commandlineparser.CommandLineUtility logSystemProperties
CONFIG: System properties: 
java.awt.graphicsenv=sun.awt.CGraphicsEnvironment
java.awt.printerjob=sun.lwawt.macosx.CPrinterJob
java.class.version=52.0
java.endorsed.dirs=/Library/Java/JavaVirtualMachines/jdk1.8.0_25.jdk/Contents/Home/jre/lib/endorsed
java.ext.dirs=/Users/Brandon/Library/Java/Extensions:/Library/Java/JavaVirtualMachines/jdk1.8.0_25.jdk/Contents/Home/jre/lib/ext:/Library/Java/Extensions:/Network/Library/Java/Extensions:/System/Library/Java/Extensions:/usr/lib/java
java.home=/Library/Java/JavaVirtualMachines/jdk1.8.0_25.jdk/Contents/Home/jre
java.io.tmpdir=/var/folders/bl/bqwbhwg939b1n19n4znq53f40000gn/T/
java.runtime.name=Java(TM) SE Runtime Environment
java.runtime.version=1.8.0_25-b17
java.specification.name=Java Platform API Specification
java.specification.vendor=Oracle Corporation
java.specification.version=1.8
java.vendor=Oracle Corporation
java.vendor.url=http://java.oracle.com/
java.vendor.url.bug=http://bugreport.sun.com/bugreport/
java.version=1.8.0_25
java.vm.info=mixed mode
java.vm.name=Java HotSpot(TM) 64-Bit Server VM
java.vm.specification.name=Java Virtual Machine Specification
java.vm.specification.vendor=Oracle Corporation
java.vm.specification.version=1.8
java.vm.vendor=Oracle Corporation
java.vm.version=25.25-b02
os.arch=x86_64
os.name=Mac OS X
os.version=10.11.4
Mar 22, 2016 3:04:27 PM us.fatehi.commandlineparser.CommandLineUtility logSystemProperties
CONFIG: Classpath: 
./_schemacrawler/lib/h2-1.4.191.jar
./_schemacrawler/lib/hsqldb-2.3.3.jar
./_schemacrawler/lib/jtds-1.3.1.jar
./_schemacrawler/lib/mariadb-java-client-1.3.5.jar
./_schemacrawler/lib/mysql-connector-java-5.1.38.jar
./_schemacrawler/lib/postgresql-9.4.1207.jar
./_schemacrawler/lib/schemacrawler-14.06.05.jar
./_schemacrawler/lib/schemacrawler-api-14.06.05-tests.jar
./_schemacrawler/lib/schemacrawler-db2-14.06.05.jar
./_schemacrawler/lib/schemacrawler-h2-14.06.05.jar
./_schemacrawler/lib/schemacrawler-hsqldb-14.06.05.jar
./_schemacrawler/lib/schemacrawler-lint-14.06.05.jar
./_schemacrawler/lib/schemacrawler-mariadb-14.06.05.jar
./_schemacrawler/lib/schemacrawler-mysql-14.06.05.jar
./_schemacrawler/lib/schemacrawler-offline-14.06.05.jar
./_schemacrawler/lib/schemacrawler-oracle-14.06.05.jar
./_schemacrawler/lib/schemacrawler-postgresql-14.06.05.jar
./_schemacrawler/lib/schemacrawler-sqlite-14.06.05.jar
./_schemacrawler/lib/schemacrawler-sqlserver-14.06.05.jar
./_schemacrawler/lib/schemacrawler-sybaseiq-14.06.05.jar
./_schemacrawler/lib/sqlite-jdbc-3.7.8.jar
./_schemacrawler/lib/xmlpull-1.1.3.1.jar
./_schemacrawler/lib/xpp3_min-1.1.4c.jar
./_schemacrawler/lib/xstream-1.4.8.jar
Mar 22, 2016 3:04:27 PM schemacrawler.tools.databaseconnector.DatabaseConnectorRegistry loadDatabaseConnectorRegistry
CONFIG: Loading database connector, db2=schemacrawler.server.db2.DB2DatabaseConnector
Mar 22, 2016 3:04:27 PM schemacrawler.tools.databaseconnector.DatabaseConnectorRegistry loadDatabaseConnectorRegistry
CONFIG: Loading database connector, h2=schemacrawler.server.h2.H2DatabaseConnector
Mar 22, 2016 3:04:27 PM schemacrawler.tools.databaseconnector.DatabaseConnectorRegistry loadDatabaseConnectorRegistry
CONFIG: Loading database connector, hsqldb=schemacrawler.server.hsqldb.HyperSQLDatabaseConnector
Mar 22, 2016 3:04:27 PM schemacrawler.tools.databaseconnector.DatabaseConnectorRegistry loadDatabaseConnectorRegistry
CONFIG: Loading database connector, mariadb=schemacrawler.server.mariadb.MariaDBDatabaseConnector
Mar 22, 2016 3:04:27 PM schemacrawler.tools.databaseconnector.DatabaseConnectorRegistry loadDatabaseConnectorRegistry
CONFIG: Loading database connector, mysql=schemacrawler.server.mysql.MySQLDatabaseConnector
Mar 22, 2016 3:04:27 PM schemacrawler.tools.databaseconnector.DatabaseConnectorRegistry loadDatabaseConnectorRegistry
CONFIG: Loading database connector, offline=schemacrawler.tools.offline.OfflineDatabaseConnector
Mar 22, 2016 3:04:27 PM schemacrawler.tools.databaseconnector.DatabaseConnectorRegistry loadDatabaseConnectorRegistry
CONFIG: Loading database connector, oracle=schemacrawler.server.oracle.OracleDatabaseConnector
Mar 22, 2016 3:04:27 PM schemacrawler.tools.databaseconnector.DatabaseConnectorRegistry loadDatabaseConnectorRegistry
CONFIG: Loading database connector, postgresql=schemacrawler.server.postgresql.PostgreSQLDatabaseConnector
Mar 22, 2016 3:04:27 PM schemacrawler.tools.databaseconnector.DatabaseConnectorRegistry loadDatabaseConnectorRegistry
CONFIG: Loading database connector, sqlite=schemacrawler.tools.sqlite.SQLiteDatabaseConnector
Mar 22, 2016 3:04:27 PM schemacrawler.tools.databaseconnector.DatabaseConnectorRegistry loadDatabaseConnectorRegistry
CONFIG: Loading database connector, sqlserver=schemacrawler.server.sqlserver.SqlServerDatabaseConnector
Mar 22, 2016 3:04:27 PM schemacrawler.tools.databaseconnector.DatabaseConnectorRegistry loadDatabaseConnectorRegistry
CONFIG: Loading database connector, sybaseiq=schemacrawler.server.sybaseiq.SybaseIQDatabaseConnector
Mar 22, 2016 3:04:27 PM schemacrawler.tools.databaseconnector.DatabaseConnectorRegistry logRegisteredJdbcDrivers
CONFIG: Registered JDBC drivers, [com.mysql.fabric.jdbc.FabricMySQLDriver 5.1, com.mysql.jdbc.Driver 5.1, net.sourceforge.jtds.jdbc.Driver 1.3, org.h2.Driver 1.4, org.hsqldb.jdbc.JDBCDriver 2.3, org.mariadb.jdbc.Driver 1.3, org.postgresql.Driver 9.4, org.sqlite.JDBC 3.7, schemacrawler.tools.offline.jdbc.OfflineDriver 0.0]
Mar 22, 2016 3:04:27 PM schemacrawler.tools.commandline.SchemaCrawlerCommandLine <init>
INFO: Using database plugin, postgresql - PostgreSQL
Mar 22, 2016 3:04:27 PM schemacrawler.schemacrawler.Config loadProperties
INFO: Loading properties from file, /Users/Brandon/Dropbox/Projects/Pilcrow/Drank/Code/SQL/schemacrawler/schemacrawler.config.properties
Mar 22, 2016 3:04:27 PM schemacrawler.schemacrawler.Config loadProperties
CONFIG: Cannot load properties from file, /Users/Brandon/Dropbox/Projects/Pilcrow/Drank/Code/SQL/schemacrawler/schemacrawler.additional.config.properties
Mar 22, 2016 3:04:27 PM schemacrawler.tools.commandline.SchemaCrawlerOptionsParser getOptions
WARNING: Please provide a -schemas option for efficient retrieval of database metadata
Mar 22, 2016 3:04:27 PM schemacrawler.tools.commandline.SchemaCrawlerOptionsParser logOverride
INFO: Overriding tables inclusion rule from command-line to RegularExpressionRule@512ddf17-include/^(?!.*spatial_ref_sys).*/-exclude//]
Mar 22, 2016 3:04:27 PM schemacrawler.schemacrawler.BaseDatabaseConnectionOptions getConnection
WARNING: Database password is not provided
Mar 22, 2016 3:04:27 PM schemacrawler.schemacrawler.BaseDatabaseConnectionOptions getConnection
INFO: Making connection to jdbc:postgresql://::1:5432/drank?ApplicationName=SchemaCrawler
for user 'postgres', with properties {}
Mar 22, 2016 3:04:27 PM schemacrawler.schemacrawler.BaseDatabaseConnectionOptions getConnection
INFO: Opened database connection, org.postgresql.jdbc.PgConnection@34ce8af7
Mar 22, 2016 3:04:27 PM schemacrawler.schemacrawler.BaseDatabaseConnectionOptions logConnection
INFO: Connected to 
PostgreSQL 9.5.1 
using JDBC driver 
PostgreSQL Native Driver PostgreSQL 9.4.1207
Mar 22, 2016 3:04:27 PM schemacrawler.tools.executable.BaseStagedExecutable execute
INFO: Executing SchemaCrawler command, "graph"
Mar 22, 2016 3:04:27 PM schemacrawler.tools.executable.BaseStagedExecutable execute
CONFIG: schemacrawler.schemacrawler.SchemaCrawlerOptions@36d64342[
  childTableFilterDepth: 0
  columnInclusionRule: RegularExpressionRule@17c68925-include/.*/-exclude//]
  grepColumnInclusionRule: null
  grepDefinitionInclusionRule: null
  grepInvertMatch: false
  grepOnlyMatching: false
  grepRoutineColumnInclusionRule: null
  hideEmptyTables: false
  parentTableFilterDepth: 0
  routineColumnInclusionRule: IncludeAll
  routineInclusionRule: IncludeAll
  routineTypes: [procedure, function]
  schemaInclusionRule: IncludeAll
  schemaInfoLevel: standard
  sequenceInclusionRule: RegularExpressionRule@6fadae5d-include//-exclude/.*/]
  synonymInclusionRule: RegularExpressionRule@17f6480-include//-exclude/.*/]
  tableInclusionRule: RegularExpressionRule@512ddf17-include/^(?!.*spatial_ref_sys).*/-exclude//]
  tableNamePattern: null
  tableTypes: [table]
  title: 
]
Mar 22, 2016 3:04:27 PM schemacrawler.tools.executable.BaseStagedExecutable execute
CONFIG: schemacrawler.tools.options.OutputOptions@506e6d5e[
  inputEncodingCharset: UTF-8
  inputResource: null
  outputEncodingCharset: UTF-8
  outputFormatValue: pdf
  outputResource: /Users/Brandon/Dropbox/Projects/Pilcrow/Drank/Code/SQL/schemacrawler/diagram.pdf
]
Mar 22, 2016 3:04:27 PM schemacrawler.crawl.RetrieverConnection <init>
CONFIG: Database does not support catalogs
Mar 22, 2016 3:04:27 PM schemacrawler.crawl.RetrieverConnection <init>
CONFIG: Database supports schemas
Mar 22, 2016 3:04:27 PM schemacrawler.crawl.RetrieverConnection <init>
CONFIG: Database identifier quote string is """
Mar 22, 2016 3:04:27 PM schemacrawler.crawl.RetrieverConnection <init>
CONFIG: Supported table types are [temporary sequence, system toast table, foreign table, system toast index, system view, view, temporary table, table, system index, system table, sequence, index, temporary index, materialized view, temporary view, type]
Mar 22, 2016 3:04:27 PM schemacrawler.crawl.SchemaCrawler crawlSchemas
INFO: Crawling schemas
Mar 22, 2016 3:04:27 PM schemacrawler.crawl.SchemaRetriever retrieveAllSchemas
INFO: Retrieving all schemas
Mar 22, 2016 3:04:27 PM schemacrawler.crawl.SchemaRetriever retrieveAllCatalogs
INFO: Retrieving all catalogs
Mar 22, 2016 3:04:27 PM schemacrawler.crawl.MetadataResultSet close
INFO: "retrieveAllSchemas" results had 4 rows
Mar 22, 2016 3:04:27 PM schemacrawler.crawl.SchemaCrawler crawlSchemas
INFO: Total time taken for "crawlSchemas" - 00:00:00.008 hours
-  0.0% - 00:00:00.008 - "retrieveSchemas"
-  0.0% - 00:00:00.000 - "sortAndFilterSchemas"

Mar 22, 2016 3:04:27 PM schemacrawler.crawl.SchemaCrawler crawlDatabaseInfo
INFO: Crawling SchemaCrawler information
Mar 22, 2016 3:04:27 PM schemacrawler.crawl.SchemaCrawler crawlDatabaseInfo
INFO: Retrieving database information
Mar 22, 2016 3:04:27 PM schemacrawler.crawl.SchemaCrawler lambda$crawlDatabaseInfo$4
INFO: Not retrieving additional database information, since this was not requested
Mar 22, 2016 3:04:27 PM schemacrawler.crawl.SchemaCrawler crawlDatabaseInfo
INFO: Retrieving JDBC driver information
Mar 22, 2016 3:04:27 PM schemacrawler.crawl.SchemaCrawler lambda$crawlDatabaseInfo$6
INFO: Not retrieving additional JDBC driver information, since this was not requested
Mar 22, 2016 3:04:27 PM schemacrawler.crawl.SchemaCrawler crawlDatabaseInfo
INFO: Retrieving SchemaCrawler crawl information
Mar 22, 2016 3:04:27 PM schemacrawler.crawl.SchemaCrawler crawlDatabaseInfo
INFO: Total time taken for "crawlDatabaseInfo" - 00:00:00.016 hours
-  0.0% - 00:00:00.000 - "retrieveDatabaseInfo"
-  0.0% - 00:00:00.000 - "retrieveAdditionalDatabaseInfo"
-  0.0% - 00:00:00.000 - "retrieveJdbcDriverInfo"
-  0.0% - 00:00:00.000 - "retrieveAdditionalJdbcDriverInfo"
-  0.0% - 00:00:00.016 - "retrieveCrawlHeaderInfo"

Mar 22, 2016 3:04:27 PM schemacrawler.crawl.SchemaCrawler crawlColumnDataTypes
INFO: Crawling column data types
Mar 22, 2016 3:04:27 PM schemacrawler.crawl.SchemaCrawler lambda$crawlColumnDataTypes$1
INFO: Retrieving system column data types
Mar 22, 2016 3:04:27 PM schemacrawler.crawl.SchemaCrawler lambda$crawlColumnDataTypes$2
INFO: Not retrieving user column data types, since this was not requested
Mar 22, 2016 3:04:27 PM schemacrawler.crawl.SchemaCrawler crawlColumnDataTypes
INFO: Total time taken for "crawlColumnDataTypes" - 00:00:00.190 hours
-  0.0% - 00:00:00.190 - "retrieveSystemColumnDataTypes"
-  0.0% - 00:00:00.000 - "retrieveUserDefinedColumnDataTypes"

Mar 22, 2016 3:04:27 PM schemacrawler.crawl.SchemaCrawler crawlTables
INFO: Crawling tables
Mar 22, 2016 3:04:27 PM schemacrawler.crawl.TableRetriever retrieveTables
INFO: Retrieving tables
Mar 22, 2016 3:04:27 PM schemacrawler.crawl.MetadataResultSet close
INFO: "retrieveTables" results had 1 rows
Mar 22, 2016 3:04:27 PM schemacrawler.crawl.TableRetriever retrieveTables
INFO: Retrieving tables
Mar 22, 2016 3:04:27 PM schemacrawler.crawl.MetadataResultSet close
INFO: "retrieveTables" results had 1 rows
Mar 22, 2016 3:04:27 PM schemacrawler.crawl.TableRetriever retrieveTables
INFO: Retrieving tables
Mar 22, 2016 3:04:27 PM schemacrawler.crawl.MetadataResultSet close
INFO: "retrieveTables" results had 33 rows
Mar 22, 2016 3:04:27 PM schemacrawler.crawl.TableColumnRetriever retrieveColumns
INFO: Retrieving table columns
Mar 22, 2016 3:04:27 PM schemacrawler.crawl.ForeignKeyRetriever retrieveForeignKeys
INFO: Retrieving foreign keys, using database metadata
Mar 22, 2016 3:04:28 PM schemacrawler.crawl.SchemaCrawler lambda$crawlTables$22
INFO: Retrieving primary keys and indexes
Mar 22, 2016 3:04:28 PM schemacrawler.crawl.IndexRetriever retrieveIndexes
INFO: Retrieving indexes, using database metadata
Mar 22, 2016 3:04:28 PM schemacrawler.crawl.SchemaCrawler crawlTables
INFO: Total time taken for "crawlTables" - 00:00:01.019 hours
-  0.0% - 00:00:00.015 - "retrieveTables"
-  0.0% - 00:00:00.100 - "retrieveColumns"
-  0.0% - 00:00:00.679 - "retrieveForeignKeys"
-  0.0% - 00:00:00.018 - "filterAndSortTables"
-  0.0% - 00:00:00.206 - "retrieveIndexes"
-  0.0% - 00:00:00.000 - "retrieveTableConstraintInformation"
-  0.0% - 00:00:00.000 - "retrieveTriggerInformation"
-  0.0% - 00:00:00.000 - "retrieveViewInformation"
-  0.0% - 00:00:00.000 - "retrieveTableDefinitions"
-  0.0% - 00:00:00.001 - "retrieveIndexInformation"
-  0.0% - 00:00:00.000 - "retrieveAdditionalTableAttributes"
-  0.0% - 00:00:00.000 - "retrieveTablePrivileges"
-  0.0% - 00:00:00.000 - "retrieveAdditionalColumnAttributes"
-  0.0% - 00:00:00.000 - "retrieveTableColumnPrivileges"

Mar 22, 2016 3:04:28 PM schemacrawler.crawl.SchemaCrawler crawlRoutines
INFO: Crawling routines
Mar 22, 2016 3:04:28 PM schemacrawler.crawl.RoutineRetriever retrieveProcedures
INFO: Retrieving procedures for, information_schema
Mar 22, 2016 3:04:28 PM schemacrawler.crawl.MetadataResultSet close
INFO: "retrieveProcedures" results had 13 rows
Mar 22, 2016 3:04:28 PM schemacrawler.crawl.RoutineRetriever retrieveFunctions
INFO: Retrieving functions, information_schema
Mar 22, 2016 3:04:28 PM schemacrawler.crawl.RoutineRetriever retrieveProcedures
INFO: Retrieving procedures for, pg_catalog
Mar 22, 2016 3:04:28 PM schemacrawler.crawl.MetadataResultSet close
INFO: "retrieveProcedures" results had 2810 rows
Mar 22, 2016 3:04:28 PM schemacrawler.crawl.RoutineRetriever retrieveFunctions
INFO: Retrieving functions, pg_catalog
Mar 22, 2016 3:04:28 PM schemacrawler.crawl.RoutineRetriever retrieveProcedures
INFO: Retrieving procedures for, public
Mar 22, 2016 3:04:28 PM schemacrawler.crawl.MetadataResultSet close
INFO: "retrieveProcedures" results had 1186 rows
Mar 22, 2016 3:04:28 PM schemacrawler.crawl.RoutineRetriever retrieveFunctions
INFO: Retrieving functions, public
Mar 22, 2016 3:04:28 PM schemacrawler.crawl.SchemaCrawler lambda$crawlRoutines$9
INFO: Retrieving routine columns
Exception retrieving routine information: null
Re-run SchemaCrawler with the
-?
option for help
Or, re-run SchemaCrawler with an additional
-loglevel=CONFIG
option for details on the error
Mar 22, 2016 3:04:31 PM us.fatehi.commandlineparser.CommandLineUtility logSafeArguments
INFO: SchemaCrawler, v14.06.05
Mar 22, 2016 3:04:31 PM us.fatehi.commandlineparser.CommandLineUtility logSafeArguments
INFO: Command line: 
-g=./schemacrawler.config.properties
-infolevel=standard
-server=postgresql
-database=drank
-host=::1
-user=postgres
-password=*****
-command=graph
-outputformat=pdf
-outputfile=diagram.pdf
-tabletypes=table
-tables=^(?!.*spatial_ref_sys).*
-Gdpi=300
-loglevel=CONFIG
Mar 22, 2016 3:04:31 PM us.fatehi.commandlineparser.CommandLineUtility logFullStackTrace
SEVERE: Exception retrieving routine information: null
schemacrawler.schemacrawler.SchemaCrawlerException: Exception retrieving routine information: null
    at schemacrawler.crawl.SchemaCrawler.crawlRoutines(SchemaCrawler.java:346)
    at schemacrawler.crawl.SchemaCrawler.crawl(SchemaCrawler.java:759)
    at schemacrawler.tools.executable.BaseStagedExecutable.execute(BaseStagedExecutable.java:84)
    at schemacrawler.tools.commandline.SchemaCrawlerCommandLine.execute(SchemaCrawlerCommandLine.java:125)
    at schemacrawler.Main.main(Main.java:80)
Caused by: java.lang.NullPointerException
    at org.postgresql.jdbc.TypeInfoCache.getSQLType(TypeInfoCache.java:185)
    at org.postgresql.jdbc.TypeInfoCache.getSQLType(TypeInfoCache.java:180)
    at org.postgresql.jdbc.PgDatabaseMetaData.getProcedureColumns(PgDatabaseMetaData.java:1358)
    at org.postgresql.jdbc.PgDatabaseMetaData.getProcedureColumns(PgDatabaseMetaData.java:1146)
    at schemacrawler.crawl.RoutineRetriever.retrieveProcedureColumns(RoutineRetriever.java:252)
    at schemacrawler.crawl.SchemaCrawler.lambda$crawlRoutines$9(SchemaCrawler.java:298)
    at schemacrawler.crawl.SchemaCrawler$$Lambda$47/1468303011.call(Unknown Source)
    at sf.util.StopWatch.time(StopWatch.java:129)
    at schemacrawler.crawl.SchemaCrawler.crawlRoutines(SchemaCrawler.java:288)
    ... 4 more

This worked perfectly before but I have since made many changes to the schema.

Website version not exact

The website is showing:

    Version: 14.07.04 | Last Published: 2016-03-16 

whereas the right version is 14.07.05

Slow performance while retrieving foreign keys

Hi,

SchemaCrawler seems to have a performance issue when retrieving foreign keys.

On our databases (Oracle 11g), SchemaCrawler needs more than two hours to dump all tables of a very small database (650 MB in total, although it contains many database objects):

select sum(BYTES/1024/1024) as TOTAL_MB from user_segments;
--returns 655 

select count(1) from all_cons_columns;
--returns 8633

select count(1) from all_constraints;
--returns 8541

select count(1) from tab;
--returns 1487

The following stack trace shows where most of the time is spent:

  java.lang.Thread.State: RUNNABLE
        at java.net.SocketInputStream.socketRead0(Native Method)
        at java.net.SocketInputStream.read(SocketInputStream.java:152)
        at java.net.SocketInputStream.read(SocketInputStream.java:122)
        at oracle.net.ns.Packet.receive(Packet.java:308)
        at oracle.net.ns.DataPacket.receive(DataPacket.java:106)
        at oracle.net.ns.NetInputStream.getNextPacket(NetInputStream.java:324)
        at oracle.net.ns.NetInputStream.read(NetInputStream.java:268)
        at oracle.net.ns.NetInputStream.read(NetInputStream.java:190)
        at oracle.net.ns.NetInputStream.read(NetInputStream.java:107)
        at oracle.jdbc.driver.T4CSocketInputStreamWrapper.readNextPacket(T4CSocketInputStreamWrapper.java:124)
        at oracle.jdbc.driver.T4CSocketInputStreamWrapper.read(T4CSocketInputStreamWrapper.java:80)
        at oracle.jdbc.driver.T4CMAREngine.unmarshalUB1(T4CMAREngine.java:1137)
        at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:350)
        at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:227)
        at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:531)
        at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:208)
        at oracle.jdbc.driver.T4CPreparedStatement.executeForDescribe(T4CPreparedStatement.java:886)
        at oracle.jdbc.driver.OracleStatement.executeMaybeDescribe(OracleStatement.java:1175)
        at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1296)
        at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3613)
        at oracle.jdbc.driver.OraclePreparedStatement.executeQuery(OraclePreparedStatement.java:3657)
        - locked <0x0000000706fbe278> (a oracle.jdbc.driver.T4CConnection)
        at oracle.jdbc.driver.OraclePreparedStatementWrapper.executeQuery(OraclePreparedStatementWrapper.java:1495)
        at oracle.jdbc.OracleDatabaseMetaData.keys_query(OracleDatabaseMetaData.java:3731)
        at oracle.jdbc.OracleDatabaseMetaData.getExportedKeys(OracleDatabaseMetaData.java:3880)
        at schemacrawler.crawl.TableRetriever.retrieveForeignKeys(TableRetriever.java:173)
        at schemacrawler.crawl.SchemaCrawler.crawlTables(SchemaCrawler.java:443)
        at schemacrawler.crawl.SchemaCrawler.crawl(SchemaCrawler.java:553)
        at schemacrawler.tools.executable.BaseStagedExecutable.execute(BaseStagedExecutable.java:69)
        at schemacrawler.tools.commandline.SchemaCrawlerCommandLine.execute(SchemaCrawlerCommandLine.java:121)
        at schemacrawler.Main.main(Main.java:87)

The cause seems to be the "select all foreign keys" query, which in my case takes about 5 seconds to return.
Because ~1500 tables need to be crawled, the query seems to be executed 1500 times, so this takes approximately 1500 * 5 seconds = ~2 hours to complete.

A possible solution would be to execute the "select all foreign keys" query only once for all tables, not once per table (see the sketch below).
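
A minimal sketch of that single-query idea, assuming an Oracle connection and the standard ALL_CONSTRAINTS / ALL_CONS_COLUMNS dictionary views. This only illustrates fetching every foreign-key column in one round trip; it is not SchemaCrawler's actual retrieval code, and the connection details are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public final class BulkForeignKeySketch {

    public static void main(final String[] args) throws SQLException {
        final String url = "jdbc:oracle:thin:@//localhost:1521/ORCL"; // placeholder connection details
        try (Connection connection = DriverManager.getConnection(url, "scott", "tiger");
             Statement statement = connection.createStatement();
             // One query for all foreign-key columns, instead of one getExportedKeys() call per table
             ResultSet results = statement.executeQuery(
                 "SELECT fk.table_name, col.column_name, fk.constraint_name, "
                     + "pk.table_name AS referenced_table "
                     + "FROM all_constraints fk "
                     + "JOIN all_constraints pk "
                     + "  ON fk.r_owner = pk.owner AND fk.r_constraint_name = pk.constraint_name "
                     + "JOIN all_cons_columns col "
                     + "  ON fk.owner = col.owner AND fk.constraint_name = col.constraint_name "
                     + "WHERE fk.constraint_type = 'R' "
                     + "ORDER BY fk.table_name, fk.constraint_name, col.position")) {
            while (results.next()) {
                System.out.printf("%s.%s -> %s (%s)%n",
                    results.getString("TABLE_NAME"),
                    results.getString("COLUMN_NAME"),
                    results.getString("REFERENCED_TABLE"),
                    results.getString("CONSTRAINT_NAME"));
            }
        }
    }
}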

Trigger information retrieval in PostgreSQL Java API

Hello,
There is an issue retrieving trigger information on PostgreSQL 9.4 when using the API:
The column name must be ACTION_TIMING in the SELECT query instead of CONDITION_TIMING. The driver version is the latest available PostgreSQL JDBC driver; it does not work with the driver included in the SchemaCrawler packages either. (A sketch of the corrected query follows the stack trace below.)

Here is the stack trace:

PM sf.util.DatabaseUtility executeSql
AVERTISSEMENT: Error executing: SELECT
NULL AS TRIGGER_CATALOG,
TRIGGER_SCHEMA,
TRIGGER_NAME,
EVENT_MANIPULATION,
EVENT_OBJECT_CATALOG,
EVENT_OBJECT_SCHEMA,
EVENT_OBJECT_TABLE,
ACTION_ORDER,
ACTION_CONDITION,
ACTION_STATEMENT,
ACTION_ORIENTATION,
CONDITION_TIMING,
CREATED
FROM
INFORMATION_SCHEMA.TRIGGERS

org.postgresql.util.PSQLException: ERREUR: la colonne « condition_timing » n'existe pas
Position : 252
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2182)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1911)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:173)
at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:645)
at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:481)
at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:473)
at sf.util.DatabaseUtility.executeSql(DatabaseUtility.java:118)
at schemacrawler.crawl.TableExtRetriever.retrieveTriggerInformation(TableExtRetriever.java:450)
at schemacrawler.crawl.SchemaCrawler.lambda$crawlTables$23(SchemaCrawler.java:615)
at sf.util.StopWatch.time(StopWatch.java:129)
at schemacrawler.crawl.SchemaCrawler.crawlTables(SchemaCrawler.java:612)
at schemacrawler.crawl.SchemaCrawler.crawl(SchemaCrawler.java:742)
at schemacrawler.utility.SchemaCrawlerUtility.getCatalog(SchemaCrawlerUtility.java:60)
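
A sketch of the corrected statement (not necessarily SchemaCrawler's shipped SQL): it is the query from the log above with CONDITION_TIMING replaced by ACTION_TIMING, which is the column name in PostgreSQL 9.x INFORMATION_SCHEMA.TRIGGERS:

public final class PostgresTriggersQuerySketch {

    // Same column list as the logged query, with ACTION_TIMING in place of CONDITION_TIMING
    static final String TRIGGERS_QUERY =
        "SELECT "
            + "NULL AS TRIGGER_CATALOG, TRIGGER_SCHEMA, TRIGGER_NAME, EVENT_MANIPULATION, "
            + "EVENT_OBJECT_CATALOG, EVENT_OBJECT_SCHEMA, EVENT_OBJECT_TABLE, ACTION_ORDER, "
            + "ACTION_CONDITION, ACTION_STATEMENT, ACTION_ORIENTATION, ACTION_TIMING, CREATED "
            + "FROM INFORMATION_SCHEMA.TRIGGERS";

    public static void main(final String[] args) {
        System.out.println(TRIGGERS_QUERY);
    }
}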

Custom lint outputFileFormat

Description

We are using more and more lints on https://github.com/mbarre/schemacrawler-additionnallints and we are beginning to have some new needs around lint reporting.

Concept

Following the same philosophy as SchemaCrawler plugins, we'd like to extend custom lint reports and create, for example, custom HTML reports. There already is one, but we'd like to be able to create custom ones, for example by adding charts and scores, and hence get a real dashboard for our lints.

Does this extension feature already exist? Can we extend a class and, for example, embed this output format in our custom lint or any other project?

Thank you in advance for your feedback. We find this idea, which is linked to #30, very exciting.
With that, we could implement a kind of "SonarQube" for database schemas!... which would be pretty cool ;-p

Dependencies update

I'm asking this so I can flag some of them as non-upgradable in the dependencies; see https://www.versioneye.com/user/projects/56b687660a0ff5002c860389

Thank you in advance for your answer.
