Chapter 12. Deployment Guide

Fred Toussi

The HSQL Development Group

$Revision: 6692 $

Copyright 2002-2024 Fred Toussi. Permission is granted to distribute this document without any alteration under the terms of the HSQLDB license. Additional permission is granted to the HSQL Development Group to distribute this document with or without alterations under the terms of the HSQLDB license.

2024-06-01

Table of Contents

Memory and Disk Use
Table Memory Allocation
Data Cache Memory Allocation
Result Set Memory Allocation
Temporary Memory Use During Operations
Object Pool Memory Allocation
Lob Memory Usage
Using NIO File Access
Disk Space Use
Using HyperSQL Without Logging Data Change
Bulk Inserts, Updates and Deletes
Managing Database Connections
Application Development and Testing
Tweaking the Mode of Operation
Embedded Databases in Desktop Applications
Embedded Databases in Server Applications
Mixed Mode : Embedding a HyperSQL Server (Listener)
Server Databases
Upgrading Databases
Manual Changes to the *.script File
Backward Compatibility Issues
HyperSQL Dependency Settings for Applications
What version to Pull
Range Versioning

Memory and Disk Use

Memory used by the program can be thought of as two distinct pools: memory used for table data, which is not released unless the data is deleted; and memory that is released automatically, which includes memory used for caching, building result sets and other internal operations such as storing the information needed for rolling back a transaction.

Most JVM implementations allocate up to a maximum amount of memory (usually 64 MB by default). This amount is generally not adequate when large memory tables are used, or when the average size of rows in cached tables is larger than a few thousand bytes. The maximum amount of allocated memory can be set on the Java command line that is used for running HyperSQL. For example, the JVM parameter -Xmx256m increases the amount to 256 MB.

Table Memory Allocation

The memory used for a MEMORY table is the sum of memory used by each row. Each MEMORY table row is a Java object that contains an array of objects for the fields in the row. Each field is an object such as Integer, Long, String, etc. In addition, each index on the table adds a node object. As a result, a table with just one column of type INTEGER will have four objects per row taking up 80 bytes per row. Beyond this, each extra column in the table adds at least a few bytes to the size of each row. TEXT tables use less memory than MEMORY tables, as the array of objects is read from the disk only when needed. CACHED tables use even less memory, as discussed below.

Data Cache Memory Allocation

With CACHED tables, the data is stored on disk and only up to a maximum number of rows are held in the memory cache at any time. These cached rows use the same amount of memory per row as MEMORY tables. A single memory cache is used for all CACHED tables.

The default maximum number of rows in the memory cache is 50,000 rows. As any random subset of the rows in any of the CACHED tables can be held in the cache, the amount of memory needed for cached rows can reach the sum of the rows containing the largest field data. For example, if a table with 100,000 rows contains 40,000 rows with 1,000 bytes of data in each row and 60,000 rows with 100 bytes in each, the cache can grow to contain 50,000 of the smaller rows, but, as explained below, only 10,000 of the larger rows. The SET FILES CACHE ROWS command or the hsqldb.cache_rows connection property can be used to alter this amount.

The default maximum total size of the rows in the cache is 10,000 KB. This is the size of the binary images of the rows and indexes. It translates to more actual memory, typically 2-4 times as much, used for the cache because the data is represented by Java objects. The SET FILES CACHE SIZE command or the hsqldb.cache_size connection property can be used to alter this amount.

If memory is limited, the hsqldb.cache_size or hsqldb.cache_rows settings can be reduced. In the example above, if the hsqldb.cache_size is reduced from 10,000 to 5,000, it will allow the number of cached rows to reach 50,000 small rows, but only 5,000 of the larger rows.
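
As a hedged sketch of tuning these settings from JDBC (the file path, credentials and values below are assumptions chosen for illustration, not recommendations):

 // Minimal sketch: tuning the CACHED table memory cache from JDBC.
 // The database path, credentials and values are assumptions.
 import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.Statement;

 public class CacheTuning {
     public static void main(String[] args) throws Exception {
         try (Connection c = DriverManager.getConnection(
                 "jdbc:hsqldb:file:/data/testdb", "SA", "");
              Statement st = c.createStatement()) {
             st.execute("SET FILES CACHE ROWS 20000"); // maximum rows held in the cache
             st.execute("SET FILES CACHE SIZE 5000");  // maximum total size (KB) of cached row images
             st.execute("SHUTDOWN");                   // persist the settings and close
         }
     }
 }

The same values can alternatively be supplied as the hsqldb.cache_rows and hsqldb.cache_size connection properties on the URL that opens the database.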

Data for CLOB and BLOB columns is not cached and does not affect the CACHED table memory cache.

The operating system usually allocates a large amount of buffer memory to speed up file read operations. Therefore, when a lot of memory is available to the operating system, all database operations perform faster.

Result Set Memory Allocation

By default, all the rows in a result set are built in memory, so it may not be possible to build very large result sets with limited memory. An in-process database releases the memory when the application program closes the java.sql.ResultSet object. A server mode database releases the result set from the server memory once the database server has returned the result set. A server requires additional memory for returning result sets, as it converts the full result set into an array of bytes which is then transmitted to the client.

In a server mode database, when the setFetchSize() method of the java.sql.Statement interface is used to limit the number of rows fetched, the whole result is held by the engine and is returned to the JDBC ResultSet in blocks of rows of the specified fetch size.

HyperSQL 2 supports disk-based result sets. The commands SET SESSION RESULT MEMORY ROWS <integer> and SET DATABASE DEFAULT RESULT MEMORY ROWS <integer> specify a threshold for the number of rows. Results with row counts above the threshold are stored on disk. These settings also apply to temporary tables, views and subquery tables.

Disk-based result sets slow down the database operations and should be used only when absolutely necessary, perhaps with result sets that are larger than tens of thousands of rows.
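
As a hedged sketch combining the two mechanisms above (the server URL, credentials and table name are assumptions): the session asks for large results to spill to disk, and the client fetches rows from the server in blocks of 1,000.

 // Minimal sketch: disk-based results above a row threshold, plus block
 // fetching in server mode. URL, credentials and table name are assumptions.
 import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.ResultSet;
 import java.sql.Statement;

 public class LargeResultDemo {
     public static void main(String[] args) throws Exception {
         try (Connection c = DriverManager.getConnection(
                 "jdbc:hsqldb:hsql://localhost/testdb", "SA", "");
              Statement st = c.createStatement()) {
             // results, views and subquery tables above 10,000 rows go to disk
             st.execute("SET SESSION RESULT MEMORY ROWS 10000");
             st.setFetchSize(1000);   // rows per block returned by the server
             try (ResultSet rs = st.executeQuery("SELECT * FROM big_table")) {
                 while (rs.next()) {
                     // process each row; later blocks are fetched as needed
                 }
             }
         }
     }
 }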

Temporary Memory Use During Operations

When UPDATE and DELETE queries are performed on CACHED tables, the full set of rows that are affected, including those affected due to ON UPDATE actions, is held in memory for the duration of the operation. This means it may not be possible to perform deletes or updates involving very large numbers of rows of CACHED tables with limited memory. Such operations should be performed in smaller sets. This memory is released as soon as the DELETE or UPDATE is performed.
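
One way to keep this transient memory bounded (a sketch, assuming a hypothetical LOG_ENTRIES table with a numeric ID column) is to perform the operation in key ranges and commit after each chunk:

 // Minimal sketch: deleting a very large number of rows from a CACHED table
 // in bounded chunks. Table, column names and ranges are assumptions.
 import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.PreparedStatement;

 public class ChunkedDelete {
     public static void main(String[] args) throws Exception {
         try (Connection c = DriverManager.getConnection(
                 "jdbc:hsqldb:file:/data/testdb", "SA", "")) {
             c.setAutoCommit(false);
             try (PreparedStatement ps = c.prepareStatement(
                     "DELETE FROM log_entries WHERE id >= ? AND id < ?")) {
                 long chunk = 10_000;
                 for (long lo = 0; lo < 1_000_000; lo += chunk) {
                     ps.setLong(1, lo);
                     ps.setLong(2, lo + chunk);
                     ps.executeUpdate();
                     c.commit();   // releases the memory held for this chunk
                 }
             }
             c.setAutoCommit(true);
         }
     }
 }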

When transaction support is enabled with SET AUTOCOMMIT FALSE, lists of all insert, delete or update operations are stored in memory so that they can be undone when ROLLBACK is issued. Transactions that span thousands of modifications to data will take up a lot of memory until the next COMMIT or ROLLBACK clears the list. Each row modification uses less than 100 bytes until COMMIT.

When subqueries or views are used in SELECT and other statements, transient tables are created and populated by the engine. If the SET SESSION RESULT MEMORY ROWS <integer> statement has been used, these transient tables are stored on disk when they are larger than the threshold.

Object Pool Memory Allocation

HyperSQL uses a set of fast pools for immutable objects such as Integer, Long and short String objects that are stored in the database. In most circumstances, this reduces the memory footprint still further as fewer copies of the most frequently used objects are kept in memory. The object pools are shared among all databases in the JVM. The size of each pool can be modified only by altering and recompiling the org.hsqldb.store.ValuePool class.

Lob Memory Usage

Access to lobs is always performed in chunks, so it is perfectly possible to store and access a CLOB or BLOB that is larger than the JVM memory allocation. The actual total size of lobs is almost unlimited. We have tested with over 100 GB of lobs without any loss of performance.

By default, HyperSQL 2 uses memory-based tables for the lobs schema (not the actual lob data). Therefore, it is practical to store about 100,000 individual lobs in the database with the default JVM memory allocation. More lobs can be stored with larger JVM memory allocations. In order to store millions of lobs with limited memory allocation, you can change the lob schema storage to CACHED tables with the following statements:

Example 12.1. Using CACHED tables for the LOB schema

 SET TABLE SYSTEM_LOBS.BLOCKS TYPE CACHED
 SET TABLE SYSTEM_LOBS.LOBS TYPE CACHED
 SET TABLE SYSTEM_LOBS.LOB_IDS TYPE CACHED

Using NIO File Access

This method of file access uses the operating system's memory-mapped file buffer for the .data file. For larger databases with CACHED tables, use of NIO improves database access speed significantly. Performance improvements can be tenfold or even higher. By default, NIO is used for .data files from 16 MB up to 256 MB. You can increase the limit with the SET FILES NIO SIZE <value> statement. There should be enough RAM available to accommodate the memory-mapped buffers. The memory is not taken from the JVM memory allocation, therefore there is no need to increase the -Xmx parameter of the JVM. If not enough memory is available for the specified value, NIO is not used.

Disk Space Use

With file: databases, the engine uses the disk for storage of data and any change. This includes: the .script file which is always present, the .log file which grows in size and is reset at regular intervals, the .data file when CACHED tables are used, and the .lobs file when CLOB or BLOB data is used. The .backup file is used for safety by the engine to store parts of the .data file that are modified internally during operation. Both the .log and .backup files are reset at each CHECKPOINT and a new copy of the .script file is written. Spare file space, larger than the total size of the .data and .script files, plus the maximum allowed size of the .log file, is needed. The .lobs file is not copied during database updates as it is not necessary for safety.

When the RESULT MEMORY ROWS setting is used to limit the memory rows in result sets and temporary tables, each session uses additional disk space for large results and temporary tables. These results are stored in files in the temp directory alongside the database files; the temporary files are deleted at database shutdown.

Using HyperSQL Without Logging Data Change

All file: databases (except read-only databases) write the changes to the .log file. There are scenarios where writing to the .log file can be turned off to improve performance, especially with larger databases used for temporary data. For these applications you can set the property hsqldb.log_data=false to disable the recovery log and speed up data change performance. The equivalent SQL command is SET FILES LOG FALSE. With this setting, no data is logged, but all the changes to cached tables are still written to the .data file. To persist the data changes made so far, you can use the CHECKPOINT command. If you perform SHUTDOWN, the data is also persisted correctly. If you do not use CHECKPOINT or SHUTDOWN when you terminate the application, the changes made since the last CHECKPOINT or SHUTDOWN are lost; when the database is next opened it reverts to the state at that point, without losing any of the original data.

This is also a useful option on some platforms, such as embedded devices with SSD storage, as it reduces write operations on SSD devices. The lock file can also be disabled with the connection property hsqldb.lock_file=false.
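
As a sketch (the file path is an assumption), both properties can be set on the connection URL that opens the database, with an explicit CHECKPOINT when you do want the accumulated changes persisted:

 // Minimal sketch: opening a temporary-data database without the recovery
 // log or the lock file, then persisting explicitly. The path is an assumption.
 import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.Statement;

 public class NoLogDatabase {
     public static void main(String[] args) throws Exception {
         String url = "jdbc:hsqldb:file:/data/cachedb"
                    + ";hsqldb.log_data=false;hsqldb.lock_file=false";
         try (Connection c = DriverManager.getConnection(url, "SA", "");
              Statement st = c.createStatement()) {
             // ... bulk or temporary data changes here ...
             st.execute("CHECKPOINT");   // persist the changes made so far
             st.execute("SHUTDOWN");     // SHUTDOWN also persists them
         }
     }
 }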

Your server applications can use a database as a temporary disk data cache which is not persisted past the lifetime of the application. For this usage, delete the database files when the application ends.

Bulk Inserts, Updates and Deletes

The following method gives the best performance for bulk inserts, updates and deletes, while keeping the database safe and consistent. In the event of a machine crash during the operation, the database can be recovered to the point just before the bulk operation. A minimal JDBC sketch of these steps follows the list.

  1. Before the operation, execute the SET FILES LOG FALSE statement.

  2. Execute the CHECKPOINT statement.

  3. Perform all the bulk operations, using batched prepared statements. A batch size of 1,000 to 10,000 is adequate. Commit after each batch.

  4. After all the bulk operations are complete, execute the SET FILES LOG TRUE statement.

  5. Finally execute the CHECKPOINT statement.

  6. If you have performed many thousands of updates or deletes (not just inserts), it is a good idea to execute CHECKPOINT DEFRAG, instead of CHECKPOINT at the end.

  7. If things go wrong during the bulk operation, for example when a unique constraint violation aborts the operation, and you want to redo the whole operation, just use SHUTDOWN IMMEDIATELY instead of CHECKPOINT. When you restart the database it will revert to the state at the first CHECKPOINT and the bulk operation can be redone.
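
The sketch below follows the listed steps with a simple batched INSERT (the file path, table and column names, and row count are assumptions):

 // Minimal sketch of the bulk-load procedure above, using batched prepared
 // statements. The path, table, columns and row count are assumptions.
 import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.PreparedStatement;
 import java.sql.Statement;

 public class BulkLoad {
     public static void main(String[] args) throws Exception {
         try (Connection c = DriverManager.getConnection(
                 "jdbc:hsqldb:file:/data/testdb", "SA", "");
              Statement st = c.createStatement()) {
             st.execute("SET FILES LOG FALSE");   // step 1: stop writing the .log file
             st.execute("CHECKPOINT");            // step 2: establish a recovery point
             c.setAutoCommit(false);
             try (PreparedStatement ps = c.prepareStatement(
                     "INSERT INTO items (id, name) VALUES (?, ?)")) {
                 for (int i = 0; i < 1_000_000; i++) {
                     ps.setInt(1, i);
                     ps.setString(2, "item-" + i);
                     ps.addBatch();
                     if ((i + 1) % 10_000 == 0) { // step 3: commit after each batch
                         ps.executeBatch();
                         c.commit();
                     }
                 }
                 ps.executeBatch();
                 c.commit();
             }
             c.setAutoCommit(true);
             st.execute("SET FILES LOG TRUE");    // step 4: re-enable the log
             st.execute("CHECKPOINT");            // step 5: or CHECKPOINT DEFRAG (step 6)
         }
     }
 }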

Managing Database Connections

In all running modes (server or in-process), multiple connections to the database engine are supported. In-process (standalone) mode supports connections from clients in the same Java Virtual Machine, while server modes support connections over the network from several different clients.

Connection pooling software can be used to connect to the database but it is not generally necessary. Connection pools may be used for the following reasons.

  • To allow new queries to be performed while a time-consuming query is being performed in the background. In HyperSQL, blocking depends on the transaction control model, the isolation level, and the current activity by other sessions.

  • To limit the maximum number of simultaneous connections to the database for performance reasons. With HSQLDB this can be useful if your application is designed in a way that opens and closes connections for each small task. Also, the overall performance may be higher when fewer simultaneous connections are used. If you want to reduce the number of simultaneous sessions, you can use a connection pool with fewer pooled connections.

An application that is not both multi-threaded and transactional, such as an application for recording user login and logout actions, does not need more than one connection. The connection can stay open indefinitely and be reopened only when it is dropped due to network problems.

When using an in-process database, when the last connection to the database is closed, the database still remains open, waiting for the next connection to be made. From version 2.2.9, each time the last connection is closed all the data changes are logged and synched to disk.

An explicit SHUTDOWN command, with or without an argument, is required to close the database. A connection property, shutdown=true, can be used on the connection URL to shutdown the database when the last connection is closed.
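
As a brief sketch (the file path is an assumption), the two approaches look like this:

 // Minimal sketch: closing an in-process database either with an explicit
 // SHUTDOWN or via the shutdown=true connection property. Path is an assumption.
 import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.Statement;

 public class ShutdownDemo {
     public static void main(String[] args) throws Exception {
         // explicit SHUTDOWN
         try (Connection c = DriverManager.getConnection(
                 "jdbc:hsqldb:file:/data/testdb", "SA", "");
              Statement st = c.createStatement()) {
             st.execute("SHUTDOWN");
         }
         // automatic shutdown when the last connection is closed
         try (Connection c = DriverManager.getConnection(
                 "jdbc:hsqldb:file:/data/testdb;shutdown=true", "SA", "")) {
             // work with the database; it shuts down when this connection closes
         }
     }
 }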

When using a server database (and to some extent, an in-process database), care must be taken to avoid creating and dropping JDBC Connections too frequently. Failure to observe this will result in poor performance when the application is under heavy load.

A common error made by users in load-test simulations is to use a single client machine to open and close thousands of connections to a HyperSQL server instance. The connection attempts will fail after a few thousand because of OS restrictions on opening sockets and the delay that is built into the OS in closing them.

Application Development and Testing

The first thing to be aware of is the SQL conformance settings of HyperSQL. By default, HyperSQL version 2 applies stricter conformance rules than version 1.8 and catches long strings or decimal values that do not fit within the specified length or precision settings. However, there are several conformance settings that are turned off by default. This is to enable easier migration from earlier versions, and also greater compatibility with databases such as MySQL that are sometimes very liberal with type conversions. The conformance settings are listed in the System Management chapter and their connection property equivalents are listed in the Database Properties chapter. Ideally, all the settings that are not for syntax compatibility with other databases should have a true value for best error checking. You can turn on the settings for syntax compatibility with another database if you are porting or testing applications targeted at the other database.

For application unit testing you can use an all-in-memory, in-process database.

If the tests are all run in a single Java process, the contents of a mem: database survive between tests. To release the contents, you can use the SHUTDOWN command (an SQL command). You can even use multiple mem: databases in your tests and SHUTDOWN each one separately.
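
A minimal sketch of this pattern (the database alias, table and data are assumptions):

 // Minimal sketch: an all-in-memory database for unit tests, cleared with
 // SHUTDOWN at the end. The alias, table and data are assumptions.
 import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.Statement;

 public class MemTestDatabase {
     public static void main(String[] args) throws Exception {
         try (Connection c = DriverManager.getConnection(
                 "jdbc:hsqldb:mem:testdb", "SA", "");
              Statement st = c.createStatement()) {
             st.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, name VARCHAR(20))");
             st.execute("INSERT INTO t VALUES (1, 'one')");
             // ... run test assertions against the data ...
             st.execute("SHUTDOWN");   // releases all contents of this mem: database
         }
     }
 }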

If the tests are done in different Java processes and you want to keep the data between the tests, the best solution is to use a Server instance that has a mem: database. After the tests are done, you can SHUTDOWN this database, which will shutdown the server.

The Server has an option that allows databases to be created as needed by making a connection (see the Listeners Chapter). This option is useful for testing, as your server is never shut down when a database is shutdown. Each time you connect to the mem: database that is served by the Server, the database is created if it does not exist (i.e. has been previously shut down).

If you do not want to run a Server instance, and you need persistence between tests in different processes, then you should use a file: database. From version 2.2.9, when the last existing connection to the database is closed, the latest changes to the database are persisted fully with fsync. The database is still in an open state until it is shut down. You can use the shutdown=true connection property to close the database automatically after the connections are closed. The automatic sync and shutdown are mainly for test environments. In production environments you should execute the SHUTDOWN statement before your application is closed. This ensures a quick start the next time you connect to the database.

An alternative option is to use the hsqldb.write_delay=false connection property, but this is slightly slower than the other option and should be used in situations where the test application does not close the connections. This option uses fsync after each commit. Even if the test process is aborted without shutting down the connections, all committed data is saved. It has been reported that some data access frameworks do not close all their connections to the database after the tests. In such situations, you need to use this option if you want the data to persist at the end of the tests.

You may actually want to use a file: database, or a server instance that serves a file: database in preference to a mem: database. As HyperSQL logs the DDL and DML statements in the .log file, this file can be used to check what is being sent to the database. Note that UPDATE statements are represented by a DELETE followed by an INSERT statement. Statements are written out when the connection commits. The write delay also has an effect on how soon the statements are written out. By default, the write delay is 0.5 second.

The SQL logging feature in version 2.2 and later records all executed statements and can be used for debugging your application.

Some types of tests start with a database that already contains the tables and data, and perform various operations on it during the tests. You can create and populate the initial database then set the property files_readonly=true in the .properties file of the database. The tests can then modify the database, but these modifications are not persisted after the tests have completed and the SHUTDOWN is executed.

Databases with files_readonly=true can be placed within the classpath and in a jar file. In this case, the connection URL must use the res: protocol, which treats the database as a resource.
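
For example (a sketch; the resource path inside the jar and the table name are assumptions), such a database is opened with a res: URL:

 // Minimal sketch: opening a read-only database packaged inside a jar on the
 // classpath. The resource path and table name are assumptions.
 import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.ResultSet;
 import java.sql.Statement;

 public class ResourceDatabase {
     public static void main(String[] args) throws Exception {
         try (Connection c = DriverManager.getConnection(
                 "jdbc:hsqldb:res:/dbfiles/refdata", "SA", "");
              Statement st = c.createStatement();
              ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM lookup_table")) {
             rs.next();
             System.out.println("rows: " + rs.getInt(1));
         }
     }
 }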

Tweaking the Mode of Operation

Different modes of operation and settings are used for different purposes. Some scenarios are discussed below:

Embedded Databases in Desktop Applications

In this usage, the amount of data change is often limited and there is often a requirement to persist the data immediately. The default write delay of 0.5 second is fine for many applications. You can also use the property hsqldb.write_delay_millis=100 to reduce it to 0.1 second, or the property hsqldb.write_delay=false to force a disk fsync after each commit. Before the application is closed, you should perform the SHUTDOWN command to ensure the database opens instantly the next time it is used. Note that you do not need to use SHUTDOWN COMPACT routinely.
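
A minimal sketch of such a setup (the file path under the user's home directory is an assumption): the write delay is reduced on the connection URL and the database is shut down before the application exits.

 // Minimal sketch: an embedded desktop database with a 0.1 second write
 // delay, shut down cleanly before exit. The path is an assumption.
 import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.Statement;

 public class DesktopDatabase {
     public static void main(String[] args) throws Exception {
         String url = "jdbc:hsqldb:file:/home/user/.myapp/appdb"
                    + ";hsqldb.write_delay_millis=100";
         try (Connection c = DriverManager.getConnection(url, "SA", "");
              Statement st = c.createStatement()) {
             // ... normal application work ...
             st.execute("SHUTDOWN");   // ensures a fast start on the next run
         }
     }
 }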

Embedded Databases in Server Applications

This usage involves a server application, such as a web application, connecting to an embedded HyperSQL instance. In this usage, the database is often accessed heavily, therefore performance and latency are considerations. If the database is updated heavily, the default value of the WRITE DELAY property (0.5 sec) is often enough, as it is assumed the server or the application does not go down frequently. If necessary, you can reduce the WRITE DELAY to a small value (20 ms) without impacting the update speed. If you reduce WRITE DELAY to zero, performance drops to the speed of the disk file sync operation.

Alternatively, a server application can use an all-in-memory database instance for fast access, while sending the data changes to a persistent, disk based instance either periodically or in real time.

Mixed Mode : Embedding a HyperSQL Server (Listener)

Since you won't be able to access in-process database instances from other processes, you will often want to run a Listener in your applications that use embedded databases. You can do this by starting up a Server or WebServer instance programmatically, but you could also use the class org.hsqldb.util.MainInvoker to start up your application and a HyperSQL Server or WebServer without any programming. MainInvoker is a general-purpose utility class to invoke the main methods of multiple classes. Each main class is followed by its arguments (if any), then an empty string to separate it from the next main class.

Example 12.2. MainInvoker Example

  java -cp path/to/your/app.jar:path/to/hsqldb.jar org.hsqldb.util.MainInvoker org.hsqldb.server.Server "" com.your.main.App


(Use ; instead of : to delimit classpath elements on Windows.) The empty string separates your com.your.main.App invocation from the org.hsqldb.server.Server invocation.

Specify the same in-process JDBC URL in your app and in the server.properties file. You can then connect to the database from outside using a JDBC URL like jdbc:hsqldb:hsql://hostname, while connecting from inside the application using something like jdbc:hsqldb:file:<filepath of database> .
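
As a sketch (the paths and the database alias are assumptions), the application uses the in-process URL while external clients use the server URL for the same database:

 // Minimal sketch of mixed-mode access: the embedding application uses the
 // in-process file: URL, while external clients reach the same database
 // through the Listener. Paths and the alias are assumptions.
 import java.sql.Connection;
 import java.sql.DriverManager;

 public class MixedModeUrls {
     public static void main(String[] args) throws Exception {
         // inside the application that also runs the Server
         try (Connection internal = DriverManager.getConnection(
                 "jdbc:hsqldb:file:/data/appdb", "SA", "")) {
             // ... application work ...
         }
         // from another process or machine, via the Listener
         try (Connection external = DriverManager.getConnection(
                 "jdbc:hsqldb:hsql://localhost/appdb", "SA", "")) {
             // ... external tool or client work ...
         }
     }
 }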

This tactic can be used to run off-the-shelf server applications with an embedded HyperSQL Server, without doing any coding.

MainInvoker can be used to run any number of Java class main method invocations in a single JVM. See the API spec for org.hsqldb.util.MainInvoker for details on its usage.

Server Databases

Running databases in a HyperSQL server is the best overall method of access. As the JVM process is separate from the application, this method is the most reliable as well as the most accessible method of running databases.

Upgrading Databases

HSQLDB can open databases created with version 2.0 and above. It is a good idea to perform SHUTDOWN COMPACT afterwards to complete the upgrade.

Downgrading is also possible. A database created with the current version can be closed with SHUTDOWN SCRIPT before you open it with previous 2.x versions of HyperSQL.

To upgrade an old database created with version 1.8.x, you can use HSQLDB version 2.3 to 2.5 to open the database and perform SHUTDOWN SCRIPT. You can then open the database with version 2.6 or later.

If the 1.8.x database script format is set to BINARY or COMPRESSED (ZIPPED), you must open the database with version 1.8.x and issue the SET SCRIPTFORMAT TEXT and SHUTDOWN SCRIPT commands with the old version, prior to version upgrade.

It is strongly recommended to execute SHUTDOWN SCRIPT after an automatic upgrade from previous versions.

A note about SHUTDOWN modes. SHUTDOWN COMPACT is equivalent to SHUTDOWN SCRIPT plus opening the database and then performing a simple SHUTDOWN.

After upgrading a database, there will be some changes to its settings. For example, the new SET FILES BACKUP INCREMENT TRUE is applied to improve the shutdown and checkpoint times of larger databases.

If your database has been created with version 1.7.2 or 1.7.3, first upgrade to version 1.8.0 and perform a SHUTDOWN SCRIPT with this version. You can then upgrade the database to version 2.x.

To upgrade from older version database files (1.7.1 and older) that contain CACHED tables, use the SCRIPT procedure. In all versions of HyperSQL, the SCRIPT 'filename' command (used as an SQL statement) allows you to save a full record of your database, including database object definitions and data, to a file of your choice. You can then use the PERFORM IMPORT ... statement to load the file.
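
As a hedged sketch of the export step (the paths and file names are assumptions, and older drivers may need explicit driver registration and a slightly different URL form), the SCRIPT statement can be executed from JDBC; the resulting SQL file can then be loaded into a database created with the new version as described above.

 // Minimal sketch: exporting a full SQL record of a database with the
 // SCRIPT statement. File names and paths are assumptions.
 import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.Statement;

 public class ExportOldDatabase {
     public static void main(String[] args) throws Exception {
         try (Connection c = DriverManager.getConnection(
                 "jdbc:hsqldb:file:/data/olddb", "SA", "");
              Statement st = c.createStatement()) {
             // writes object definitions and data as SQL statements to the file
             st.execute("SCRIPT '/data/export/olddb-dump.sql'");
             st.execute("SHUTDOWN");
         }
     }
 }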

Manual Changes to the *.script File

The *.script file contains SQL statements for the database settings and the creation of objects such as tables, sequences and user-defined functions. It also contains INSERT statements to populate MEMORY tables. A new copy of the *.script file is created by the database engine at each checkpoint or shutdown. This file is read when the database is opened. Only some types of SQL statements are used in this file; for example, no UPDATE or DELETE statements are used, and the statements in the file follow a certain sequence. Therefore, the *.script file cannot be edited freely by the user and any edits must respect the acceptable format.

In HyperSQL the full range of ALTER TABLE commands is available to change the data structures and their names. However, if an old database cannot be opened due to data inconsistencies, or it uses index or column names that are not compatible with 2.0, manual editing of the *.script file is possible and can be quicker than issuing ALTER statements.

  • Version 2.x does not accept duplicate names for indexes that were allowed before 1.7.2.

  • Version 2.x does not accept some table or column names that are SQL reserved keywords without double quoting.

  • Version 2.x does not accept unquoted table or column names which begin with an underscore, unless the connection property sql.regular_names is set false.

  • Version 2.x is more strict with check conditions and default values.

Other manual changes are also possible. Note that the *.script file must be the result of a SHUTDOWN SCRIPT and must contain the full data for the database. The following changes can be applied so long as they do not affect the integrity of existing data.

  • Names

    Names of tables, columns and indexes can be changed. These changes must be consistent regarding foreign key constraint references.

  • CHECK

    A check constraint can always be removed.

  • NOT NULL

    A not-null constraint can always be removed.

  • PRIMARY KEY

    A primary key constraint can be removed. It cannot be removed if there is a foreign key referencing the column(s).

  • UNIQUE

    A UNIQUE constraint can be removed if there is no foreign key referencing the column(s).

  • FOREIGN KEY

    A FOREIGN KEY constraint can always be removed.

  • COLUMN TYPES

    Some changes to column types are possible. For example an INTEGER column can be changed to BIGINT.

  • INSERT Statements

    INSERT statements may be added to the file in the same format as written by the engine.

  • Character Escapes

    All non-ASCII characters are escaped as Java Unicode escape sequences.

After completing the changes and saving the modified .script file, you can open the database as normal.

Backward Compatibility Issues

HyperSQL 2.7 conforms to the SQL Standard better than previous versions and has many more features. For these reasons, there may be some compatibility issues when converting old databases, or when using applications that were written for version 1.8.x or earlier. Some of the potential issues (and enhancements) are listed here. See the full list of connection properties for alternatives.

  • By default, when comparing strings, the shorter string is padded with spaces. This has an effect on comparing 'test' and 'test ' which are now considered equal, despite the length difference. This behaviour is controlled by the default PAD SPACE property of collations, which can be changed to NO PAD. See the statement SET DATABASE COLLATION <name> [ PAD SPACE | NO PAD ].

  • User names and passwords are case-sensitive. The only exception is the user name 'SA' which is always converted to uppercase. Check the .script file of a database for the correct case of user name and password and use this form in the connection properties or on connection URL.

  • It is now possible to specify the admin user name and password for a new database (instead of SA and the empty password).

  • HyperSQL 2.x has several settings that relax its conformance to the SQL Standard in the areas of type conversion and object names. These settings can be turned on for maximum conformance.

  • Check constraints must conform to the SQL Standard. A check constraint is rejected if it is not deterministic or retrospectively deterministic. When opening an old database, HyperSQL silently drops check constraints that no longer compile. See under check constraints for more detail about what is not allowed.

  • Type declarations in column definition and in cast expressions must have the necessary size parameters.

  • In connection with the above, an old database that did not have the enforce_strict_size property is now converted to version 2.x with the engine supplying the missing size parameters. For example, a VARCHAR column declaration that has no size is given a 32K size, and a LONGVARCHAR column is given a 16MB size. Check that these sizes are adequate for your use, and change the column definition as necessary.

  • Column names in a GROUP BY clause were previously resolved to the column label. They are now resolved to column name first, and if the name does not match, to the column label.

  • If two or more tables in a join contain columns with the same name, the columns cannot be referenced in join and where conditions. Use table names before column names to qualify the references to such columns. The SET DATABASE SQL REFERENCES { TRUE | FALSE } statement enables or disables this check.

  • If the unqualified wild card is used, as in the statement SELECT * FROM ..., no additional column references are allowed. A table-qualified wild card allows additional column references in the SELECT list.

  • Table definitions containing GENERATED BY DEFAULT AS IDENTITY but with no PRIMARY KEY do not automatically create a primary key. Database .script files made with 1.8 are fine, as the PRIMARY KEY clause is always included. But the CREATE TABLE statements in your application program may assume an automatic primary key is created. The old shortcut, IDENTITY, is retained with the same meaning. So CREATE TABLE T (ID IDENTITY, DAT VARCHAR(20)) is translated into CREATE TABLE T(ID INTEGER GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY, DAT VARCHAR(20)). This last form is the correct way of defining both auto-increment and primary key in versions 1.8 and 2.x.

  • CREATE ALIAS is now obsolete. Use the new function definition syntax. The org.hsqldb.Library class no longer exists. You should use the SQL form of the old library functions. For example, use LOG(x) rather than the direct form, "org.hsqldb.Library.log"(x).

  • The names of some commands for changing database and session properties have changed. See the list of statements in this chapter.

  • Computed columns in SELECT statements which did not have an alias: these columns had no ResultSetMetaData label in version 1.8, but in version 2.x, the engine generates labels such as C1, C2.

  • The issue with the JDBC ResultSetMetaData methods getColumnName(int column) and getColumnLabel(int column) has been clarified by the JDBC 4 specification. getColumnName() returns the underlying column name, while getColumnLabel() returns any specified or generated alias. HyperSQL 1.8 and 2.x have a connection property, get_column_name, which defaults to true in version 2.x, but defaulted to false in some releases of version 1.8.x. You have to specify this property as false explicitly if you want the non-standard behaviour of getColumnName() returning the same value as getColumnLabel().

HyperSQL Dependency Settings for Applications

Java Module Dependency and Dependency settings using Gradle, Ivy, Maven, Groovy

HyperSQL and SqlTool version 2.7.3 jars are Java Module jars compiled with JDK 11. The module name for HyperSQL is org.hsqldb and for SqlTool, org.hsqldb.sqltool. In a modular application, these names are referenced in the module-info.java file for the user application that accesses these jars.

For non-modular applications, the jars are simply included in the classpath. Jars compiled with JDK 8 are also provided and can be used for applications that must run on a JVM older than 11.

This section is about building applications that have dependencies upon HyperSQL, and for executions that use a dependency library system. Examples of the second type are unit test runs, job runs triggered by a build system, or systems like Grape that pull libraries from the network at end-user run time.

What version to Pull

The best option for most developers is to depend upon the latest public version of HyperSQL with a range pattern like [2,). Here are exceptional cases where you should depend on a static version.

  • Your application has code dependencies upon version-specific details of the HyperSQL distribution. In this case, the specific dependency specification should be checked in to your source code control system alongside the code that manifests the version-dependency. If your code is enhanced to use a newer version of HyperSQL, you should update the version specification so that whenever code + configs are checked out, the dependency will always match the code.
  • Your organization only allows the use of vetted libraries. In this case, you vigorously maintain your configurations, updating your dependencies and regression testing as soon as new versions of HyperSQL are vetted. To get the best performance and reliability from HyperSQL, you should urge the appropriate parties to vet new versions as soon as they are publicly released.
  • You need precisely reproducible builds.

If none of these situations apply to you, then follow the suggestions in the appropriate sections below. If you need to specify a specific version, follow the instructions in the range-versioning section but change the version range specifications to literal versions like 2.7.3.

Range Versioning

Keeping up-to-date with Range Dependencies

[Note]Limitation of Maven Version Range Specifiers

Note that Ivy (and the many systems that use Ivy underneath, like Grape and Gradle) supports the opening exclusive ] in addition to [, whereas Maven supports only the opening inclusive [ specifier. See the relevant Ivy or Maven documentation for details. There are special cases where you should depend on a specific version instead.

Range Dependency Specification Examples

[Important]Important

For all examples below, when a range pattern is given, it means the latest version equal to or greater than version 2. If a classifier is shown, it is optional and you can skip it to get the default (no-classifier) jar.

Example 12.3. Sample Range Ivy Dependency

 <dependency org="org.hsqldb" name="hsqldb" rev="[2,)" conf="j8->default"/>

I give no example here of specifying a classifier in ivy.xml because I have so far failed to get that to succeed. Classifiers in ivy.xml are supported if you are using Gradle, as covered below.

At the time that I'm writing this, our builds are providing two classifiers (besides the default no-classifier, of course) named "debug" and "jdk8". In all examples for using a classifier I am using classifier name "debug".


Example 12.4. Sample Range Maven Dependency

See note above about Maven range specifications.

 <dependency>
   <groupId>org.hsqldb</groupId>
   <artifactId>hsqldb</artifactId>
   <version>[2,)</version>
   <!-- Scope defaults to "compile":
   <scope>test</scope>
     Use a classifier to pull one of our alternative jars:
   <classifier>debug</classifier>
   -->
 </dependency>

Example 12.5. Sample Range Gradle Dependency

 dependencies.compile 'org.hsqldb:hsqldb:[2,):debug'
 dependencies {
     runtime 'org.hsqldb:hsqldb:[2,):debug@jar',
             'org.hsqldb:sqltool:[2,):debug@jar'
 }

If you want to use an ivy.xml file with a Gradle build, you will need to use the Ivyxml Gradle Plugin. It takes just a few lines of code in your build.gradle file to hook in Ivyxml. See the Ivyxml documentation for exactly how.

Example 12.6. Sample Range ivy.xml loaded by Ivyxml plugin

 <ivy-module version="2.0" xmlns:m="http://ant.apache.org/ivy/maven">
 ...
 <dependency org="org.hsqldb" name="hsqldb" rev="[2,)" m:classifier="debug"/>


Example 12.7. Sample Range Groovy Dependency, using Grape

@Grab('org.hsqldb:hsqldb:[2,):debug')


$Revision: 6752 $