Thursday, August 24, 2017

Code Generation for Lookup Tables - Part 1


In my last post, I outlined some of my thoughts about lookup tables, and how they should be structured and organized. Now I want to write some code generators, to make it easier to create and manage lookup tables.  Since I want lookup tables to have the same columns and follow some naming conventions, why should I write the same basic code every time I create a new lookup table?  While my thoughts about lookup tables should probably apply to just about any relational database, stored procedure code is not as portable.  I work in an Oracle database, so all the code I'm going to show you is written in PL/SQL.  But you may be able to do something similar in the procedural language for your database.

An Index to the Lookup Tables

One thing I like to have is a table that stores a list of all the lookup tables in the schema.  This isn't absolutely necessary if you use a naming convention for your lookup table names, since you could instead write a view over a catalog view such as Oracle's USER_TABLES.  However, I'm going to create a real table named LOOKUP_TABLES:
CREATE TABLE lookup_tables (
  table_name   VARCHAR2(30) CONSTRAINT lookup_tables_pk PRIMARY KEY,
  description  VARCHAR2(255)
);
Note the primary key on TABLE_NAME – the generator below relies on it to detect duplicate lookup tables.

So my code generator will insert a row into this table whenever I create a new lookup table, and delete that row whenever I drop one.  Once your lookup tables are in use and foreign keys reference them, you probably won't drop any, but you might during development, so I'll include a drop procedure.
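For comparison, the catalog-view alternative mentioned above could be sketched like this (the view name is hypothetical, and the LKUP_ prefix anticipates the naming standard from my last post; note there is nowhere to keep a description this way):

```sql
-- A sketch of the naming-convention-only alternative: no real table
-- to maintain, but no room for a description, either.
CREATE OR REPLACE VIEW lookup_tables_v AS
  SELECT table_name
    FROM user_tables
   WHERE table_name LIKE 'LKUP\_%' ESCAPE '\';
```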

The Code Generator Package

In PL/SQL, I tend to write packages for all of my stored procedures, with few or no stand-alone functions or procedures.  While I'm going to show the procedures separately, you may assume that they are all inside a package named LOOKUP_UTIL.  There are two special characteristics of LOOKUP_UTIL.
In all of my databases, I have created a schema named COMMON to hold database objects that are shared among many application schemas and users.  Most of these have EXECUTE or SELECT granted to PUBLIC, and many have public synonyms.  Since many applications may generate lookup tables, LOOKUP_UTIL properly belongs in the COMMON schema.  By the way, some lookup tables might be shared between applications – my lookup table for U.S. States and Territories would be a prime example – and those might also be best created in COMMON.
The second characteristic of LOOKUP_UTIL is that it is an invoker's rights package, using the "AUTHID CURRENT_USER" clause.  That means it executes with the database privileges of the user (or schema) that called it.  Since it is going to create tables and other database objects, it should do that only in the current user's schema, using privileges granted to that user, not to COMMON, the owner of the package.  Many of the packages in the COMMON schema share this characteristic.

Creating and Dropping Lookup Tables

Here is the code for creating a new lookup table.  It creates the table with the standard columns I explained in my previous post and adds a row to LOOKUP_TABLES.  The description stored in LOOKUP_TABLES is also used as the table comment.  There are a few things here for demonstration purposes that might not be in real code.  For instance, I am using DBMS_OUTPUT to show the generated code in case it gets an error upon execution.  In real code, these would probably be calls to a logging package.  I use LOGGER (https://github.com/OraOpenSource/Logger), and I highly recommend it.  If you have users who create lookup tables in schemas other than their own, with the CREATE ANY TABLE privilege, you may want to add an "owner" parameter.
PROCEDURE new_lookup_table (
  table_name        lookup_tables.table_name%TYPE,
  description       lookup_tables.description%TYPE DEFAULT NULL,
  tablespace_name   user_tablespaces.tablespace_name%TYPE DEFAULT NULL
)
AS
  v_table_name  lookup_tables.table_name%TYPE;
  create_cmd    VARCHAR2 (2000);
BEGIN
  IF table_name IS NULL THEN
    raise_application_error (
      -20000,
      'Table name of new lookup table may not be null.'
    );
  ELSIF UPPER (table_name) LIKE 'LKUP_%' THEN
    v_table_name  := UPPER (table_name);
  ELSE
    v_table_name  := 'LKUP_' || SUBSTR (UPPER (table_name), 1, 25);
  END IF;

  BEGIN
    INSERT INTO   lookup_tables (table_name, description)
         VALUES   (v_table_name, description);
  EXCEPTION
    WHEN DUP_VAL_ON_INDEX THEN
      raise_application_error (
        -20002,
        'Lookup table named ' || v_table_name || ' already exists.'
      );
  END;

  create_cmd      :=
       'CREATE TABLE '
    || v_table_name
    || '(
  code              VARCHAR2(10) NOT NULL,
  short_description VARCHAR2(30),
  description       VARCHAR2(255),
  display_sequence  NUMBER,
  active            CHAR(1) DEFAULT ''Y'' NOT NULL,
  CONSTRAINT '
    || SUBSTR (v_table_name, 1, 27)
    || '_PK PRIMARY KEY (code))'
    || '
ORGANIZATION INDEX'
    || CASE
         WHEN tablespace_name IS NOT NULL THEN
           '
TABLESPACE ' || tablespace_name
         ELSE
           NULL
       END;

  BEGIN
    EXECUTE IMMEDIATE create_cmd;
  EXCEPTION
    WHEN OTHERS THEN
      DBMS_OUTPUT.put_line (create_cmd);
      RAISE;
  END;

  -- Double any single quotes in the description so the generated
  -- COMMENT statement stays valid.
  EXECUTE IMMEDIATE
       'COMMENT ON TABLE ' || v_table_name || ' IS '''
    || REPLACE (description, '''', '''''')
    || '''';
END new_lookup_table;
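As a usage sketch (the ORDER_STATUS lookup and its description here are hypothetical), a call like this would create LKUP_ORDER_STATUS as an index-organized table and register it in LOOKUP_TABLES:

```sql
BEGIN
  lookup_util.new_lookup_table (
    table_name  => 'order_status',                       -- becomes LKUP_ORDER_STATUS
    description => 'Valid statuses for customer orders'  -- also the table comment
  );
END;
/
```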

The code for dropping a lookup table is similar.  Note that my version includes the CASCADE CONSTRAINTS clause, so that if there are foreign keys referencing the lookup table being dropped, those constraints will be dropped too.  You might prefer not to use this clause and allow the drop to fail if there are references to the table.
PROCEDURE drop_lookup_table (table_name lookup_tables.table_name%TYPE)
IS
  v_table_name  lookup_tables.table_name%TYPE;
  drop_cmd      VARCHAR2 (2000);
BEGIN
  IF table_name IS NULL THEN
    raise_application_error (
      -20001,
      'Table name of lookup table to be dropped may not be null.'
    );
  ELSIF UPPER (table_name) LIKE 'LKUP_%' THEN
    v_table_name  := UPPER (table_name);
  ELSE
    v_table_name  := 'LKUP_' || SUBSTR (UPPER (table_name), 1, 25);
  END IF;

  -- v_table_name is already upper case, so no UPPER() is needed on the
  -- column (which would also defeat the primary key index).
  DELETE FROM   lookup_tables
        WHERE   table_name = v_table_name;

  IF SQL%ROWCOUNT = 0 THEN
    raise_application_error (
      -20003,
         'Table named '
      || v_table_name
      || ' does not exist in the lookup table catalog.'
    );
  END IF;

  drop_cmd  := 'DROP TABLE ' || v_table_name || ' CASCADE CONSTRAINTS';

  BEGIN
    EXECUTE IMMEDIATE drop_cmd;
  EXCEPTION
    WHEN OTHERS THEN
      DBMS_OUTPUT.put_line (drop_cmd);
      RAISE;
  END;
END drop_lookup_table;

Next Time

That's enough generation code for now.  I'll continue in my next post with some reasons for simulating a single table of tables with a view.  I'll show you the code for generating that view, and for generating an API to make the view updatable.

Tuesday, August 8, 2017

Lookup Tables Revisited

More than ten years ago, I wrote a paper and did an ODTUG presentation about reference tables.  These are also known as lookup tables or code tables.  Some of what I wrote then I would still say today, but other parts were probably wrong then and definitely wrong now.  So, I thought I'd revisit the topic and try to lay out what I think about this kind of data today.
The most common use of a lookup table is presenting users with a dropdown or list of values (LOV) page for setting a value in the application.  So in its simplest form, it is a table with just two columns: a code, which is the primary key, and a description to be shown to the user in the list of values.  For instance, you might have a lookup for the States of the United States, where the U.S. Postal Service's two-character official State abbreviation is the code.  The row for my home State of Virginia would contain CODE=VA and DESCRIPTION=Virginia.  In my ADDRESS table, I might have a column called STATE that would be set from this lookup.
Here are a few principles that I want reference/lookup/code tables to follow.  These are not hard and fast rules, and I’m sure there are some valid exceptions, but unless I can define a specific exception for a particular use case, I like to follow these.

Lookups as Foreign Keys

When you create a table with a column to be set from a lookup table, define a foreign key (FK) constraint.  That means that lookup tables need to be actual tables.  This is a major departure from my former opinion that there should be a single lookup table that has a key to subdivide it into separate lookups.  There are some nice advantages to the single lookup table, but I am currently working with an application that has one, and the disadvantages outweigh the advantages.  In my next post, I’ll talk about some ways to get the best of both worlds, but we start with one table for each list of values.
If you create the FK constraint, you guarantee that the value MUST come from the lookup table.  No other value will work.  This is important for validating your data.  It gives front-end developers information about which lookup table to use for their list of values.  And it also provides information to code generators, like the ones for generating JPA Entity objects in Eclipse and NetBeans, or the one for generating ADF Business Components in JDeveloper.  Some front-end developers might complain that this removes the flexibility for users to enter values that are not in the lookup tables.  My feeling is that you can give users that flexibility, if you include the ability to add a new value to the lookup table as part of your user interface.  That way the FK is still satisfied, and if one user needed a value that wasn’t in the table, probably other users need that value too.
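For instance, the STATE column of my ADDRESS table from above would get a constraint something like this (LKUP_STATE follows the naming standard discussed below; the constraint name is just my convention):

```sql
ALTER TABLE address ADD CONSTRAINT address_state_fk
  FOREIGN KEY (state) REFERENCES lkup_state (code);
```

With this in place, no value can go into ADDRESS.STATE that isn't a CODE in LKUP_STATE.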

Structure of Lookup Tables

Just because each lookup gets its own table doesn't mean that each lookup table gets a distinct structure.  Instead, I want every lookup table to have the same columns in the same order.  Beyond the basic CODE and DESCRIPTION columns, I like to add:
  • DISPLAY_SEQUENCE – the order in which this value is to appear in a dropdown or LOV.  There might be a unique key constraint on this column, but there doesn’t need to be.  In my State lookup table, I use the U.S. government’s numeric state codes, like 51 for Virginia, for this field.
  • ACTIVE – a Boolean value that says whether this value is currently available for use in a dropdown or LOV.  In my databases, this is a ‘Y’ for yes or ‘N’ for no, but you may prefer 1 or 0, and some non-Oracle databases allow an actual boolean true or false.  In any case, the default for ACTIVE is true.  This column is needed because of the FKs pointing to the lookup table.  As time goes on, you may no longer need a value in the table, but there may still be old data that references it, so you can’t delete rows from the lookup table.  You need a way to remove a value from your user interface without actually deleting it.
  • Sometimes, I need a short DESCRIPTION for dropdowns, but a longer one for LOVs and reports. So I often add a SHORT_DESCRIPTION column.
While I’m a big fan of surrogate primary keys for most tables in an application, I prefer to make CODE a short natural key.  It is usually a character field.  For instance, take my example of ‘VA’ as the code for Virginia in my STATE lookup.  When CODE is meaningful, you can look at the data table that references the lookup, and know what it is referencing without necessarily having to look at the lookup table.  This saves a lot of time.  This is not possible with surrogate keys because they are by definition meaningless.  It also helps when you move the lookup table from database to database – I’ve seen bad things happen when new surrogate keys get generated by accident in the new database.
Lookup tables are usually fairly small.  Notice that in my standard definition, I have five columns at most.  My State lookup table, with about 60 rows (it includes U.S. Territories as well as States), is larger than most – they average about 10 rows.  Access to a lookup table is almost always by CODE, the primary key.  In Oracle, this makes lookup tables ideal candidates to be index-organized tables, tables where the data is stored in the primary key index instead of in separate storage.  Less I/O means faster access.
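Putting the standard structure and the index-organized storage together, my State lookup table could be created by hand like this (a sketch; the column sizes match what my generator uses in the next post):

```sql
CREATE TABLE lkup_state (
  code              VARCHAR2(10) NOT NULL,
  short_description VARCHAR2(30),
  description       VARCHAR2(255),
  display_sequence  NUMBER,
  active            CHAR(1) DEFAULT 'Y' NOT NULL,
  CONSTRAINT lkup_state_pk PRIMARY KEY (code)
)
ORGANIZATION INDEX;

-- 51 is the U.S. government's numeric code for Virginia, as noted above;
-- ACTIVE takes its default of 'Y'.
INSERT INTO lkup_state (code, short_description, description, display_sequence)
VALUES ('VA', 'Virginia', 'Virginia', 51);
```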

Naming Standards

It doesn’t matter much what your naming standard is, but it does matter that you have one.  One standard that I've seen work well is to prefix all lookup tables with the same prefix, like "REF_" or "LKUP_".  That makes it obvious that this is, in fact, a lookup table, and will make them all sort together in lists of your tables.  Standard naming is especially useful if you want to write some code generators, which I'll talk about in my next post.  I also want standard names for columns, but occasionally, I'll write a VIEW of a lookup table just so that I can give some different names to columns.  Remember that the DISPLAY_SEQUENCE of my State lookup table is being set to the numeric code for the State, and the CODE is the Postal Abbreviation.  So I write:
CREATE VIEW state_codes (postal_abbrev, state_code, state_name) AS
  SELECT code, display_sequence, description
    FROM lkup_state
   WHERE active = 'Y';
This is a little easier for the developers, but still keeps the underlying standard naming.  Since it is a view of just one table, and every NOT NULL column without a default is included, you can even do DML on the view.
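For example, an insert through the view reaches LKUP_STATE directly, and ACTIVE quietly takes its default of 'Y' (72 is Puerto Rico's numeric code, following the same government numbering as 51 for Virginia; verify against your own data):

```sql
INSERT INTO state_codes (postal_abbrev, state_code, state_name)
VALUES ('PR', 72, 'Puerto Rico');
```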

Lookup Data as Source Code

Finally, lookup tables should normally be populated as part of creating the application database.  There would normally be no such thing as an empty lookup table.  Therefore, I usually consider the data for the lookups as much a part of the source code for the application as the DDL for creating the database.  That means that lookup data should be in a source code version control system such as Git or Subversion – whatever you use for the rest of your source code.  But this poses a special problem – lookup data is not static.  New rows are added occasionally, formerly ACTIVE rows are updated to be inactive, and sometimes the descriptions change.  It isn't just the developers who make these changes.  You will often give certain users the ability to maintain the lookup tables.  So how do you check those changes into your source code repository?
One way is to run a regular job that looks for changes, exports the data in a known standard form into a working copy of your repository, and then commits the changes.  Or you could be a little more sophisticated and use a trigger to queue the job only when it is needed.  Some people don't actually store lookup tables in the database.  Instead they use external tables, where the actual data is in files outside of the database.  If those files are in a working copy, you can make changes and commit them as you would any other source code.  As long as you are aware that lookup data is part of the source code, you will come up with a way to handle it that works for you.
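One simple sketch of the export step: generate INSERT statements for a lookup table with a query like this, spool the output into a file in your working copy, and commit it (this sketch assumes no single quotes in the descriptions; escape them in real code):

```sql
SELECT    'INSERT INTO lkup_state (code, short_description, description, '
       || 'display_sequence, active) VALUES ('''
       || code || ''', ''' || short_description || ''', '''
       || description || ''', ' || display_sequence || ', ''' || active || ''');'
          AS insert_stmt
  FROM lkup_state
 ORDER BY code;
```

Because the output is ordered and in a fixed format, two exports of the same data are identical, so version control diffs show only real changes.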

Conclusion

Lookup tables are an important part of your application database.  I hope that my re-thinking about them has helped you too.  Next time, I'll have a little code for generating an updatable view which can help your developers create a user interface for maintaining your lookup tables.

Friday, June 16, 2017

Update on Reading your ERD

This is just a quick update to my earlier post Reading Entity Relationship Diagrams to Users.  In it, I talked about how to read ERDs so users can understand them and how to use this to draw out more information about their entities and particularly, relationships.

These ideas are not original with me.  I first learned them as part of my education in Oracle's CASE*Method, as implemented in Oracle's old product, Oracle Designer.  The data modeling tools, like Oracle SQL Developer Data Modeler, that can show Barker notation for ERDs are also drawing from this way of thinking about data modeling.  Richard Barker was the creator of CASE*Method and led the team that created Oracle Designer.

But I am especially indebted to Mike Lynott, formerly a consultant to Oracle, and currently Customer Solutions Architect at eprentise (according to LinkedIn).  It was his white paper, Why Do We Say "Each"? that inspired my thinking on the subject.  I finally located my copy of this paper.  I wish I could give you a link to go read it yourself.  It is a little old, but still very relevant to what RDBMS designers do every day.

Thursday, May 18, 2017

Javascript Frameworks

Continuing on with a few opinions about UI design, development and implementation ...



So what is "cool" now?  Javascript frameworks like Angular, Ember, React and Oracle's own JET. These are great, and they have their place, I'm sure.  The argument is that since these frameworks use the power of the user's browser, they can be more responsive.  There is less communication between the server and the client, which should also speed things up. They can do exactly what the designer wants them to do - no component framework getting in the way, wanting to do things its way.  They are open source for the most part, which frees us from Oracle's tyranny.  And working in HTML and CSS can let you use graphic designers and their toolboxes, like DreamWeaver.

The first problem I have with Javascript frameworks is that this is a return to code it yourself.  The tooling builds some of the code as sample files, where you take some prebuilt code and tweak it.  For the most part, however, you have to write the HTML, the CSS, the Javascript.  Fortunately, there is some help in the libraries, like JQueryUI, with some precoded components, and lots of sample code.  But I miss JDeveloper's ability to drag a data control onto a page, say you want to drop it as a bar graph, do a little configuration, and presto.

Tooling and IDEs

Of course, in some respects, this is a problem with tooling rather than with the framework.  I suppose that if I could point Eclipse or Netbeans at a REST service and have it generate the code for say, an Angular 2 service module that accesses the service, I might feel differently.  Or if I could generate a first-cut form to do CRUD with that service, I'd feel better about these frameworks.  I also get the feeling that developers WANT to code it, and hate handing over part of the job to someone else, especially with something like ADF, where you can't easily see the source code.

The proponents say, "but you can make it look EXACTLY the way you want."  True, but at what cost? Can your users wait while you do it?  But you say "It's Agile, we'll do 80% now and get it closer to 100% with each iteration."  And I reply, "I can give you 98% in the time it takes you to do 80%, and do the users care if I can't get to 100%?"  If they DO care, guess what, ADF 12.2.1 uses JSF 2.2 (and so do the latest versions of other Faces libraries), and you CAN code in HTML5/CSS/Javascript what you can't do with JSF, yet still interact with the pure JSF parts. Someday, I'd love to do a contest - who can meet the user's requirements fastest.

Responsiveness and Speed

Another advantage claimed for these environments is that most of the processing takes place on the user's computer and interacts directly with the browser.  Since the application doesn't have to interact with the server as much, it runs faster.  And on the server side, it isn't as complicated and doesn't require as many server resources.  All the server is doing is downloading files to the client and running REST services.  What's more, the browser usually caches files that it has already downloaded, so if a file hasn't changed, it will use its local copy and won't go to the server for a fresh one.  Please note that this can be a disadvantage for mobile devices with slow connections and limited storage: there is a lot more code to be downloaded and stored.

The honest truth is that in ADF Faces many of the Rich Faces components are rendered using Oracle JET, so they have all the responsiveness of this Javascript library, except that I didn't have to code it.  Similarly, PrimeFaces renders with JQuery and JQueryUI.  Because PrimeFaces does this, it can use JQuery tools to create themes.  ADF Faces uses skinning through CSS-like files and the basis for its skins is in the open source Apache Trinidad Faces library.  Of course, this means that JSF frameworks can be subject to some of the same problems as the Javascript frameworks that they use for rendering.

Testing and Security

One problem that I have with Javascript frameworks is that all of the source code is delivered to the user's browser.  Javascript is interpreted code, which means that some bugs won't show themselves until runtime.  Compilers can catch some problems up front, which can speed development.  I should note that Angular 2 & 4 are usually written in Typescript which compiles to Javascript - a little of the best of both worlds.  But the biggest problem I have with this is the advantage it gives to hackers.  When a hacker can see ALL of the source code, including code that is intended to keep bad data out of the database, he/she can re-write it to bypass your safeguards.  If you are going to write REST services for your Javascript framework to use, you need to make sure that you have server-side protection.  Never accept data from a browser without validating it at the server.  Don't assume that the Javascript code that you wrote will be executed unchanged.

So there I am – an old developer who found a great way to develop and doesn't want to give it up.  I'm not against the Javascript frameworks, and if you want to build applications that way, good for you.  I'll do my best to support you.  I know enough HTML and CSS to be useful, and even some Javascript.  As a long-time database advocate, SQL and PL/SQL expert and developer, I'll write good solid secure REST services for you.  Does that make me obsolete?  I don't believe so.  I'm not quite ready to leave you without my help and advice yet.

Monday, May 15, 2017

User Interfaces - a Personal History

I've been doing user interfaces (UI) for a long time.  I hate to say how long, because I'm looking for a new job, and I'm afraid that age discrimination is very real.  And I'm also aware that behind my back some of my younger colleagues think that I'm set in my ways, behind the times, and in short, obsolete.

A little history

To be sure, my first user interfaces were a sort of template for users to enter data into a fixed record format file.  The only validations were to designate fields as alphanumeric or numeric.  My next applications let me draw forms on a terminal, and that gave me the ability to write code (in COBOL of all things) to validate and then write data to data files.  I graduated from there to Oracle Forms.  What a difference!  Now I could point my tool at a relational table, even several tables in master/detail relationships, and it would generate a first cut at forms to update those tables.
Oddly, my next step was a step back - I was working in HTML and once again had to develop my own forms and write my own code to process the input, decide which tables to update, and choose what to feed back to my users.  At least PL/SQL is an easier language for that than COBOL.  In about 2000, it was nice to get hold of Oracle Designer and its web generation capability, which would create forms for me.  But then I lost a lot of control over what the form looked like - unless I wanted to take COMPLETE control and not be able to regenerate the form from Designer.

Application Development Framework

Which brings me to Oracle Application Development Framework (ADF).  ADF Faces, the view and controller part of ADF, is based on JavaServer Faces (JSF), which is part of the Java Enterprise Edition (JEE) standard.  Oracle JDeveloper, the primary IDE for ADF, can generate first cuts at forms, but not just forms: ADF can do tables, graphs, hierarchies, trees, calendars and other data visualizations.  It can drill down.  It can navigate from page to page, and this flow of control is shown graphically, so that even a user looking at it can get an idea of how it is going to work.
A lot of the components have built-in functionality that I don't have to code.  For instance, an ADF table component can let users reorder columns, re-sort, and filter, with no code.  There are tabbed layouts, split screens (vertically or horizontally), and accordion layouts.  There are popups and quite a bit of control over when a popup is displayed and when hidden.  Some layout components are especially designed to rearrange themselves according to the size of the screen - showing differently on a phone than on a desktop for instance. The newest version of ADF can inform the application about the screen dimensions so that the application can decide what components to show.
ADF can do single-page applications, where the header, main menu and footer are all static and only the content in the middle changes.  It can do AJAX, changing only the parts of the page that need refreshing.

Other JEE/JSF Frameworks

One of the arguments against Oracle ADF is that it is Oracle.  And it is not open source – if you are on a support contract, you can get the source code, but you will need to sign a non-disclosure agreement and you can't change it.  Many people would prefer to use open source software only, and avoid depending on Oracle.

I can't argue with the advantages of going the open source route.  ADF Essentials lets me use ADF for free, and I can use an open source JEE application server.  And I've written an ADF application against an open source database, so we don't HAVE to use Oracle's expensive RDBMS.  But even ADF Essentials is NOT open source, and it doesn't do everything the non-free version can do on the non-free Oracle WebLogic Server.  We can get away from ADF BC and use an open source JPA implementation - and if you write ADF Essentials applications with Eclipse, you probably will do that.  To get away from the ADF Controller and ADF Faces, you can use the base JSF controller and an open source JSF library like PrimeFaces.  But in doing this, you will lose some of the wonderful tooling that JDeveloper has for code generation.  Oracle Enterprise Pack for Eclipse does have pretty good tools for developing ADF Essentials applications, with EclipseLink for the Model and ADF Faces for the View and Controller.

I have also used some of the tools in Eclipse for developing PrimeFaces applications, and they are better than using the Javascript frameworks, but not as good as ADF Faces in JDeveloper.  In my next post, I'm going to talk about the Javascript frameworks.

Tuesday, May 2, 2017

Generating REST from the Oracle HR Schema



More and more web applications are being written with HTML, CSS, and (especially) Javascript.  There are a good many frameworks out there – like Angular2 (4?) and EmberJS – for doing this.  The goal of these frameworks is to have the client, the user's web browser, do most of the work, rather than the server, as in my JEE/JSF-based Oracle Application Development Framework (ADF) applications.  They also want to give the user a more responsive and eye-pleasing interface.  But ultimately, they do have to talk to the database and update the data that is shared with other users.  For this, they tend to use web services, particularly Representational State Transfer (REST) services, which take advantage of capabilities of the Hypertext Transfer Protocol (HTTP) beyond what is usually used to serve pages over the World Wide Web.

ADF Business Components as REST

I've been playing with various methods to expose my database as REST services, and I've been using the HR schema that comes as a sample schema with every Oracle database as my test platform.  Since I'm primarily an ADF developer right now, the first thing I tried was ADF 12.2.2's greatly improved ability to expose ADF Business Components (ADF BC) developed from the database as REST services.  This works quite well, and is very nice for its ability to use your customizations and View Criteria.  But I'm especially interested in using ADF Essentials and deploying to open source application servers like Glassfish.  And as far as I can determine, REST Web Services from ADF BC is not part of ADF Essentials, so you can only deploy these to Oracle WebLogic Server.

Oracle REST Data Services

Another alternative is to use Oracle REST Data Services (ORDS), which can be deployed to most JEE application servers, including Glassfish and Apache Tomcat.  ORDS even contains an embedded Jetty application server, so it can be run standalone for light use.  While ORDS is not open source, it is free to use as part of your Oracle RDBMS license.  ORDS works fine, and you can use SQL Developer to expose your tables through REST on ORDS.  I'm not totally comfortable with exposing tables directly, so I would suggest using a three-schema architecture and exposing updatable views based on your tables, rather than the tables themselves.  I did have a little trouble with ORDS configuration, however, and I need to work on this some more.  I will probably write another post about my experiences with ORDS.
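As a sketch of that idea (the view name and column choice are hypothetical): an API schema gets a simple updatable view over the base table, and that view, not the table itself, is what gets REST-enabled in ORDS:

```sql
-- In the API schema: an updatable single-table view to expose via ORDS,
-- instead of HR.EMPLOYEES itself.  Sensitive columns like SALARY are
-- simply left out.
CREATE OR REPLACE VIEW employees_api AS
  SELECT employee_id, first_name, last_name, email, hire_date,
         job_id, department_id
    FROM hr.employees;
```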

JAX-RS – Java REST Services from JPA

Java web developers tend to use the Java Persistence API (JPA) and one of its implementations, like Hibernate or EclipseLink, to do Object-Relational Mapping (ORM) from their relational database to Java objects.  Even ADF developers who write their applications in Eclipse with ADF Essentials, instead of JDeveloper, will usually use JPA instead of ADF BC.  And there is a standard API called JAX-RS for exposing JPA entity classes as REST services.  The nice thing is that both the Eclipse IDE and the NetBeans IDE have plug-ins that can generate JPA entity classes from tables in the database, and can even generate the code for exposing these as REST services.  I've tried both, and while I prefer NetBeans, Eclipse works just fine.  Here is a link to a NetBeans tutorial I used: https://netbeans.org/kb/docs/websvc/rest.html.

I generated RESTful services from the DEPARTMENTS, EMPLOYEES and JOBS tables in the HR schema and deployed to Glassfish.  The JOBS service worked fine, and the count services giving the number of rows in the tables worked for all three tables.  But the other services didn't work for DEPARTMENTS and EMPLOYEES.  It took me a while to figure it out, because it gave a 500 error without putting error messages in the Glassfish logs.  I finally found some information on StackExchange that told me that I needed a class implementing ExceptionMapper<Exception>.  Here is my class:

package com.smdi.johnf.hr.service;

import javax.ws.rs.core.Response;
import javax.ws.rs.ext.Provider;

/**
 *
 * @author JohnF
 */
@Provider
public class ExceptionMapper implements javax.ws.rs.ext.ExceptionMapper<Exception> {
    @Override
    public Response toResponse(Exception exception) {
        // Print the full stack trace to the server log so the real cause
        // of the 500 is visible, then return the status to the client.
        exception.printStackTrace();
        return Response.status(500).build();
    }
}

Ahhh – with this class to tell it to print a stack trace upon an error, I could finally see what was wrong.  The problem is that there are circular references in DEPARTMENTS and EMPLOYEES.  DEPARTMENTS, for instance, has a column named MANAGER_ID, which references the EMPLOYEE_ID of the EMPLOYEES row for the department's manager.  That row in turn has a DEPARTMENT_ID for the department in which the employee works, and its own MANAGER_ID for the employee's manager.  When generating JSON or XML for the service, the API doesn't have a way to stop.  One answer for DEPARTMENTS is to add the @XmlTransient annotation to the List of Employees in the department – this tells the API not to continue diving deeper into the hierarchy of related entities.

Hope this exploration of creating REST services for your database is useful.