Dumped on 2009-06-15
Table _prod_replica_set.sl_archive_counter

F-Key | Name | Type | Description |
---|---|---|---|
 | ac_num | bigint | |
 | ac_timestamp | timestamp without time zone | |
Table _prod_replica_set.sl_config_lock
This table exists solely to prevent overlapping execution of configuration change procedures and the resulting possible deadlocks.
F-Key | Name | Type | Description |
---|---|---|---|
 | dummy | integer | |
Permissions: PUBLIC, postgres

Table _prod_replica_set.sl_confirm
Holds confirmation of replication events. After a period of time, Slony removes old confirmed events from both this table and the sl_event table.
F-Key | Name | Type | Description |
---|---|---|---|
 | con_origin | integer | The ID # (from sl_node.no_id) of the source node for this event |
 | con_received | integer | |
 | con_seqno | bigint | The ID # for the event |
 | con_timestamp | timestamp without time zone | DEFAULT (timeofday())::timestamp without time zone. When this event was confirmed |
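For illustration (not part of the dump), the most recent confirmation for each origin/receiver pair can be read straight from this table:

select con_origin, con_received, max(con_seqno) as last_confirmed_seqno
  from _prod_replica_set.sl_confirm
 group by con_origin, con_received
 order by con_origin, con_received;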
Permissions: PUBLIC, postgres

Table _prod_replica_set.sl_event
Holds information about replication events. After a period of time, Slony removes old confirmed events from both this table and the sl_confirm table.
F-Key | Name | Type | Description |
---|---|---|---|
 | ev_origin | integer | PRIMARY KEY. The ID # (from sl_node.no_id) of the source node for this event |
 | ev_seqno | bigint | PRIMARY KEY. The ID # for the event |
 | ev_timestamp | timestamp without time zone | When this event record was created |
 | ev_minxid | _prod_replica_set.xxid | Earliest XID on provider node for this event |
 | ev_maxxid | _prod_replica_set.xxid | Latest XID on provider node for this event |
 | ev_xip | text | List of XIDs, in order, that are part of this event |
 | ev_type | text | The type of event this record is for. One of: SYNC (synchronise), STORE_NODE, ENABLE_NODE, DROP_NODE, STORE_PATH, DROP_PATH, STORE_LISTEN, DROP_LISTEN, STORE_SET, DROP_SET, MERGE_SET, SET_ADD_TABLE, SET_ADD_SEQUENCE, STORE_TRIGGER, DROP_TRIGGER, MOVE_SET, ACCEPT_SET, SET_DROP_TABLE, SET_DROP_SEQUENCE, SET_MOVE_TABLE, SET_MOVE_SEQUENCE, FAILOVER_SET, SUBSCRIBE_SET, ENABLE_SUBSCRIPTION, UNSUBSCRIBE_SET, DDL_SCRIPT, ADJUST_SEQ, RESET_CONFIG |
 | ev_data1 | text | Data field containing an argument needed to process the event |
 | ev_data2 | text | Data field containing an argument needed to process the event |
 | ev_data3 | text | Data field containing an argument needed to process the event |
 | ev_data4 | text | Data field containing an argument needed to process the event |
 | ev_data5 | text | Data field containing an argument needed to process the event |
 | ev_data6 | text | Data field containing an argument needed to process the event |
 | ev_data7 | text | Data field containing an argument needed to process the event |
 | ev_data8 | text | Data field containing an argument needed to process the event |
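For illustration, the most recent events recorded for each origin can be inspected with a query along these lines:

select ev_origin, ev_seqno, ev_type, ev_timestamp
  from _prod_replica_set.sl_event
 order by ev_origin, ev_seqno desc
 limit 20;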
Permissions: PUBLIC, postgres

Table _prod_replica_set.sl_listen
Indicates how nodes listen to events from other nodes in the Slony-I network.
F-Key | Name | Type | Description |
---|---|---|---|
_prod_replica_set.sl_node.no_id | li_origin | integer | PRIMARY KEY. The ID # (from sl_node.no_id) of the node this listener is operating on |
_prod_replica_set.sl_path.pa_server#1 | li_provider | integer | PRIMARY KEY. The ID # (from sl_node.no_id) of the source node for this listening event |
_prod_replica_set.sl_path.pa_client#1 | li_receiver | integer | PRIMARY KEY. The ID # (from sl_node.no_id) of the target node for this listening event |
Permissions: PUBLIC, postgres

Table _prod_replica_set.sl_log_1
Stores each change to be propagated to subscriber nodes
F-Key | Name | Type | Description |
---|---|---|---|
 | log_origin | integer | Origin node from which the change came |
 | log_xid | _prod_replica_set.xxid | Transaction ID on the origin node |
 | log_tableid | integer | The table ID (from sl_table.tab_id) that this log entry is to affect |
 | log_actionseq | bigint | |
 | log_cmdtype | character(1) | Replication action to take: I = INSERT, U = UPDATE, D = DELETE |
 | log_cmddata | text | The data needed to perform the log action |
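For illustration, a rough picture of the backlog of unreplicated changes per table can be obtained with a query such as:

select log_origin, log_tableid, log_cmdtype, count(*) as pending_rows
  from _prod_replica_set.sl_log_1
 group by log_origin, log_tableid, log_cmdtype
 order by pending_rows desc;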
Permissions: PUBLIC, postgres

Table _prod_replica_set.sl_log_2
Stores each change to be propagated to subscriber nodes
F-Key | Name | Type | Description |
---|---|---|---|
 | log_origin | integer | Origin node from which the change came |
 | log_xid | _prod_replica_set.xxid | Transaction ID on the origin node |
 | log_tableid | integer | The table ID (from sl_table.tab_id) that this log entry is to affect |
 | log_actionseq | bigint | |
 | log_cmdtype | character(1) | Replication action to take: I = INSERT, U = UPDATE, D = DELETE |
 | log_cmddata | text | The data needed to perform the log action |
Permissions: PUBLIC, postgres

Table _prod_replica_set.sl_node
Holds the list of nodes associated with this namespace.
F-Key | Name | Type | Description |
---|---|---|---|
 | no_id | integer | PRIMARY KEY. The unique ID number for the node |
 | no_active | boolean | Is the node active in replication yet? |
 | no_comment | text | A human-oriented description of the node |
 | no_spool | boolean | Is the node being used for log shipping? |
Tables referencing this one via Foreign Key Constraints: _prod_replica_set.sl_listen, _prod_replica_set.sl_path, _prod_replica_set.sl_set, _prod_replica_set.sl_setsync
Permissions: PUBLIC, postgres

Table _prod_replica_set.sl_nodelock
Used to prevent multiple slon instances and to identify the backends to kill in terminateNodeConnections().
F-Key | Name | Type | Description |
---|---|---|---|
 | nl_nodeid | integer | PRIMARY KEY. Client's node ID |
 | nl_conncnt | serial | PRIMARY KEY. Client's connection number |
 | nl_backendpid | integer | PID of the database backend owning this lock |
Permissions: PUBLIC, postgres

Table _prod_replica_set.sl_path
Holds connection information for the paths between nodes, and the synchronisation delay
F-Key | Name | Type | Description |
---|---|---|---|
_prod_replica_set.sl_node.no_id | pa_server | integer | PRIMARY KEY. The node ID # (from sl_node.no_id) of the data source |
_prod_replica_set.sl_node.no_id | pa_client | integer | PRIMARY KEY. The node ID # (from sl_node.no_id) of the data target |
 | pa_conninfo | text | NOT NULL. The PostgreSQL connection string used to connect to the source node |
 | pa_connretry | integer | The synchronisation delay, in seconds |
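For illustration, pa_conninfo holds an ordinary libpq connection string (the host, database, and user below are hypothetical), and the configured paths can be listed directly:

-- example pa_conninfo value: 'host=node2.example.com port=5432 dbname=prod user=slony'
select pa_server, pa_client, pa_conninfo, pa_connretry
  from _prod_replica_set.sl_path
 order by pa_server, pa_client;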
Tables referencing this one via Foreign Key Constraints: _prod_replica_set.sl_listen, _prod_replica_set.sl_subscribe
Permissions: PUBLIC, postgres

Table _prod_replica_set.sl_registry
F-Key | Name | Type | Description |
---|---|---|---|
 | reg_key | text | PRIMARY KEY |
 | reg_int4 | integer | |
 | reg_text | text | |
 | reg_timestamp | timestamp without time zone | |
View _prod_replica_set.sl_seqlastvalue
F-Key | Name | Type | Description |
---|---|---|---|
 | seq_id | integer | |
 | seq_set | integer | |
 | seq_reloid | oid | |
 | seq_origin | integer | |
 | seq_last_value | bigint | |
SELECT sq.seq_id,
       sq.seq_set,
       sq.seq_reloid,
       s.set_origin AS seq_origin,
       _prod_replica_set.sequencelastvalue(
           (quote_ident((pgn.nspname)::text) || '.'::text) || quote_ident((pgc.relname)::text)
       ) AS seq_last_value
  FROM _prod_replica_set.sl_sequence sq,
       _prod_replica_set.sl_set s,
       pg_class pgc,
       pg_namespace pgn
 WHERE s.set_id = sq.seq_set
   AND pgc.oid = sq.seq_reloid
   AND pgn.oid = pgc.relnamespace;
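For illustration, the view can be queried directly to list the current value of every replicated sequence:

select seq_id, seq_origin, seq_last_value
  from _prod_replica_set.sl_seqlastvalue
 order by seq_id;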
Permissions: PUBLIC, postgres

Table _prod_replica_set.sl_seqlog
Log of Sequence updates
F-Key | Name | Type | Description |
---|---|---|---|
 | seql_seqid | integer | Sequence ID |
 | seql_origin | integer | Publisher node at which the sequence originates |
 | seql_ev_seqno | bigint | Slony-I event with which this sequence update is associated |
 | seql_last_value | bigint | Last value published for this sequence |
Permissions: PUBLIC, postgres

Table _prod_replica_set.sl_sequence
Similar to sl_table, each entry identifies a sequence being replicated.
F-Key | Name | Type | Description |
---|---|---|---|
 | seq_id | integer | PRIMARY KEY. An internally-used ID for Slony-I to use in its sequencing of updates |
 | seq_reloid | oid | UNIQUE NOT NULL. The OID of the sequence object |
 | seq_relname | name | NOT NULL. The name of the sequence in pg_catalog.pg_class.relname, used to recover from a dump/restore cycle |
 | seq_nspname | name | NOT NULL. The name of the schema in pg_catalog.pg_namespace.nspname, used to recover from a dump/restore cycle |
_prod_replica_set.sl_set.set_id | seq_set | integer | Indicates which replication set the object is in |
 | seq_comment | text | A human-oriented comment |
Permissions: PUBLIC, postgres

Table _prod_replica_set.sl_set
Holds definitions of replication sets.
F-Key | Name | Type | Description |
---|---|---|---|
 | set_id | integer | PRIMARY KEY. A unique ID number for the set. |
_prod_replica_set.sl_node.no_id | set_origin | integer | The ID number of the source node for the replication set. |
 | set_locked | _prod_replica_set.xxid | Indicates whether or not the set is locked. |
 | set_comment | text | A human-oriented description of the set. |
Tables referencing this one via Foreign Key Constraints: _prod_replica_set.sl_sequence, _prod_replica_set.sl_setsync, _prod_replica_set.sl_subscribe, _prod_replica_set.sl_table
Permissions: PUBLIC, postgres

Table _prod_replica_set.sl_setsync
SYNC information
F-Key | Name | Type | Description |
---|---|---|---|
_prod_replica_set.sl_set.set_id | ssy_setid | integer | PRIMARY KEY. ID number of the replication set |
_prod_replica_set.sl_node.no_id | ssy_origin | integer | ID number of the node |
 | ssy_seqno | bigint | Slony-I sequence number |
 | ssy_minxid | _prod_replica_set.xxid | Earliest XID in provider system affected by SYNC |
 | ssy_maxxid | _prod_replica_set.xxid | Latest XID in provider system affected by SYNC |
 | ssy_xip | text | Contains the list of XIDs in progress at SYNC time |
 | ssy_action_list | text | Action list used during the subscription process. At the time a subscriber copies data over from the origin, it sees all tables in a state somewhere between two SYNC events, so this list must contain all XIDs that were visible at that time and whose operations are therefore already included in the initial data copy. Those actions can then be filtered out of the first SYNC done after subscribing. |
Permissions: PUBLIC, postgres

View _prod_replica_set.sl_status
View showing how far behind remote nodes are.
F-Key | Name | Type | Description |
---|---|---|---|
 | st_origin | integer | |
 | st_received | integer | |
 | st_last_event | bigint | |
 | st_last_event_ts | timestamp without time zone | |
 | st_last_received | bigint | |
 | st_last_received_ts | timestamp without time zone | |
 | st_last_received_event_ts | timestamp without time zone | |
 | st_lag_num_events | bigint | |
 | st_lag_time | interval | |
SELECT e.ev_origin AS st_origin,
       c.con_received AS st_received,
       e.ev_seqno AS st_last_event,
       e.ev_timestamp AS st_last_event_ts,
       c.con_seqno AS st_last_received,
       c.con_timestamp AS st_last_received_ts,
       ce.ev_timestamp AS st_last_received_event_ts,
       (e.ev_seqno - c.con_seqno) AS st_lag_num_events,
       (now() - (ce.ev_timestamp)::timestamp with time zone) AS st_lag_time
  FROM _prod_replica_set.sl_event e,
       _prod_replica_set.sl_confirm c,
       _prod_replica_set.sl_event ce
 WHERE e.ev_origin = c.con_origin
   AND ce.ev_origin = e.ev_origin
   AND ce.ev_seqno = c.con_seqno
   AND (e.ev_origin, e.ev_seqno) IN
       (SELECT sl_event.ev_origin, max(sl_event.ev_seqno) AS max
          FROM _prod_replica_set.sl_event
         WHERE sl_event.ev_origin = _prod_replica_set.getlocalnodeid('_prod_replica_set'::name)
         GROUP BY sl_event.ev_origin)
   AND (c.con_origin, c.con_received, c.con_seqno) IN
       (SELECT sl_confirm.con_origin, sl_confirm.con_received, max(sl_confirm.con_seqno) AS max
          FROM _prod_replica_set.sl_confirm
         WHERE sl_confirm.con_origin = _prod_replica_set.getlocalnodeid('_prod_replica_set'::name)
         GROUP BY sl_confirm.con_origin, sl_confirm.con_received);
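For illustration, replication lag towards each receiving node can be monitored on the origin with a query such as:

select st_received, st_lag_num_events, st_lag_time
  from _prod_replica_set.sl_status
 order by st_lag_time desc;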
Permissions: PUBLIC, postgres

Table _prod_replica_set.sl_subscribe
Holds a list of subscriptions on sets
F-Key | Name | Type | Description |
---|---|---|---|
_prod_replica_set.sl_set.set_id | sub_set | integer | PRIMARY KEY. ID # (from sl_set) of the set being subscribed to |
_prod_replica_set.sl_path.pa_server#1 | sub_provider | integer | ID # (from sl_node) of the node providing data |
_prod_replica_set.sl_path.pa_client#1 | sub_receiver | integer | PRIMARY KEY. ID # (from sl_node) of the node receiving data from the provider |
 | sub_forward | boolean | Does this provider keep data in sl_log_1/sl_log_2 to allow it to be a provider for other nodes? |
 | sub_active | boolean | Has this subscription been activated? This is not set on the subscriber until AFTER the subscriber has received COPY data from the provider |
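For illustration, the current subscription topology can be listed with:

select sub_set, sub_provider, sub_receiver, sub_forward, sub_active
  from _prod_replica_set.sl_subscribe
 order by sub_set, sub_receiver;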
Permissions: PUBLIC, postgres

Table _prod_replica_set.sl_table
Holds information about the tables being replicated.
F-Key | Name | Type | Description |
---|---|---|---|
 | tab_id | integer | PRIMARY KEY. Unique key for Slony-I to use to identify the table |
 | tab_reloid | oid | UNIQUE NOT NULL. The OID of the table in pg_catalog.pg_class.oid |
 | tab_relname | name | NOT NULL. The name of the table in pg_catalog.pg_class.relname, used to recover from a dump/restore cycle |
 | tab_nspname | name | NOT NULL. The name of the schema in pg_catalog.pg_namespace.nspname, used to recover from a dump/restore cycle |
_prod_replica_set.sl_set.set_id | tab_set | integer | ID of the replication set the table is in |
 | tab_idxname | name | NOT NULL. The name of the primary index of the table |
 | tab_altered | boolean | NOT NULL. Has the table been modified for replication? |
 | tab_comment | text | Human-oriented description of the table |
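For illustration, the tables under replication and the set each one belongs to can be listed with:

select tab_id, tab_nspname || '.' || tab_relname as fqname, tab_set, tab_altered
  from _prod_replica_set.sl_table
 order by tab_set, tab_id;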
Tables referencing this one via Foreign Key Constraints: _prod_replica_set.sl_trigger
Permissions: PUBLIC, postgres

Table _prod_replica_set.sl_trigger
Holds information about triggers on tables managed using Slony-I
F-Key | Name | Type | Description |
---|---|---|---|
_prod_replica_set.sl_table.tab_id | trig_tabid | integer | PRIMARY KEY. Slony-I ID number of the table the trigger is on |
 | trig_tgname | name | PRIMARY KEY. Indicates the name of a trigger |
Permissions: PUBLIC, postgres

Functions - Schema _prod_replica_set
Verify that a table is empty, and add it to replication. tab_idxname is optional - if NULL, then we use the primary key.
declare p_set_id alias for $1; p_tab_id alias for $2; p_nspname alias for $3; p_tabname alias for $4; p_idxname alias for $5; p_comment alias for $6; prec record; v_origin int4; v_isorigin boolean; v_fqname text; v_query text; v_rows integer; v_idxname text; begin -- Need to validate that the set exists; the set will tell us if this is the origin select set_origin into v_origin from "_prod_replica_set".sl_set where set_id = p_set_id; if not found then raise exception 'add_empty_table_to_replication: set % not found!', p_set_id; end if; -- Need to be aware of whether or not this node is origin for the set v_isorigin := ( v_origin = "_prod_replica_set".getLocalNodeId('_prod_replica_set') ); v_fqname := '"' || p_nspname || '"."' || p_tabname || '"'; -- Take out a lock on the table v_query := 'lock ' || v_fqname || ';'; execute v_query; if v_isorigin then -- On the origin, verify that the table is empty, failing if it has any tuples v_query := 'select 1 as tuple from ' || v_fqname || ' limit 1;'; execute v_query into prec; GET DIAGNOSTICS v_rows = ROW_COUNT; if v_rows = 0 then raise notice 'add_empty_table_to_replication: table % empty on origin - OK', v_fqname; else raise exception 'add_empty_table_to_replication: table % contained tuples on origin node %', v_fqname, v_origin; end if; else -- On other nodes, TRUNCATE the table v_query := 'truncate ' || v_fqname || ';'; execute v_query; end if; -- If p_idxname is NULL, then look up the PK index, and RAISE EXCEPTION if one does not exist if p_idxname is NULL then select c2.relname into prec from pg_catalog.pg_index i, pg_catalog.pg_class c1, pg_catalog.pg_class c2, pg_catalog.pg_namespace n where i.indrelid = c1.oid and i.indexrelid = c2.oid and c1.relname = p_tabname and i.indisprimary and n.nspname = p_nspname and n.oid = c1.relnamespace; if not found then raise exception 'add_empty_table_to_replication: table % has no primary key and no candidate specified!', v_fqname; else v_idxname := prec.relname; end if; else v_idxname := p_idxname; end if; perform "_prod_replica_set".setAddTable_int(p_set_id, p_tab_id, v_fqname, v_idxname, p_comment); return "_prod_replica_set".alterTableRestore(p_tab_id); end
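A hypothetical usage sketch (the set, table ID, schema, and table names below are made up): the arguments are (set_id, tab_id, nspname, tabname, idxname, comment), and passing NULL for idxname falls back to the primary key, as described above:

select _prod_replica_set.add_empty_table_to_replication(
           1,           -- set_id
           1001,        -- tab_id
           'public',    -- schema name
           'invoices',  -- table name (must be empty on the origin)
           NULL,        -- index name; NULL means use the primary key
           'invoices table');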
Add a column of a given type to a table if it is missing
DECLARE p_namespace alias for $1; p_table alias for $2; p_field alias for $3; p_type alias for $4; v_row record; v_query text; BEGIN select 1 into v_row from pg_namespace n, pg_class c, pg_attribute a where "_prod_replica_set".slon_quote_brute(n.nspname) = p_namespace and c.relnamespace = n.oid and "_prod_replica_set".slon_quote_brute(c.relname) = p_table and a.attrelid = c.oid and "_prod_replica_set".slon_quote_brute(a.attname) = p_field; if not found then raise notice 'Upgrade table %.% - add field %', p_namespace, p_table, p_field; v_query := 'alter table ' || p_namespace || '.' || p_table || ' add column '; v_query := v_query || p_field || ' ' || p_type || ';'; execute v_query; return 't'; else return 'f'; end if; END;
Add partial indexes, if possible, to the unused sl_log_? table for all origin nodes, and drop any that are no longer needed. This function presently gets run any time set origins are manipulated (FAILOVER, STORE SET, MOVE SET, DROP SET), as well as each time the system switches between sl_log_1 and sl_log_2.
DECLARE v_current_status int4; v_log int4; v_dummy record; v_dummy2 record; idef text; v_count int4; v_iname text; BEGIN v_count := 0; select last_value into v_current_status from "_prod_replica_set".sl_log_status; -- If status is 2 or 3 --> in process of cleanup --> unsafe to create indices if v_current_status in (2, 3) then return 0; end if; if v_current_status = 0 then -- Which log should get indices? v_log := 2; else v_log := 1; end if; -- PartInd_test_db_sl_log_2-node-1 -- Add missing indices... for v_dummy in select distinct set_origin from "_prod_replica_set".sl_set loop v_iname := 'PartInd_prod_replica_set_sl_log_' || v_log || '-node-' || v_dummy.set_origin; -- raise notice 'Consider adding partial index % on sl_log_%', v_iname, v_log; -- raise notice 'schema: [_prod_replica_set] tablename:[sl_log_%]', v_log; select * into v_dummy2 from pg_catalog.pg_indexes where tablename = 'sl_log_' || v_log and indexname = v_iname; if not found then -- raise notice 'index was not found - add it!'; idef := 'create index "PartInd_prod_replica_set_sl_log_' || v_log || '-node-' || v_dummy.set_origin || '" on "_prod_replica_set".sl_log_' || v_log || ' USING btree(log_xid "_prod_replica_set".xxid_ops) where (log_origin = ' || v_dummy.set_origin || ');'; execute idef; v_count := v_count + 1; else -- raise notice 'Index % already present - skipping', v_iname; end if; end loop; -- Remove unneeded indices... for v_dummy in select indexname from pg_catalog.pg_indexes i where i.tablename = 'sl_log_' || v_log and i.indexname like ('PartInd_prod_replica_set_sl_log_' || v_log || '-node-%') and not exists (select 1 from "_prod_replica_set".sl_set where i.indexname = 'PartInd_prod_replica_set_sl_log_' || v_log || '-node-' || set_origin) loop -- raise notice 'Dropping obsolete index %d', v_dummy.indexname; idef := 'drop index "_prod_replica_set"."' || v_dummy.indexname || '";'; execute idef; v_count := v_count - 1; end loop; return v_count; END
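For illustration, the function takes no arguments; it returns the number of partial indexes added minus the number dropped, and can also be run by hand:

select _prod_replica_set.addpartiallogindices();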
alterTableForReplication(tab_id) Sets up a table for replication. On the origin, this involves adding the "logTrigger()" trigger to the table. On a subscriber node, this involves disabling triggers and rules, and adding in the trigger that denies write access to replicated tables.
declare p_tab_id alias for $1; v_no_id int4; v_tab_row record; v_tab_fqname text; v_tab_attkind text; v_n int4; v_trec record; v_tgbad boolean; begin -- ---- -- Grab the central configuration lock -- ---- lock table "_prod_replica_set".sl_config_lock; -- ---- -- Get our local node ID -- ---- v_no_id := "_prod_replica_set".getLocalNodeId('_prod_replica_set'); -- ---- -- Get the sl_table row and the current origin of the table. -- Verify that the table currently is NOT in altered state. -- ---- select T.tab_reloid, T.tab_set, T.tab_idxname, T.tab_altered, S.set_origin, PGX.indexrelid, "_prod_replica_set".slon_quote_brute(PGN.nspname) || '.' || "_prod_replica_set".slon_quote_brute(PGC.relname) as tab_fqname into v_tab_row from "_prod_replica_set".sl_table T, "_prod_replica_set".sl_set S, "pg_catalog".pg_class PGC, "pg_catalog".pg_namespace PGN, "pg_catalog".pg_index PGX, "pg_catalog".pg_class PGXC where T.tab_id = p_tab_id and T.tab_set = S.set_id and T.tab_reloid = PGC.oid and PGC.relnamespace = PGN.oid and PGX.indrelid = T.tab_reloid and PGX.indexrelid = PGXC.oid and PGXC.relname = T.tab_idxname for update; if not found then raise exception 'Slony-I: alterTableForReplication(): Table with id % not found', p_tab_id; end if; v_tab_fqname = v_tab_row.tab_fqname; if v_tab_row.tab_altered then raise exception 'Slony-I: alterTableForReplication(): Table % is already in altered state', v_tab_fqname; end if; v_tab_attkind := "_prod_replica_set".determineAttKindUnique(v_tab_row.tab_fqname, v_tab_row.tab_idxname); execute 'lock table ' || v_tab_fqname || ' in access exclusive mode'; -- ---- -- Procedures are different on origin and subscriber -- ---- if v_no_id = v_tab_row.set_origin then -- ---- -- On the Origin we add the log trigger to the table and done -- ---- execute 'create trigger "_prod_replica_set_logtrigger_' || p_tab_id || '" after insert or update or delete on ' || v_tab_fqname || ' for each row execute procedure "_prod_replica_set".logTrigger (''_prod_replica_set'', ''' || p_tab_id || ''', ''' || v_tab_attkind || ''');'; else -- ---- -- On the subscriber the thing is a bit more difficult. We want -- to disable all user- and foreign key triggers and rules. -- ---- -- ---- -- Check to see if there are any trigger conflicts... 
-- ---- v_tgbad := 'false'; for v_trec in select pc.relname, tg1.tgname from "pg_catalog".pg_trigger tg1, "pg_catalog".pg_trigger tg2, "pg_catalog".pg_class pc, "pg_catalog".pg_index pi, "_prod_replica_set".sl_table tab where tg1.tgname = tg2.tgname and -- Trigger names match tg1.tgrelid = tab.tab_reloid and -- trigger 1 is on the table pi.indexrelid = tg2.tgrelid and -- trigger 2 is on the index pi.indrelid = tab.tab_reloid and -- indexes table is this table pc.oid = tab.tab_reloid loop raise notice 'Slony-I: alterTableForReplication(): multiple instances of trigger % on table %', v_trec.tgname, v_trec.relname; v_tgbad := 'true'; end loop; if v_tgbad then raise exception 'Slony-I: Unable to disable triggers'; end if; -- ---- -- Disable all existing triggers -- ---- update "pg_catalog".pg_trigger set tgrelid = v_tab_row.indexrelid where tgrelid = v_tab_row.tab_reloid and not exists ( select true from "_prod_replica_set".sl_table TAB, "_prod_replica_set".sl_trigger TRIG where TAB.tab_reloid = tgrelid and TAB.tab_id = TRIG.trig_tabid and TRIG.trig_tgname = tgname ); get diagnostics v_n = row_count; if v_n > 0 then update "pg_catalog".pg_class set reltriggers = reltriggers - v_n where oid = v_tab_row.tab_reloid; end if; -- ---- -- Disable all existing rules -- ---- update "pg_catalog".pg_rewrite set ev_class = v_tab_row.indexrelid where ev_class = v_tab_row.tab_reloid; get diagnostics v_n = row_count; if v_n > 0 then update "pg_catalog".pg_class set relhasrules = false where oid = v_tab_row.tab_reloid; end if; -- ---- -- Add the trigger that denies write access to replicated tables -- ---- execute 'create trigger "_prod_replica_set_denyaccess_' || p_tab_id || '" before insert or update or delete on ' || v_tab_fqname || ' for each row execute procedure "_prod_replica_set".denyAccess (''_prod_replica_set'');'; end if; -- ---- -- Mark the table altered in our configuration -- ---- update "_prod_replica_set".sl_table set tab_altered = true where tab_id = p_tab_id; return p_tab_id; end;
alterTableRestore (tab_id) Restores table tab_id from being replicated. On the origin, this simply involves dropping the "logtrigger" trigger. On subscriber nodes, this involves dropping the "denyaccess" trigger, and restoring user triggers and rules.
declare p_tab_id alias for $1; v_no_id int4; v_tab_row record; v_tab_fqname text; v_n int4; begin -- ---- -- Grab the central configuration lock -- ---- lock table "_prod_replica_set".sl_config_lock; -- ---- -- Get our local node ID -- ---- v_no_id := "_prod_replica_set".getLocalNodeId('_prod_replica_set'); -- ---- -- Get the sl_table row and the current tables origin. Check -- that the table currently IS in altered state. -- ---- select T.tab_reloid, T.tab_set, T.tab_altered, S.set_origin, PGX.indexrelid, "_prod_replica_set".slon_quote_brute(PGN.nspname) || '.' || "_prod_replica_set".slon_quote_brute(PGC.relname) as tab_fqname into v_tab_row from "_prod_replica_set".sl_table T, "_prod_replica_set".sl_set S, "pg_catalog".pg_class PGC, "pg_catalog".pg_namespace PGN, "pg_catalog".pg_index PGX, "pg_catalog".pg_class PGXC where T.tab_id = p_tab_id and T.tab_set = S.set_id and T.tab_reloid = PGC.oid and PGC.relnamespace = PGN.oid and PGX.indrelid = T.tab_reloid and PGX.indexrelid = PGXC.oid and PGXC.relname = T.tab_idxname for update; if not found then raise exception 'Slony-I: alterTableRestore(): Table with id % not found', p_tab_id; end if; v_tab_fqname = v_tab_row.tab_fqname; if not v_tab_row.tab_altered then raise exception 'Slony-I: alterTableRestore(): Table % is not in altered state', v_tab_fqname; end if; execute 'lock table ' || v_tab_fqname || ' in access exclusive mode'; -- ---- -- Procedures are different on origin and subscriber -- ---- if v_no_id = v_tab_row.set_origin then -- ---- -- On the Origin we just drop the trigger we originally added -- ---- execute 'drop trigger "_prod_replica_set_logtrigger_' || p_tab_id || '" on ' || v_tab_fqname; else -- ---- -- On the subscriber drop the denyAccess trigger -- ---- execute 'drop trigger "_prod_replica_set_denyaccess_' || p_tab_id || '" on ' || v_tab_fqname; -- ---- -- Restore all original triggers -- ---- update "pg_catalog".pg_trigger set tgrelid = v_tab_row.tab_reloid where tgrelid = v_tab_row.indexrelid; get diagnostics v_n = row_count; if v_n > 0 then update "pg_catalog".pg_class set reltriggers = reltriggers + v_n where oid = v_tab_row.tab_reloid; end if; -- ---- -- Restore all original rewrite rules -- ---- update "pg_catalog".pg_rewrite set ev_class = v_tab_row.tab_reloid where ev_class = v_tab_row.indexrelid; get diagnostics v_n = row_count; if v_n > 0 then update "pg_catalog".pg_class set relhasrules = true where oid = v_tab_row.tab_reloid; end if; end if; -- ---- -- Mark the table not altered in our configuration -- ---- update "_prod_replica_set".sl_table set tab_altered = false where tab_id = p_tab_id; return p_tab_id; end;
_Slony_I_btxxidcmp
Inline test function that verifies that a slonik STORE NODE/INIT CLUSTER request is being run against a conformant set of schema/functions.
declare moduleversion text; begin select into moduleversion "_prod_replica_set".getModuleVersion(); if moduleversion <> '1.2.12' then raise exception 'Slonik version: 1.2.12 != Slony-I version in PG build %', moduleversion; end if; return null; end;
Cleans old data out of sl_confirm and sl_event. Removes all but the last sl_confirm row per (origin, receiver) pair, and then removes all events that are confirmed by all nodes in the whole cluster up to the last SYNC.
declare v_max_row record; v_min_row record; v_max_sync int8; begin -- ---- -- First remove all but the oldest confirm row per origin,receiver pair -- ---- delete from "_prod_replica_set".sl_confirm where con_origin not in (select no_id from "_prod_replica_set".sl_node); delete from "_prod_replica_set".sl_confirm where con_received not in (select no_id from "_prod_replica_set".sl_node); -- ---- -- Next remove all but the oldest confirm row per origin,receiver pair. -- Ignore confirmations that are younger than 10 minutes. We currently -- have an not confirmed suspicion that a possibly lost transaction due -- to a server crash might have been visible to another session, and -- that this led to log data that is needed again got removed. -- ---- for v_max_row in select con_origin, con_received, max(con_seqno) as con_seqno from "_prod_replica_set".sl_confirm where con_timestamp < (CURRENT_TIMESTAMP - '10 min'::interval) group by con_origin, con_received loop delete from "_prod_replica_set".sl_confirm where con_origin = v_max_row.con_origin and con_received = v_max_row.con_received and con_seqno < v_max_row.con_seqno; end loop; -- ---- -- Then remove all events that are confirmed by all nodes in the -- whole cluster up to the last SYNC -- ---- for v_min_row in select con_origin, min(con_seqno) as con_seqno from "_prod_replica_set".sl_confirm group by con_origin loop select coalesce(max(ev_seqno), 0) into v_max_sync from "_prod_replica_set".sl_event where ev_origin = v_min_row.con_origin and ev_seqno <= v_min_row.con_seqno and ev_type = 'SYNC'; if v_max_sync > 0 then delete from "_prod_replica_set".sl_event where ev_origin = v_min_row.con_origin and ev_seqno < v_max_sync; end if; end loop; -- ---- -- If cluster has only one node, then remove all events up to -- the last SYNC - Bug #1538 -- http://gborg.postgresql.org/project/slony1/bugs/bugupdate.php?1538 -- ---- select * into v_min_row from "_prod_replica_set".sl_node where no_id <> "_prod_replica_set".getLocalNodeId('_prod_replica_set') limit 1; if not found then select ev_origin, ev_seqno into v_min_row from "_prod_replica_set".sl_event where ev_origin = "_prod_replica_set".getLocalNodeId('_prod_replica_set') order by ev_origin desc, ev_seqno desc limit 1; raise notice 'Slony-I: cleanupEvent(): Single node - deleting events < %', v_min_row.ev_seqno; delete from "_prod_replica_set".sl_event where ev_origin = v_min_row.ev_origin and ev_seqno < v_min_row.ev_seqno; end if; if exists (select * from "pg_catalog".pg_class c, "pg_catalog".pg_namespace n, "pg_catalog".pg_attribute a where c.relname = 'sl_seqlog' and n.oid = c.relnamespace and a.attrelid = c.oid and a.attname = 'oid') then execute 'alter table "_prod_replica_set".sl_seqlog set without oids;'; end if; -- ---- -- Also remove stale entries from the nodelock table. -- ---- perform "_prod_replica_set".cleanupNodelock(); return 0; end;
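This housekeeping is normally run periodically by the slon daemon's cleanup thread; for illustration, it can also be invoked by hand:

select _prod_replica_set.cleanupevent();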
Clean up stale entries when restarting slon
declare v_row record; begin for v_row in select nl_nodeid, nl_conncnt, nl_backendpid from "_prod_replica_set".sl_nodelock for update loop if "_prod_replica_set".killBackend(v_row.nl_backendpid, 'NULL') < 0 then raise notice 'Slony-I: cleanup stale sl_nodelock entry for pid=%', v_row.nl_backendpid; delete from "_prod_replica_set".sl_nodelock where nl_nodeid = v_row.nl_nodeid and nl_conncnt = v_row.nl_conncnt; end if; end loop; return 0; end;
Return a string consisting of what should be appended to a COPY statement to specify fields for the passed-in tab_id. In PG versions > 7.3, this looks like (field1,field2,...fieldn)
declare result text; prefix text; prec record; begin result := ''; prefix := '('; -- Initially, prefix is the opening paren for prec in select "_prod_replica_set".slon_quote_input(a.attname) as column from "_prod_replica_set".sl_table t, pg_catalog.pg_attribute a where t.tab_id = $1 and t.tab_reloid = a.attrelid and a.attnum > 0 and a.attisdropped = false order by attnum loop result := result || prefix || prec.column; prefix := ','; -- Subsequently, prepend columns with commas end loop; result := result || ')'; return result; end;
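For illustration (the table ID is hypothetical), calling the function returns the parenthesised column list to append to a COPY statement:

select _prod_replica_set.copyfields(1001);
-- might return something like: (id,customer_id,amount)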
FUNCTION createEvent (cluster_name, ev_type [, ev_data [...]]) Create an sl_event entry
_Slony_I_createEvent
FUNCTION createEvent (cluster_name, ev_type [, ev_data [...]]) Create an sl_event entry
_Slony_I_createEvent
FUNCTION createEvent (cluster_name, ev_type [, ev_data [...]]) Create an sl_event entry
_Slony_I_createEvent
FUNCTION createEvent (cluster_name, ev_type [, ev_data [...]]) Create an sl_event entry
_Slony_I_createEvent
FUNCTION createEvent (cluster_name, ev_type [, ev_data [...]]) Create an sl_event entry
_Slony_I_createEvent
FUNCTION createEvent (cluster_name, ev_type [, ev_data [...]]) Create an sl_event entry
_Slony_I_createEvent
FUNCTION createEvent (cluster_name, ev_type [, ev_data [...]]) Create an sl_event entry
_Slony_I_createEvent
FUNCTION createEvent (cluster_name, ev_type [, ev_data [...]]) Create an sl_event entry
_Slony_I_createEvent
FUNCTION createEvent (cluster_name, ev_type [, ev_data [...]]) Create an sl_event entry
_Slony_I_createEvent
ddlScript(set_id, script, only_on_node) Generates a SYNC event, runs the script on the origin, and then generates a DDL_SCRIPT event to request it to be run on replicated slaves.
declare p_set_id alias for $1; p_script alias for $2; p_only_on_node alias for $3; v_set_origin int4; begin -- ---- -- Grab the central configuration lock -- ---- lock table "_prod_replica_set".sl_config_lock; -- ---- -- Check that the set exists and originates here -- ---- select set_origin into v_set_origin from "_prod_replica_set".sl_set where set_id = p_set_id for update; if not found then raise exception 'Slony-I: set % not found', p_set_id; end if; if v_set_origin <> "_prod_replica_set".getLocalNodeId('_prod_replica_set') then raise exception 'Slony-I: set % does not originate on local node', p_set_id; end if; -- ---- -- Create a SYNC event, run the script and generate the DDL_SCRIPT event -- ---- perform "_prod_replica_set".createEvent('_prod_replica_set', 'SYNC', NULL); perform "_prod_replica_set".ddlScript_int(p_set_id, p_script, p_only_on_node); perform "_prod_replica_set".updateRelname(p_set_id, p_only_on_node); return "_prod_replica_set".createEvent('_prod_replica_set', 'DDL_SCRIPT', p_set_id, p_script, p_only_on_node); end;
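A hypothetical usage sketch (set ID and DDL statement are made up; in practice this is usually driven through slonik's EXECUTE SCRIPT command): the arguments are (set_id, script, only_on_node), with -1 meaning the script runs on all nodes:

select _prod_replica_set.ddlscript(
           1,                                                    -- set_id
           'alter table public.invoices add column note text;',  -- script
           -1);                                                  -- only_on_node (-1 = all nodes)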
ddlScript_complete(set_id, script, only_on_node) After script has run on origin, this fixes up relnames, restores triggers, and generates a DDL_SCRIPT event to request it to be run on replicated slaves.
declare p_set_id alias for $1; p_script alias for $2; p_only_on_node alias for $3; v_set_origin int4; begin perform "_prod_replica_set".updateRelname(p_set_id, p_only_on_node); if p_only_on_node = -1 then perform "_prod_replica_set".alterTableForReplication(tab_id) from "_prod_replica_set".sl_table where tab_set in (select set_id from "_prod_replica_set".sl_set where set_origin = "_prod_replica_set".getLocalNodeId('_prod_replica_set')); return "_prod_replica_set".createEvent('_prod_replica_set', 'DDL_SCRIPT', p_set_id::text, p_script::text, p_only_on_node::text); else perform "_prod_replica_set".alterTableForReplication(tab_id) from "_prod_replica_set".sl_table; end if; return NULL; end;
ddlScript_complete_int(set_id, script, only_on_node) Complete processing the DDL_SCRIPT event. This puts tables back into replicated mode.
declare p_set_id alias for $1; p_only_on_node alias for $2; v_row record; begin -- ---- -- Put all tables back into replicated mode -- ---- for v_row in select * from "_prod_replica_set".sl_table loop perform "_prod_replica_set".alterTableForReplication(v_row.tab_id); end loop; return p_set_id; end;
ddlScript_int(set_id, script, only_on_node) Processes the DDL_SCRIPT event. On slave nodes, this restores original triggers/rules, runs the script, and then puts tables back into replicated mode.
declare p_set_id alias for $1; p_script alias for $2; p_only_on_node alias for $3; v_set_origin int4; v_no_id int4; v_row record; begin -- ---- -- Grab the central configuration lock -- ---- lock table "_prod_replica_set".sl_config_lock; -- ---- -- Check that we either are the set origin or a current -- subscriber of the set. -- ---- v_no_id := "_prod_replica_set".getLocalNodeId('_prod_replica_set'); select set_origin into v_set_origin from "_prod_replica_set".sl_set where set_id = p_set_id for update; if not found then raise exception 'Slony-I: set % not found', p_set_id; end if; if v_set_origin <> v_no_id and not exists (select 1 from "_prod_replica_set".sl_subscribe where sub_set = p_set_id and sub_receiver = v_no_id) then return 0; end if; -- ---- -- If execution on only one node is requested, check that -- we are that node. -- ---- if p_only_on_node > 0 and p_only_on_node <> v_no_id then return 0; end if; -- ---- -- Restore all original triggers and rules of all sets -- ---- for v_row in select * from "_prod_replica_set".sl_table loop perform "_prod_replica_set".alterTableRestore(v_row.tab_id); end loop; -- ---- -- Run the script -- ---- execute p_script; -- ---- -- Put all tables back into replicated mode -- ---- for v_row in select * from "_prod_replica_set".sl_table loop perform "_prod_replica_set".alterTableForReplication(v_row.tab_id); end loop; return p_set_id; end;
Prepare for DDL script execution on origin
declare p_set_id alias for $1; p_only_on_node alias for $2; v_set_origin int4; begin -- ---- -- Grab the central configuration lock -- ---- lock table "_prod_replica_set".sl_config_lock; -- ---- -- Check that the set exists and originates here -- unless only_on_node was specified (then it can be applied to -- that node because that is what the user wanted) -- ---- select set_origin into v_set_origin from "_prod_replica_set".sl_set where set_id = p_set_id for update; if not found then raise exception 'Slony-I: set % not found', p_set_id; end if; if p_only_on_node = -1 then if v_set_origin <> "_prod_replica_set".getLocalNodeId('_prod_replica_set') then raise exception 'Slony-I: set % does not originate on local node', p_set_id; end if; -- ---- -- Create a SYNC event, run the script and generate the DDL_SCRIPT event -- ---- perform "_prod_replica_set".createEvent('_prod_replica_set', 'SYNC', NULL); perform "_prod_replica_set".alterTableRestore(tab_id) from "_prod_replica_set".sl_table where tab_set in (select set_id from "_prod_replica_set".sl_set where set_origin = "_prod_replica_set".getLocalNodeId('_prod_replica_set')); else -- ---- -- If doing "only on one node" - restore ALL tables irrespective of set -- ---- perform "_prod_replica_set".alterTableRestore(tab_id) from "_prod_replica_set".sl_table; end if; return 1; end;
ddlScript_prepare_int (set_id, only_on_node) Do preparatory work for a DDL script, restoring triggers/rules to original state.
declare p_set_id alias for $1; p_only_on_node alias for $2; v_set_origin int4; v_no_id int4; v_row record; begin -- ---- -- Grab the central configuration lock -- ---- lock table "_prod_replica_set".sl_config_lock; -- ---- -- Check that we either are the set origin or a current -- subscriber of the set. -- ---- v_no_id := "_prod_replica_set".getLocalNodeId('_prod_replica_set'); select set_origin into v_set_origin from "_prod_replica_set".sl_set where set_id = p_set_id for update; if not found then raise exception 'Slony-I: set % not found', p_set_id; end if; if v_set_origin <> v_no_id and not exists (select 1 from "_prod_replica_set".sl_subscribe where sub_set = p_set_id and sub_receiver = v_no_id) then return 0; end if; -- ---- -- If execution on only one node is requested, check that -- we are that node. -- ---- if p_only_on_node > 0 and p_only_on_node <> v_no_id then return 0; end if; -- ---- -- Restore all original triggers and rules of all sets -- ---- for v_row in select * from "_prod_replica_set".sl_table loop perform "_prod_replica_set".alterTableRestore(v_row.tab_id); end loop; return p_set_id; end;
Trigger function to prevent modifications to a table on a subscriber
_Slony_I_denyAccess
determineAttKindSerial (tab_fqname) Used when a table that was specified without a primary key is added to replication. Assumes that tableAddKey() has already been called, finishes the creation of the serial column, and returns an attkind accordingly.
declare p_tab_fqname alias for $1; v_tab_fqname_quoted text default ''; v_attkind text default ''; v_attrow record; v_have_serial bool default 'f'; begin v_tab_fqname_quoted := "_prod_replica_set".slon_quote_input(p_tab_fqname); -- -- Loop over the attributes of this relation -- and add a "v" for every user column, and a "k" -- if we find the Slony-I special serial column. -- for v_attrow in select PGA.attnum, PGA.attname from "pg_catalog".pg_class PGC, "pg_catalog".pg_namespace PGN, "pg_catalog".pg_attribute PGA where "_prod_replica_set".slon_quote_brute(PGN.nspname) || '.' || "_prod_replica_set".slon_quote_brute(PGC.relname) = v_tab_fqname_quoted and PGN.oid = PGC.relnamespace and PGA.attrelid = PGC.oid and not PGA.attisdropped and PGA.attnum > 0 order by attnum loop if v_attrow.attname = '_Slony-I_prod_replica_set_rowID' then v_attkind := v_attkind || 'k'; v_have_serial := 't'; else v_attkind := v_attkind || 'v'; end if; end loop; -- -- A table must have at least one attribute, so not finding -- anything means the table does not exist. -- if not found then raise exception 'Slony-I: table % not found', v_tab_fqname_quoted; end if; -- -- If it does not have the special serial column, we -- should not have been called in the first place. -- if not v_have_serial then raise exception 'Slony-I: table % does not have the serial key', v_tab_fqname_quoted; end if; execute 'update ' || v_tab_fqname_quoted || ' set "_Slony-I_prod_replica_set_rowID" =' || ' "pg_catalog".nextval(''"_prod_replica_set".sl_rowid_seq'');'; execute 'alter table only ' || v_tab_fqname_quoted || ' add unique ("_Slony-I_prod_replica_set_rowID");'; execute 'alter table only ' || v_tab_fqname_quoted || ' alter column "_Slony-I_prod_replica_set_rowID" ' || ' set not null;'; -- -- Return the resulting Slony-I attkind -- return v_attkind; end;
determineAttKindUnique (tab_fqname, indexname) Given a tablename, return the Slony-I specific attkind (used for the log trigger) of the table. Use the specified unique index or the primary key (if indexname is NULL).
declare p_tab_fqname alias for $1; v_tab_fqname_quoted text default ''; p_idx_name alias for $2; v_idx_name_quoted text; v_idxrow record; v_attrow record; v_i integer; v_attno int2; v_attkind text default ''; v_attfound bool; begin v_tab_fqname_quoted := "_prod_replica_set".slon_quote_input(p_tab_fqname); v_idx_name_quoted := "_prod_replica_set".slon_quote_brute(p_idx_name); -- -- Ensure that the table exists -- if (select PGC.relname from "pg_catalog".pg_class PGC, "pg_catalog".pg_namespace PGN where "_prod_replica_set".slon_quote_brute(PGN.nspname) || '.' || "_prod_replica_set".slon_quote_brute(PGC.relname) = v_tab_fqname_quoted and PGN.oid = PGC.relnamespace) is null then raise exception 'Slony-I: table % not found', v_tab_fqname_quoted; end if; -- -- Lookup the tables primary key or the specified unique index -- if p_idx_name isnull then raise exception 'Slony-I: index name must be specified'; else select PGXC.relname, PGX.indexrelid, PGX.indkey into v_idxrow from "pg_catalog".pg_class PGC, "pg_catalog".pg_namespace PGN, "pg_catalog".pg_index PGX, "pg_catalog".pg_class PGXC where "_prod_replica_set".slon_quote_brute(PGN.nspname) || '.' || "_prod_replica_set".slon_quote_brute(PGC.relname) = v_tab_fqname_quoted and PGN.oid = PGC.relnamespace and PGX.indrelid = PGC.oid and PGX.indexrelid = PGXC.oid and PGX.indisunique and "_prod_replica_set".slon_quote_brute(PGXC.relname) = v_idx_name_quoted; if not found then raise exception 'Slony-I: table % has no unique index %', v_tab_fqname_quoted, v_idx_name_quoted; end if; end if; -- -- Loop over the tables attributes and check if they are -- index attributes. If so, add a "k" to the return value, -- otherwise add a "v". -- for v_attrow in select PGA.attnum, PGA.attname from "pg_catalog".pg_class PGC, "pg_catalog".pg_namespace PGN, "pg_catalog".pg_attribute PGA where "_prod_replica_set".slon_quote_brute(PGN.nspname) || '.' || "_prod_replica_set".slon_quote_brute(PGC.relname) = v_tab_fqname_quoted and PGN.oid = PGC.relnamespace and PGA.attrelid = PGC.oid and not PGA.attisdropped and PGA.attnum > 0 order by attnum loop v_attfound = 'f'; v_i := 0; loop select indkey[v_i] into v_attno from "pg_catalog".pg_index where indexrelid = v_idxrow.indexrelid; if v_attno isnull or v_attno = 0 then exit; end if; if v_attrow.attnum = v_attno then v_attfound = 't'; exit; end if; v_i := v_i + 1; end loop; if v_attfound then v_attkind := v_attkind || 'k'; else v_attkind := v_attkind || 'v'; end if; end loop; -- -- Return the resulting attkind -- return v_attkind; end;
determineIdxnameSerial (tab_fqname) Given a tablename, construct the index name of the serial column.
declare p_tab_fqname alias for $1; v_tab_fqname_quoted text default ''; v_row record; begin v_tab_fqname_quoted := "_prod_replica_set".slon_quote_input(p_tab_fqname); -- -- Lookup the table name alone -- select PGC.relname into v_row from "pg_catalog".pg_class PGC, "pg_catalog".pg_namespace PGN where "_prod_replica_set".slon_quote_brute(PGN.nspname) || '.' || "_prod_replica_set".slon_quote_brute(PGC.relname) = v_tab_fqname_quoted and PGN.oid = PGC.relnamespace; if not found then raise exception 'Slony-I: table % not found', v_tab_fqname_quoted; end if; -- -- Return the found index name -- return v_row.relname || '__Slony-I_prod_replica_set_rowID_key'; end;
FUNCTION determineIdxnameUnique (tab_fqname, indexname) Given a tablename, tab_fqname, check that the unique index, indexname, exists or return the primary key index name for the table. If there is no unique index, it raises an exception.
declare p_tab_fqname alias for $1; v_tab_fqname_quoted text default ''; p_idx_name alias for $2; v_idxrow record; begin v_tab_fqname_quoted := "_prod_replica_set".slon_quote_input(p_tab_fqname); -- -- Ensure that the table exists -- if (select PGC.relname from "pg_catalog".pg_class PGC, "pg_catalog".pg_namespace PGN where "_prod_replica_set".slon_quote_brute(PGN.nspname) || '.' || "_prod_replica_set".slon_quote_brute(PGC.relname) = v_tab_fqname_quoted and PGN.oid = PGC.relnamespace) is null then raise exception 'Slony-I: determineIdxnameUnique(): table % not found', v_tab_fqname_quoted; end if; -- -- Lookup the tables primary key or the specified unique index -- if p_idx_name isnull then select PGXC.relname into v_idxrow from "pg_catalog".pg_class PGC, "pg_catalog".pg_namespace PGN, "pg_catalog".pg_index PGX, "pg_catalog".pg_class PGXC where "_prod_replica_set".slon_quote_brute(PGN.nspname) || '.' || "_prod_replica_set".slon_quote_brute(PGC.relname) = v_tab_fqname_quoted and PGN.oid = PGC.relnamespace and PGX.indrelid = PGC.oid and PGX.indexrelid = PGXC.oid and PGX.indisprimary; if not found then raise exception 'Slony-I: table % has no primary key', v_tab_fqname_quoted; end if; else select PGXC.relname into v_idxrow from "pg_catalog".pg_class PGC, "pg_catalog".pg_namespace PGN, "pg_catalog".pg_index PGX, "pg_catalog".pg_class PGXC where "_prod_replica_set".slon_quote_brute(PGN.nspname) || '.' || "_prod_replica_set".slon_quote_brute(PGC.relname) = v_tab_fqname_quoted and PGN.oid = PGC.relnamespace and PGX.indrelid = PGC.oid and PGX.indexrelid = PGXC.oid and PGX.indisunique and "_prod_replica_set".slon_quote_brute(PGXC.relname) = "_prod_replica_set".slon_quote_input(p_idx_name); if not found then raise exception 'Slony-I: table % has no unique index %', v_tab_fqname_quoted, p_idx_name; end if; end if; -- -- Return the found index name -- return v_idxrow.relname; end;
Process DISABLE_NODE event for node no_id. NOTE: This is not yet implemented!
declare p_no_id alias for $1; begin -- **** TODO **** raise exception 'Slony-I: disableNode() not implemented'; end;
declare p_no_id alias for $1; begin -- **** TODO **** raise exception 'Slony-I: disableNode_int() not implemented'; end;
dropListen (li_origin, li_provider, li_receiver) Generate the DROP_LISTEN event.
declare p_li_origin alias for $1; p_li_provider alias for $2; p_li_receiver alias for $3; begin perform "_prod_replica_set".dropListen_int(p_li_origin, p_li_provider, p_li_receiver); return "_prod_replica_set".createEvent ('_prod_replica_set', 'DROP_LISTEN', p_li_origin::text, p_li_provider::text, p_li_receiver::text); end;
dropListen_int (li_origin, li_provider, li_receiver) Process the DROP_LISTEN event, deleting the sl_listen entry for the indicated (origin, provider, receiver) combination.
declare p_li_origin alias for $1; p_li_provider alias for $2; p_li_receiver alias for $3; begin -- ---- -- Grab the central configuration lock -- ---- lock table "_prod_replica_set".sl_config_lock; delete from "_prod_replica_set".sl_listen where li_origin = p_li_origin and li_provider = p_li_provider and li_receiver = p_li_receiver; if found then return 1; else return 0; end if; end;
generate DROP_NODE event to drop node node_id from replication
declare p_no_id alias for $1; v_node_row record; begin -- ---- -- Grab the central configuration lock -- ---- lock table "_prod_replica_set".sl_config_lock; -- ---- -- Check that this got called on a different node -- ---- if p_no_id = "_prod_replica_set".getLocalNodeId('_prod_replica_set') then raise exception 'Slony-I: DROP_NODE cannot initiate on the dropped node'; end if; select * into v_node_row from "_prod_replica_set".sl_node where no_id = p_no_id for update; if not found then raise exception 'Slony-I: unknown node ID %', p_no_id; end if; -- ---- -- Make sure we do not break other nodes subscriptions with this -- ---- if exists (select true from "_prod_replica_set".sl_subscribe where sub_provider = p_no_id) then raise exception 'Slony-I: Node % is still configured as a data provider', p_no_id; end if; -- ---- -- Make sure no set originates there any more -- ---- if exists (select true from "_prod_replica_set".sl_set where set_origin = p_no_id) then raise exception 'Slony-I: Node % is still origin of one or more sets', p_no_id; end if; -- ---- -- Call the internal drop functionality and generate the event -- ---- perform "_prod_replica_set".dropNode_int(p_no_id); return "_prod_replica_set".createEvent('_prod_replica_set', 'DROP_NODE', p_no_id::text); end;
internal function to process DROP_NODE event to drop node node_id from replication
declare p_no_id alias for $1; v_tab_row record; begin -- ---- -- Grab the central configuration lock -- ---- lock table "_prod_replica_set".sl_config_lock; -- ---- -- If the dropped node is a remote node, clean the configuration -- from all traces for it. -- ---- if p_no_id <> "_prod_replica_set".getLocalNodeId('_prod_replica_set') then delete from "_prod_replica_set".sl_subscribe where sub_receiver = p_no_id; delete from "_prod_replica_set".sl_listen where li_origin = p_no_id or li_provider = p_no_id or li_receiver = p_no_id; delete from "_prod_replica_set".sl_path where pa_server = p_no_id or pa_client = p_no_id; delete from "_prod_replica_set".sl_confirm where con_origin = p_no_id or con_received = p_no_id; delete from "_prod_replica_set".sl_event where ev_origin = p_no_id; delete from "_prod_replica_set".sl_node where no_id = p_no_id; return p_no_id; end if; -- ---- -- This is us ... deactivate the node for now, the daemon -- will call uninstallNode() in a separate transaction. -- ---- update "_prod_replica_set".sl_node set no_active = false where no_id = p_no_id; -- Rewrite sl_listen table perform "_prod_replica_set".RebuildListenEntries(); return p_no_id; end;
Generate DROP_PATH event to drop path from pa_server to pa_client
declare p_pa_server alias for $1; p_pa_client alias for $2; v_row record; begin -- ---- -- Grab the central configuration lock -- ---- lock table "_prod_replica_set".sl_config_lock; -- ---- -- There should be no existing subscriptions. Auto unsubscribing -- is considered too dangerous. -- ---- for v_row in select sub_set, sub_provider, sub_receiver from "_prod_replica_set".sl_subscribe where sub_provider = p_pa_server and sub_receiver = p_pa_client loop raise exception 'Slony-I: Path cannot be dropped, subscription of set % needs it', v_row.sub_set; end loop; -- ---- -- Drop all sl_listen entries that depend on this path -- ---- for v_row in select li_origin, li_provider, li_receiver from "_prod_replica_set".sl_listen where li_provider = p_pa_server and li_receiver = p_pa_client loop perform "_prod_replica_set".dropListen( v_row.li_origin, v_row.li_provider, v_row.li_receiver); end loop; -- ---- -- Now drop the path and create the event -- ---- perform "_prod_replica_set".dropPath_int(p_pa_server, p_pa_client); -- Rewrite sl_listen table perform "_prod_replica_set".RebuildListenEntries(); return "_prod_replica_set".createEvent ('_prod_replica_set', 'DROP_PATH', p_pa_server::text, p_pa_client::text); end;
Process DROP_PATH event to drop path from pa_server to pa_client
declare p_pa_server alias for $1; p_pa_client alias for $2; begin -- ---- -- Grab the central configuration lock -- ---- lock table "_prod_replica_set".sl_config_lock; -- ---- -- Remove any dangling sl_listen entries with the server -- as provider and the client as receiver. This must have -- been cleared out before, but obviously was not. -- ---- delete from "_prod_replica_set".sl_listen where li_provider = p_pa_server and li_receiver = p_pa_client; delete from "_prod_replica_set".sl_path where pa_server = p_pa_server and pa_client = p_pa_client; if found then -- Rewrite sl_listen table perform "_prod_replica_set".RebuildListenEntries(); return 1; else -- Rewrite sl_listen table perform "_prod_replica_set".RebuildListenEntries(); return 0; end if; end;
Process DROP_SET event to drop replication of set set_id. This involves restoring the original triggers and rules and removing all traces of the set configuration: sequences, tables, subscribers, syncs, and the set itself.
declare p_set_id alias for $1; v_origin int4; begin -- ---- -- Grab the central configuration lock -- ---- lock table "_prod_replica_set".sl_config_lock; -- ---- -- Check that the set exists and originates here -- ---- select set_origin into v_origin from "_prod_replica_set".sl_set where set_id = p_set_id; if not found then raise exception 'Slony-I: set % not found', p_set_id; end if; if v_origin != "_prod_replica_set".getLocalNodeId('_prod_replica_set') then raise exception 'Slony-I: set % does not originate on local node', p_set_id; end if; -- ---- -- Call the internal drop set functionality and generate the event -- ---- perform "_prod_replica_set".dropSet_int(p_set_id); return "_prod_replica_set".createEvent('_prod_replica_set', 'DROP_SET', p_set_id::text); end;
declare p_set_id alias for $1; v_tab_row record; begin -- ---- -- Grab the central configuration lock -- ---- lock table "_prod_replica_set".sl_config_lock; -- ---- -- Restore all tables original triggers and rules and remove -- our replication stuff. -- ---- for v_tab_row in select tab_id from "_prod_replica_set".sl_table where tab_set = p_set_id order by tab_id loop perform "_prod_replica_set".alterTableRestore(v_tab_row.tab_id); perform "_prod_replica_set".tableDropKey(v_tab_row.tab_id); end loop; -- ---- -- Remove all traces of the set configuration -- ---- delete from "_prod_replica_set".sl_sequence where seq_set = p_set_id; delete from "_prod_replica_set".sl_table where tab_set = p_set_id; delete from "_prod_replica_set".sl_subscribe where sub_set = p_set_id; delete from "_prod_replica_set".sl_setsync where ssy_setid = p_set_id; delete from "_prod_replica_set".sl_set where set_id = p_set_id; -- Regenerate sl_listen since we revised the subscriptions perform "_prod_replica_set".RebuildListenEntries(); -- Run addPartialLogIndices() to try to add indices to unused sl_log_? table perform "_prod_replica_set".addPartialLogIndices(); return p_set_id; end;
dropTrigger (trig_tabid, trig_tgname) Submits DROP_TRIGGER event to indicate that trigger trig_tgname on replicated table trig_tabid WILL be disabled.
declare p_trig_tabid alias for $1; p_trig_tgname alias for $2; begin perform "_prod_replica_set".dropTrigger_int(p_trig_tabid, p_trig_tgname); return "_prod_replica_set".createEvent('_prod_replica_set', 'DROP_TRIGGER', p_trig_tabid::text, p_trig_tgname::text); end;
dropTrigger_int (trig_tabid, trig_tgname) Processes DROP_TRIGGER event to make sure that trigger trig_tgname on replicated table trig_tabid IS disabled.
declare p_trig_tabid alias for $1; p_trig_tgname alias for $2; v_tab_altered boolean; begin -- ---- -- Grab the central configuration lock -- ---- lock table "_prod_replica_set".sl_config_lock; -- ---- -- Get the current table status (altered or not) -- ---- select tab_altered into v_tab_altered from "_prod_replica_set".sl_table where tab_id = p_trig_tabid; if not found then -- ---- -- Not found is no hard error here, because that might -- mean that we are not subscribed to that set -- ---- return 0; end if; -- ---- -- If the table is modified for replication, restore the original state -- ---- if v_tab_altered then perform "_prod_replica_set".alterTableRestore(p_trig_tabid); end if; -- ---- -- Remove the entry from sl_trigger -- ---- delete from "_prod_replica_set".sl_trigger where trig_tabid = p_trig_tabid and trig_tgname = p_trig_tgname; -- ---- -- Put the table back into replicated state if it was -- ---- if v_tab_altered then perform "_prod_replica_set".alterTableForReplication(p_trig_tabid); end if; return p_trig_tabid; end;
no_id - Node ID # Generate the ENABLE_NODE event for node no_id
declare p_no_id alias for $1; v_local_node_id int4; v_node_row record; begin -- ---- -- Grab the central configuration lock -- ---- lock table "_prod_replica_set".sl_config_lock; -- ---- -- Check that we are the node to activate and that we are -- currently disabled. -- ---- v_local_node_id := "_prod_replica_set".getLocalNodeId('_prod_replica_set'); select * into v_node_row from "_prod_replica_set".sl_node where no_id = p_no_id for update; if not found then raise exception 'Slony-I: node % not found', p_no_id; end if; if v_node_row.no_active then raise exception 'Slony-I: node % is already active', p_no_id; end if; -- ---- -- Activate this node and generate the ENABLE_NODE event -- ---- perform "_prod_replica_set".enableNode_int (p_no_id); return "_prod_replica_set".createEvent('_prod_replica_set', 'ENABLE_NODE', p_no_id::text); end;
no_id - Node ID # Internal function to process the ENABLE_NODE event for node no_id
declare p_no_id alias for $1; v_local_node_id int4; v_node_row record; v_sub_row record; begin -- ---- -- Grab the central configuration lock -- ---- lock table "_prod_replica_set".sl_config_lock; -- ---- -- Check that the node is inactive -- ---- select * into v_node_row from "_prod_replica_set".sl_node where no_id = p_no_id for update; if not found then raise exception 'Slony-I: node % not found', p_no_id; end if; if v_node_row.no_active then return p_no_id; end if; -- ---- -- Activate the node and generate sl_confirm status rows for it. -- ---- update "_prod_replica_set".sl_node set no_active = 't' where no_id = p_no_id; insert into "_prod_replica_set".sl_confirm (con_origin, con_received, con_seqno) select no_id, p_no_id, 0 from "_prod_replica_set".sl_node where no_id != p_no_id and no_active; insert into "_prod_replica_set".sl_confirm (con_origin, con_received, con_seqno) select p_no_id, no_id, 0 from "_prod_replica_set".sl_node where no_id != p_no_id and no_active; -- ---- -- Generate ENABLE_SUBSCRIPTION events for all sets that -- origin here and are subscribed by the just enabled node. -- ---- v_local_node_id := "_prod_replica_set".getLocalNodeId('_prod_replica_set'); for v_sub_row in select SUB.sub_set, SUB.sub_provider from "_prod_replica_set".sl_set S, "_prod_replica_set".sl_subscribe SUB where S.set_origin = v_local_node_id and S.set_id = SUB.sub_set and SUB.sub_receiver = p_no_id for update of S loop perform "_prod_replica_set".enableSubscription (v_sub_row.sub_set, v_sub_row.sub_provider, p_no_id); end loop; return p_no_id; end;
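For illustration only (node 2 is a placeholder, and this call is normally issued by the administrative tools rather than by hand), activating a stored but still inactive node would look like:
select "_prod_replica_set".enableNode(2);  -- raises ENABLE_NODE after enableNode_int() marks the node active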
enableSubscription (sub_set, sub_provider, sub_receiver) Indicates that sub_receiver intends to subscribe to set sub_set from sub_provider. All of the work is done by the internal function enableSubscription_int (sub_set, sub_provider, sub_receiver).
declare p_sub_set alias for $1; p_sub_provider alias for $2; p_sub_receiver alias for $3; begin return "_prod_replica_set".enableSubscription_int (p_sub_set, p_sub_provider, p_sub_receiver); end;
enableSubscription_int (sub_set, sub_provider, sub_receiver) Internal function to enable subscription of node sub_receiver to set sub_set via node sub_provider. slon does most of the work; all we need to do here is record that it happened. The function updates sl_subscribe, marking the subscription as active.
declare p_sub_set alias for $1; p_sub_provider alias for $2; p_sub_receiver alias for $3; v_n int4; begin -- ---- -- Grab the central configuration lock -- ---- lock table "_prod_replica_set".sl_config_lock; -- ---- -- The real work is done in the replication engine. All -- we have to do here is remembering that it happened. -- ---- -- ---- -- Well, not only ... we might be missing an important event here -- ---- if not exists (select true from "_prod_replica_set".sl_path where pa_server = p_sub_provider and pa_client = p_sub_receiver) then insert into "_prod_replica_set".sl_path (pa_server, pa_client, pa_conninfo, pa_connretry) values (p_sub_provider, p_sub_receiver, '<event pending>', 10); end if; update "_prod_replica_set".sl_subscribe set sub_active = 't' where sub_set = p_sub_set and sub_receiver = p_sub_receiver; get diagnostics v_n = row_count; if v_n = 0 then insert into "_prod_replica_set".sl_subscribe (sub_set, sub_provider, sub_receiver, sub_forward, sub_active) values (p_sub_set, p_sub_provider, p_sub_receiver, false, true); end if; -- Rewrite sl_listen table perform "_prod_replica_set".RebuildListenEntries(); return p_sub_set; end;
Initiate failover from failed_node to backup_node. This function must be called on all nodes, after which the caller waits for all node daemons to restart.
declare p_failed_node alias for $1; p_backup_node alias for $2; v_row record; v_row2 record; v_n int4; begin -- ---- -- Grab the central configuration lock -- ---- lock table "_prod_replica_set".sl_config_lock; -- ---- -- All consistency checks first -- Check that every node that has a path to the failed node -- also has a path to the backup node. -- ---- for v_row in select P.pa_client from "_prod_replica_set".sl_path P where P.pa_server = p_failed_node and P.pa_client <> p_backup_node and not exists (select true from "_prod_replica_set".sl_path PP where PP.pa_server = p_backup_node and PP.pa_client = P.pa_client) loop raise exception 'Slony-I: cannot failover - node % has no path to the backup node', v_row.pa_client; end loop; -- ---- -- Check all sets originating on the failed node -- ---- for v_row in select set_id from "_prod_replica_set".sl_set where set_origin = p_failed_node loop -- ---- -- Check that the backup node is subscribed to all sets -- that originate on the failed node -- ---- select into v_row2 sub_forward, sub_active from "_prod_replica_set".sl_subscribe where sub_set = v_row.set_id and sub_receiver = p_backup_node; if not found then raise exception 'Slony-I: cannot failover - node % is not subscribed to set %', p_backup_node, v_row.set_id; end if; -- ---- -- Check that the subscription is active -- ---- if not v_row2.sub_active then raise exception 'Slony-I: cannot failover - subscription for set % is not active', v_row.set_id; end if; -- ---- -- If there are other subscribers, the backup node needs to -- be a forwarder too. -- ---- select into v_n count(*) from "_prod_replica_set".sl_subscribe where sub_set = v_row.set_id and sub_receiver <> p_backup_node; if v_n > 0 and not v_row2.sub_forward then raise exception 'Slony-I: cannot failover - node % is not a forwarder of set %', p_backup_node, v_row.set_id; end if; end loop; -- ---- -- Terminate all connections of the failed node the hard way -- ---- perform "_prod_replica_set".terminateNodeConnections(p_failed_node); -- ---- -- Move the sets -- ---- for v_row in select S.set_id, (select count(*) from "_prod_replica_set".sl_subscribe SUB where S.set_id = SUB.sub_set and SUB.sub_receiver <> p_backup_node and SUB.sub_provider = p_failed_node) as num_direct_receivers from "_prod_replica_set".sl_set S where S.set_origin = p_failed_node for update loop -- ---- -- If the backup node is the only direct subscriber ... -- ---- if v_row.num_direct_receivers = 0 then raise notice 'failedNode: set % has no other direct receivers - move now', v_row.set_id; -- ---- -- backup_node is the only direct subscriber, move the set -- right now. On the backup node itself that includes restoring -- all user mode triggers, removing the protection trigger, -- adding the log trigger, removing the subscription and the -- obsolete setsync status. 
-- ---- if p_backup_node = "_prod_replica_set".getLocalNodeId('_prod_replica_set') then for v_row2 in select * from "_prod_replica_set".sl_table where tab_set = v_row.set_id loop perform "_prod_replica_set".alterTableRestore(v_row2.tab_id); end loop; update "_prod_replica_set".sl_set set set_origin = p_backup_node where set_id = v_row.set_id; delete from "_prod_replica_set".sl_setsync where ssy_setid = v_row.set_id; for v_row2 in select * from "_prod_replica_set".sl_table where tab_set = v_row.set_id loop perform "_prod_replica_set".alterTableForReplication(v_row2.tab_id); end loop; end if; delete from "_prod_replica_set".sl_subscribe where sub_set = v_row.set_id and sub_receiver = p_backup_node; else raise notice 'failedNode: set % has other direct receivers - change providers only', v_row.set_id; -- ---- -- Backup node is not the only direct subscriber. This -- means that at this moment, we redirect all direct -- subscribers to receive from the backup node, and the -- backup node itself to receive from another one. -- The admin utility will wait for the slon engine to -- restart and then call failedNode2() on the node with -- the highest SYNC and redirect this to it on -- backup node later. -- ---- update "_prod_replica_set".sl_subscribe set sub_provider = (select min(SS.sub_receiver) from "_prod_replica_set".sl_subscribe SS where SS.sub_set = v_row.set_id and SS.sub_provider = p_failed_node and SS.sub_receiver <> p_backup_node and SS.sub_forward) where sub_set = v_row.set_id and sub_receiver = p_backup_node; update "_prod_replica_set".sl_subscribe set sub_provider = p_backup_node where sub_set = v_row.set_id and sub_provider = p_failed_node and sub_receiver <> p_backup_node; end if; end loop; -- Rewrite sl_listen table perform "_prod_replica_set".RebuildListenEntries(); -- Run addPartialLogIndices() to try to add indices to unused sl_log_? table perform "_prod_replica_set".addPartialLogIndices(); -- ---- -- Make sure the node daemon will restart -- ---- notify "_prod_replica_set_Restart"; -- ---- -- That is it - so far. -- ---- return p_failed_node; end;
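A hedged sketch of driving failover by hand, assuming node 1 has failed and node 2 is the designated backup; the slonik FAILOVER command is the usual entry point, and as noted above the call must be made on every surviving node:
select "_prod_replica_set".failedNode(1, 2);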
FUNCTION failedNode2 (failed_node, backup_node, set_id, ev_seqno, ev_seqfake) On the node holding the highest event sequence number from the failed node, fake the FAILOVER_SET event.
declare p_failed_node alias for $1; p_backup_node alias for $2; p_set_id alias for $3; p_ev_seqno alias for $4; p_ev_seqfake alias for $5; v_row record; begin -- ---- -- Grab the central configuration lock -- ---- lock table "_prod_replica_set".sl_config_lock; select * into v_row from "_prod_replica_set".sl_event where ev_origin = p_failed_node and ev_seqno = p_ev_seqno; if not found then raise exception 'Slony-I: event %,% not found', p_failed_node, p_ev_seqno; end if; insert into "_prod_replica_set".sl_event (ev_origin, ev_seqno, ev_timestamp, ev_minxid, ev_maxxid, ev_xip, ev_type, ev_data1, ev_data2, ev_data3) values (p_failed_node, p_ev_seqfake, CURRENT_TIMESTAMP, v_row.ev_minxid, v_row.ev_maxxid, v_row.ev_xip, 'FAILOVER_SET', p_failed_node::text, p_backup_node::text, p_set_id::text); insert into "_prod_replica_set".sl_confirm (con_origin, con_received, con_seqno, con_timestamp) values (p_failed_node, "_prod_replica_set".getLocalNodeId('_prod_replica_set'), p_ev_seqfake, CURRENT_TIMESTAMP); notify "_prod_replica_set_Event"; notify "_prod_replica_set_Confirm"; notify "_prod_replica_set_Restart"; perform "_prod_replica_set".failoverSet_int(p_failed_node, p_backup_node, p_set_id, p_ev_seqfake); return p_ev_seqfake; end;
FUNCTION failoverSet_int (failed_node, backup_node, set_id) Finish failover for one set.
declare p_failed_node alias for $1; p_backup_node alias for $2; p_set_id alias for $3; v_row record; v_last_sync int8; begin -- ---- -- Grab the central configuration lock -- ---- lock table "_prod_replica_set".sl_config_lock; -- ---- -- Change the origin of the set now to the backup node. -- On the backup node this includes changing all the -- trigger and protection stuff -- ---- if p_backup_node = "_prod_replica_set".getLocalNodeId('_prod_replica_set') then for v_row in select * from "_prod_replica_set".sl_table where tab_set = p_set_id loop perform "_prod_replica_set".alterTableRestore(v_row.tab_id); end loop; delete from "_prod_replica_set".sl_setsync where ssy_setid = p_set_id; delete from "_prod_replica_set".sl_subscribe where sub_set = p_set_id and sub_receiver = p_backup_node; update "_prod_replica_set".sl_set set set_origin = p_backup_node where set_id = p_set_id; for v_row in select * from "_prod_replica_set".sl_table where tab_set = p_set_id loop perform "_prod_replica_set".alterTableForReplication(v_row.tab_id); end loop; insert into "_prod_replica_set".sl_event (ev_origin, ev_seqno, ev_timestamp, ev_minxid, ev_maxxid, ev_xip, ev_type, ev_data1, ev_data2, ev_data3) values (p_backup_node, "pg_catalog".nextval('"_prod_replica_set".sl_event_seq'), CURRENT_TIMESTAMP, '0', '0', '', 'ACCEPT_SET', p_set_id::text, p_failed_node::text, p_backup_node::text); else delete from "_prod_replica_set".sl_subscribe where sub_set = p_set_id and sub_receiver = p_backup_node; update "_prod_replica_set".sl_set set set_origin = p_backup_node where set_id = p_set_id; end if; -- Rewrite sl_listen table perform "_prod_replica_set".RebuildListenEntries(); -- ---- -- If we are a subscriber of the set ourself, change our -- setsync status to reflect the new set origin. -- ---- if exists (select true from "_prod_replica_set".sl_subscribe where sub_set = p_set_id and sub_receiver = "_prod_replica_set".getLocalNodeId( '_prod_replica_set')) then delete from "_prod_replica_set".sl_setsync where ssy_setid = p_set_id; select coalesce(max(ev_seqno), 0) into v_last_sync from "_prod_replica_set".sl_event where ev_origin = p_backup_node and ev_type = 'SYNC'; if v_last_sync > 0 then insert into "_prod_replica_set".sl_setsync (ssy_setid, ssy_origin, ssy_seqno, ssy_minxid, ssy_maxxid, ssy_xip, ssy_action_list) select p_set_id, p_backup_node, v_last_sync, ev_minxid, ev_maxxid, ev_xip, NULL from "_prod_replica_set".sl_event where ev_origin = p_backup_node and ev_seqno = v_last_sync; else insert into "_prod_replica_set".sl_setsync (ssy_setid, ssy_origin, ssy_seqno, ssy_minxid, ssy_maxxid, ssy_xip, ssy_action_list) values (p_set_id, p_backup_node, '0', '0', '0', '', NULL); end if; end if; return p_failed_node; end;
FUNCTION failoverSet_int (failed_node, backup_node, set_id, wait_seqno) Finish failover for one set.
declare p_failed_node alias for $1; p_backup_node alias for $2; p_set_id alias for $3; p_wait_seqno alias for $4; v_row record; v_last_sync int8; begin -- ---- -- Grab the central configuration lock -- ---- lock table "_prod_replica_set".sl_config_lock; -- ---- -- Change the origin of the set now to the backup node. -- On the backup node this includes changing all the -- trigger and protection stuff -- ---- if p_backup_node = "_prod_replica_set".getLocalNodeId('_prod_replica_set') then for v_row in select * from "_prod_replica_set".sl_table where tab_set = p_set_id loop perform "_prod_replica_set".alterTableRestore(v_row.tab_id); end loop; delete from "_prod_replica_set".sl_setsync where ssy_setid = p_set_id; delete from "_prod_replica_set".sl_subscribe where sub_set = p_set_id and sub_receiver = p_backup_node; update "_prod_replica_set".sl_set set set_origin = p_backup_node where set_id = p_set_id; for v_row in select * from "_prod_replica_set".sl_table where tab_set = p_set_id loop perform "_prod_replica_set".alterTableForReplication(v_row.tab_id); end loop; insert into "_prod_replica_set".sl_event (ev_origin, ev_seqno, ev_timestamp, ev_minxid, ev_maxxid, ev_xip, ev_type, ev_data1, ev_data2, ev_data3, ev_data4) values (p_backup_node, "pg_catalog".nextval('"_prod_replica_set".sl_event_seq'), CURRENT_TIMESTAMP, '0', '0', '', 'ACCEPT_SET', p_set_id::text, p_failed_node::text, p_backup_node::text, p_wait_seqno::text); else delete from "_prod_replica_set".sl_subscribe where sub_set = p_set_id and sub_receiver = p_backup_node; update "_prod_replica_set".sl_set set set_origin = p_backup_node where set_id = p_set_id; end if; -- Rewrite sl_listen table perform "_prod_replica_set".RebuildListenEntries(); -- ---- -- If we are a subscriber of the set ourself, change our -- setsync status to reflect the new set origin. -- ---- if exists (select true from "_prod_replica_set".sl_subscribe where sub_set = p_set_id and sub_receiver = "_prod_replica_set".getLocalNodeId( '_prod_replica_set')) then delete from "_prod_replica_set".sl_setsync where ssy_setid = p_set_id; select coalesce(max(ev_seqno), 0) into v_last_sync from "_prod_replica_set".sl_event where ev_origin = p_backup_node and ev_type = 'SYNC'; if v_last_sync > 0 then insert into "_prod_replica_set".sl_setsync (ssy_setid, ssy_origin, ssy_seqno, ssy_minxid, ssy_maxxid, ssy_xip, ssy_action_list) select p_set_id, p_backup_node, v_last_sync, ev_minxid, ev_maxxid, ev_xip, NULL from "_prod_replica_set".sl_event where ev_origin = p_backup_node and ev_seqno = v_last_sync; else insert into "_prod_replica_set".sl_setsync (ssy_setid, ssy_origin, ssy_seqno, ssy_minxid, ssy_maxxid, ssy_xip, ssy_action_list) values (p_set_id, p_backup_node, '0', '0', '0', '', NULL); end if; end if; return p_failed_node; end;
Reenable index maintenance and reindex the table
declare p_tab_id alias for $1; v_tab_oid oid; v_tab_fqname text; begin -- ---- -- Get the tables OID and fully qualified name -- --- select PGC.oid, "_prod_replica_set".slon_quote_brute(PGN.nspname) || '.' || "_prod_replica_set".slon_quote_brute(PGC.relname) as tab_fqname into v_tab_oid, v_tab_fqname from "_prod_replica_set".sl_table T, "pg_catalog".pg_class PGC, "pg_catalog".pg_namespace PGN where T.tab_id = p_tab_id and T.tab_reloid = PGC.oid and PGC.relnamespace = PGN.oid; if not found then raise exception 'Table with ID % not found in sl_table', p_tab_id; end if; -- ---- -- Reenable indexes and reindex the table. -- ---- update pg_class set relhasindex = 't' where oid = v_tab_oid; execute 'reindex table ' || "_prod_replica_set".slon_quote_input(v_tab_fqname); return 1; end;
forwardConfirm (p_con_origin, p_con_received, p_con_seqno, p_con_timestamp) Confirms (recorded in sl_confirm) that items from p_con_origin up to p_con_seqno have been received by node p_con_received as of p_con_timestamp, and raises an event to forward this confirmation.
declare p_con_origin alias for $1; p_con_received alias for $2; p_con_seqno alias for $3; p_con_timestamp alias for $4; v_max_seqno bigint; begin select into v_max_seqno coalesce(max(con_seqno), 0) from "_prod_replica_set".sl_confirm where con_origin = p_con_origin and con_received = p_con_received; if v_max_seqno < p_con_seqno then insert into "_prod_replica_set".sl_confirm (con_origin, con_received, con_seqno, con_timestamp) values (p_con_origin, p_con_received, p_con_seqno, p_con_timestamp); notify "_prod_replica_set_Confirm"; v_max_seqno = p_con_seqno; end if; return v_max_seqno; end;
Generate a sync event if there has not been one in the requested interval.
declare p_interval alias for $1; v_node_row record; BEGIN select 1 into v_node_row from "_prod_replica_set".sl_event where ev_type = 'SYNC' and ev_origin = "_prod_replica_set".getLocalNodeId('_prod_replica_set') and ev_timestamp > now() - p_interval limit 1; if not found then -- If there has been no SYNC in the last interval, then push one perform "_prod_replica_set".createEvent('_prod_replica_set', 'SYNC', NULL); return 1; else return 0; end if; end;
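For example, to push a SYNC only when the local origin has not generated one within the last 30 seconds (the interval is arbitrary here):
select "_prod_replica_set".generate_sync_event('30 seconds'::interval);  -- returns 1 if a SYNC was created, 0 otherwise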
_Slony_I_getCurrentXid
Returns the node ID of the node being serviced on the local database
_Slony_I_getLocalNodeId
_Slony_I_getMaxXid
_Slony_I_getMinXid
Returns the compiled-in version number of the Slony-I shared object
_Slony_I_getModuleVersion
not yet documented
_Slony_I_getSessionRole
no_id - Node ID # no_comment - Human-oriented comment Initializes the new node, no_id
declare p_local_node_id alias for $1; p_comment alias for $2; v_old_node_id int4; v_first_log_no int4; v_event_seq int8; begin -- ---- -- Grab the central configuration lock -- ---- lock table "_prod_replica_set".sl_config_lock; -- ---- -- Make sure this node is uninitialized or got reset -- ---- select last_value::int4 into v_old_node_id from "_prod_replica_set".sl_local_node_id; if v_old_node_id != -1 then raise exception 'Slony-I: This node is already initialized'; end if; -- ---- -- Set sl_local_node_id to the requested value and add our -- own system to sl_node. -- ---- perform setval('"_prod_replica_set".sl_local_node_id', p_local_node_id); perform setval('"_prod_replica_set".sl_rowid_seq', p_local_node_id::int8 * '1000000000000000'::int8); perform "_prod_replica_set".storeNode_int (p_local_node_id, p_comment, false); return p_local_node_id; end;
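A minimal sketch, with a hypothetical node ID and comment, of what the administrative tools run against a freshly installed schema:
select "_prod_replica_set".initializeLocalNode(1, 'Primary node');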
Send a signal to a postgres process. Requires superuser rights
_Slony_I_killBackend
Trigger function to prevent modifications to a table before and after a moveSet()
_Slony_I_lockedSet
lockSet(set_id) Add a special trigger to all tables of a set that blocks any modification to them while the set is locked.
declare p_set_id alias for $1; v_local_node_id int4; v_set_row record; v_tab_row record; begin -- ---- -- Grab the central configuration lock -- ---- lock table "_prod_replica_set".sl_config_lock; -- ---- -- Check that the set exists and that we are the origin -- and that it is not already locked. -- ---- v_local_node_id := "_prod_replica_set".getLocalNodeId('_prod_replica_set'); select * into v_set_row from "_prod_replica_set".sl_set where set_id = p_set_id for update; if not found then raise exception 'Slony-I: set % not found', p_set_id; end if; if v_set_row.set_origin <> v_local_node_id then raise exception 'Slony-I: set % does not originate on local node', p_set_id; end if; if v_set_row.set_locked notnull then raise exception 'Slony-I: set % is already locked', p_set_id; end if; -- ---- -- Place the lockedSet trigger on all tables in the set. -- ---- for v_tab_row in select T.tab_id, "_prod_replica_set".slon_quote_brute(PGN.nspname) || '.' || "_prod_replica_set".slon_quote_brute(PGC.relname) as tab_fqname from "_prod_replica_set".sl_table T, "pg_catalog".pg_class PGC, "pg_catalog".pg_namespace PGN where T.tab_set = p_set_id and T.tab_reloid = PGC.oid and PGC.relnamespace = PGN.oid order by tab_id loop execute 'create trigger "_prod_replica_set_lockedset_' || v_tab_row.tab_id || '" before insert or update or delete on ' || v_tab_row.tab_fqname || ' for each row execute procedure "_prod_replica_set".lockedSet (''_prod_replica_set'');'; end loop; -- ---- -- Remember our snapshots xmax as for the set locking -- ---- update "_prod_replica_set".sl_set set set_locked = "_prod_replica_set".getMaxXid() where set_id = p_set_id; return p_set_id; end;
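Usage sketch: lockSet() is issued on the set origin just before moving the set; with a hypothetical set ID the hand-driven sequence would begin with:
select "_prod_replica_set".lockSet(1);  -- installs the lockedSet trigger on every table in set 1
-- wait until transactions that were open when the lock was taken have finished, then call moveSet()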
logswitch_finish() Attempt to finalize a log table switch in progress
DECLARE v_current_status int4; v_dummy record; BEGIN -- ---- -- Grab the central configuration lock to prevent race conditions -- while changing the sl_log_status sequence value. -- ---- lock table "_prod_replica_set".sl_config_lock; -- ---- -- Get the current log status. -- ---- select last_value into v_current_status from "_prod_replica_set".sl_log_status; -- ---- -- status value 0 or 1 means that there is no log switch in progress -- ---- if v_current_status = 0 or v_current_status = 1 then return 0; end if; -- ---- -- status = 2: sl_log_1 active, cleanup sl_log_2 -- ---- if v_current_status = 2 then -- ---- -- The cleanup thread calls us after it did the delete and -- vacuum of both log tables. If sl_log_2 is empty now, we -- can truncate it and the log switch is done. -- ---- for v_dummy in select 1 from "_prod_replica_set".sl_log_2 loop -- ---- -- Found a row ... log switch is still in progress. -- ---- raise notice 'Slony-I: log switch to sl_log_1 still in progress - sl_log_2 not truncated'; return -1; end loop; raise notice 'Slony-I: log switch to sl_log_1 complete - truncate sl_log_2'; truncate "_prod_replica_set".sl_log_2; if exists (select * from "pg_catalog".pg_class c, "pg_catalog".pg_namespace n, "pg_catalog".pg_attribute a where c.relname = 'sl_log_2' and n.oid = c.relnamespace and a.attrelid = c.oid and a.attname = 'oid') then execute 'alter table "_prod_replica_set".sl_log_2 set without oids;'; end if; perform "pg_catalog".setval('"_prod_replica_set".sl_log_status', 0); -- Run addPartialLogIndices() to try to add indices to unused sl_log_? table perform "_prod_replica_set".addPartialLogIndices(); return 1; end if; -- ---- -- status = 3: sl_log_2 active, cleanup sl_log_1 -- ---- if v_current_status = 3 then -- ---- -- The cleanup thread calls us after it did the delete and -- vacuum of both log tables. If sl_log_2 is empty now, we -- can truncate it and the log switch is done. -- ---- for v_dummy in select 1 from "_prod_replica_set".sl_log_1 loop -- ---- -- Found a row ... log switch is still in progress. -- ---- raise notice 'Slony-I: log switch to sl_log_2 still in progress - sl_log_1 not truncated'; return -1; end loop; raise notice 'Slony-I: log switch to sl_log_2 complete - truncate sl_log_1'; truncate "_prod_replica_set".sl_log_1; if exists (select * from "pg_catalog".pg_class c, "pg_catalog".pg_namespace n, "pg_catalog".pg_attribute a where c.relname = 'sl_log_1' and n.oid = c.relnamespace and a.attrelid = c.oid and a.attname = 'oid') then execute 'alter table "_prod_replica_set".sl_log_1 set without oids;'; end if; perform "pg_catalog".setval('"_prod_replica_set".sl_log_status', 1); -- Run addPartialLogIndices() to try to add indices to unused sl_log_? table perform "_prod_replica_set".addPartialLogIndices(); return 2; end if; END;
logswitch_start() Initiate a log table switch if none is in progress
DECLARE v_current_status int4; BEGIN -- ---- -- Grab the central configuration lock to prevent race conditions -- while changing the sl_log_status sequence value. -- ---- lock table "_prod_replica_set".sl_config_lock; -- ---- -- Get the current log status. -- ---- select last_value into v_current_status from "_prod_replica_set".sl_log_status; -- ---- -- status = 0: sl_log_1 active, sl_log_2 clean -- Initiate a switch to sl_log_2. -- ---- if v_current_status = 0 then perform "pg_catalog".setval('"_prod_replica_set".sl_log_status', 3); perform "_prod_replica_set".registry_set_timestamp( 'logswitch.laststart', now()::timestamp); raise notice 'Slony-I: Logswitch to sl_log_2 initiated'; return 2; end if; -- ---- -- status = 1: sl_log_2 active, sl_log_1 clean -- Initiate a switch to sl_log_1. -- ---- if v_current_status = 1 then perform "pg_catalog".setval('"_prod_replica_set".sl_log_status', 2); perform "_prod_replica_set".registry_set_timestamp( 'logswitch.laststart', now()::timestamp); raise notice 'Slony-I: Logswitch to sl_log_1 initiated'; return 1; end if; raise exception 'Previous logswitch still in progress'; END;
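For illustration, an administrator could start a switch and later attempt to finish it (normally the cleanup thread calls the finish step):
select "_prod_replica_set".logswitch_start();   -- returns 1 or 2 depending on which log table becomes active
select "_prod_replica_set".logswitch_finish();  -- returns -1 while the previously active log table still holds rows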
logswitch_weekly() Ensure a logswitch is done at least weekly
DECLARE v_now timestamp; v_now_dow int4; v_auto_dow int4; v_auto_time time; v_auto_ts timestamp; v_lastrun timestamp; v_laststart timestamp; v_days_since int4; BEGIN -- ---- -- Check that today is the day to run at all -- ---- v_auto_dow := "_prod_replica_set".registry_get_int4( 'logswitch_weekly.dow', 0); v_now := "pg_catalog".now(); v_now_dow := extract (DOW from v_now); if v_now_dow <> v_auto_dow then perform "_prod_replica_set".registry_set_timestamp( 'logswitch_weekly.lastrun', v_now); return 0; end if; -- ---- -- Check that the last run of this procedure was before and now is -- after the time we should automatically switch logs. -- ---- v_auto_time := "_prod_replica_set".registry_get_text( 'logswitch_weekly.time', '02:00'); v_auto_ts := current_date + v_auto_time; v_lastrun := "_prod_replica_set".registry_get_timestamp( 'logswitch_weekly.lastrun', 'epoch'); if v_lastrun >= v_auto_ts or v_now < v_auto_ts then perform "_prod_replica_set".registry_set_timestamp( 'logswitch_weekly.lastrun', v_now); return 0; end if; -- ---- -- This is the moment configured in dow+time. Check that the -- last logswitch was done more than 2 days ago. -- ---- v_laststart := "_prod_replica_set".registry_get_timestamp( 'logswitch.laststart', 'epoch'); v_days_since := extract (days from (v_now - v_laststart)); if v_days_since < 2 then perform "_prod_replica_set".registry_set_timestamp( 'logswitch_weekly.lastrun', v_now); return 0; end if; -- ---- -- Fire off an automatic logswitch -- ---- perform "_prod_replica_set".logswitch_start(); perform "_prod_replica_set".registry_set_timestamp( 'logswitch_weekly.lastrun', v_now); return 1; END;
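The weekly switch is controlled by the registry keys read in the body above; a hedged configuration example (day-of-week 6 and '03:30' are arbitrary values):
select "_prod_replica_set".registry_set_int4('logswitch_weekly.dow', 6);
select "_prod_replica_set".registry_set_text('logswitch_weekly.time', '03:30');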
This is the trigger, executed on the origin node, that causes updates to be recorded in sl_log_1/sl_log_2.
_Slony_I_logTrigger
Equivalent to 8.1+ ALTER FUNCTION ... STRICT
declare fun alias for $1; parms alias for $2; stmt text; begin stmt := 'ALTER FUNCTION "_prod_replica_set".' || fun || ' ' || parms || ' STRICT;'; execute stmt; return; end
Generate MERGE_SET event to request that sets be merged together. Both sets must exist and originate on the same node, and they must be subscribed to by the same set of nodes.
declare p_set_id alias for $1; p_add_id alias for $2; v_origin int4; begin -- ---- -- Grab the central configuration lock -- ---- lock table "_prod_replica_set".sl_config_lock; -- ---- -- Check that both sets exist and originate here -- ---- if p_set_id = p_add_id then raise exception 'Slony-I: merged set ids cannot be identical'; end if; select set_origin into v_origin from "_prod_replica_set".sl_set where set_id = p_set_id; if not found then raise exception 'Slony-I: set % not found', p_set_id; end if; if v_origin != "_prod_replica_set".getLocalNodeId('_prod_replica_set') then raise exception 'Slony-I: set % does not originate on local node', p_set_id; end if; select set_origin into v_origin from "_prod_replica_set".sl_set where set_id = p_add_id; if not found then raise exception 'Slony-I: set % not found', p_add_id; end if; if v_origin != "_prod_replica_set".getLocalNodeId('_prod_replica_set') then raise exception 'Slony-I: set % does not originate on local node', p_add_id; end if; -- ---- -- Check that both sets are subscribed by the same set of nodes -- ---- if exists (select true from "_prod_replica_set".sl_subscribe SUB1 where SUB1.sub_set = p_set_id and SUB1.sub_receiver not in (select SUB2.sub_receiver from "_prod_replica_set".sl_subscribe SUB2 where SUB2.sub_set = p_add_id)) then raise exception 'Slony-I: subscriber lists of set % and % are different', p_set_id, p_add_id; end if; if exists (select true from "_prod_replica_set".sl_subscribe SUB1 where SUB1.sub_set = p_add_id and SUB1.sub_receiver not in (select SUB2.sub_receiver from "_prod_replica_set".sl_subscribe SUB2 where SUB2.sub_set = p_set_id)) then raise exception 'Slony-I: subscriber lists of set % and % are different', p_add_id, p_set_id; end if; -- ---- -- Check that all ENABLE_SUBSCRIPTION events for the set are confirmed -- ---- if exists (select true from "_prod_replica_set".sl_event where ev_type = 'ENABLE_SUBSCRIPTION' and ev_data1 = p_add_id::text and ev_seqno > (select max(con_seqno) from "_prod_replica_set".sl_confirm where con_origin = ev_origin and con_received::text = ev_data3)) then raise exception 'Slony-I: set % has subscriptions in progress - cannot merge', p_add_id; end if; -- ---- -- Create a SYNC event, merge the sets, create a MERGE_SET event -- ---- perform "_prod_replica_set".createEvent('_prod_replica_set', 'SYNC', NULL); perform "_prod_replica_set".mergeSet_int(p_set_id, p_add_id); return "_prod_replica_set".createEvent('_prod_replica_set', 'MERGE_SET', p_set_id::text, p_add_id::text); end;
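A hedged example, assuming sets 1 and 2 both originate locally and have identical subscriber lists (the slonik MERGE SET command is the normal entry point):
select "_prod_replica_set".mergeSet(1, 2);  -- folds set 2 into set 1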
mergeSet_int(set_id, add_id) - Perform MERGE_SET event, merging all objects from set add_id into set set_id.
declare p_set_id alias for $1; p_add_id alias for $2; begin -- ---- -- Grab the central configuration lock -- ---- lock table "_prod_replica_set".sl_config_lock; update "_prod_replica_set".sl_sequence set seq_set = p_set_id where seq_set = p_add_id; update "_prod_replica_set".sl_table set tab_set = p_set_id where tab_set = p_add_id; delete from "_prod_replica_set".sl_subscribe where sub_set = p_add_id; delete from "_prod_replica_set".sl_setsync where ssy_setid = p_add_id; delete from "_prod_replica_set".sl_set where set_id = p_add_id; return p_set_id; end;
moveSet(set_id, new_origin) Generate MOVE_SET event to request that the origin for set set_id be moved to node new_origin
declare p_set_id alias for $1; p_new_origin alias for $2; v_local_node_id int4; v_set_row record; v_sub_row record; v_sync_seqno int8; v_lv_row record; begin -- ---- -- Grab the central configuration lock -- ---- lock table "_prod_replica_set".sl_config_lock; -- ---- -- Check that the set is locked and that this locking -- happened long enough ago. -- ---- v_local_node_id := "_prod_replica_set".getLocalNodeId('_prod_replica_set'); select * into v_set_row from "_prod_replica_set".sl_set where set_id = p_set_id for update; if not found then raise exception 'Slony-I: set % not found', p_set_id; end if; if v_set_row.set_origin <> v_local_node_id then raise exception 'Slony-I: set % does not originate on local node', p_set_id; end if; if v_set_row.set_locked isnull then raise exception 'Slony-I: set % is not locked', p_set_id; end if; if v_set_row.set_locked > "_prod_replica_set".getMinXid() then raise exception 'Slony-I: cannot move set % yet, transactions < % are still in progress', p_set_id, v_set_row.set_locked; end if; -- ---- -- Unlock the set -- ---- perform "_prod_replica_set".unlockSet(p_set_id); -- ---- -- Check that the new_origin is an active subscriber of the set -- ---- select * into v_sub_row from "_prod_replica_set".sl_subscribe where sub_set = p_set_id and sub_receiver = p_new_origin; if not found then raise exception 'Slony-I: set % is not subscribed by node %', p_set_id, p_new_origin; end if; if not v_sub_row.sub_active then raise exception 'Slony-I: subsctiption of node % for set % is inactive', p_new_origin, p_set_id; end if; -- ---- -- Reconfigure everything -- ---- perform "_prod_replica_set".moveSet_int(p_set_id, v_local_node_id, p_new_origin, 0); perform "_prod_replica_set".RebuildListenEntries(); -- ---- -- At this time we hold access exclusive locks for every table -- in the set. But we did move the set to the new origin, so the -- createEvent() we are doing now will not record the sequences. -- ---- v_sync_seqno := "_prod_replica_set".createEvent('_prod_replica_set', 'SYNC'); insert into "_prod_replica_set".sl_seqlog (seql_seqid, seql_origin, seql_ev_seqno, seql_last_value) select seq_id, v_local_node_id, v_sync_seqno, seq_last_value from "_prod_replica_set".sl_seqlastvalue where seq_set = p_set_id; -- ---- -- Finally we generate the real event -- ---- return "_prod_replica_set".createEvent('_prod_replica_set', 'MOVE_SET', p_set_id::text, v_local_node_id::text, p_new_origin::text); end;
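Sketch of a hand-driven origin move, assuming set 1 currently originates on the local node and node 3 is an active subscriber; the set must be locked first, as the body above verifies:
select "_prod_replica_set".lockSet(1);
select "_prod_replica_set".moveSet(1, 3);  -- normally issued via slonik LOCK SET / MOVE SET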
moveSet_int(set_id, old_origin, new_origin) Process the MOVE_SET event that moves the origin of set set_id from old_origin to node new_origin.
declare p_set_id alias for $1; p_old_origin alias for $2; p_new_origin alias for $3; v_local_node_id int4; v_tab_row record; v_sub_row record; v_sub_node int4; v_sub_last int4; v_sub_next int4; v_last_sync int8; begin -- ---- -- Grab the central configuration lock -- ---- lock table "_prod_replica_set".sl_config_lock; -- ---- -- Get our local node ID -- ---- v_local_node_id := "_prod_replica_set".getLocalNodeId('_prod_replica_set'); -- ---- -- If we are the old or new origin of the set, we need to -- remove the log trigger from all tables first. -- ---- if v_local_node_id = p_old_origin or v_local_node_id = p_new_origin then for v_tab_row in select tab_id from "_prod_replica_set".sl_table where tab_set = p_set_id order by tab_id loop perform "_prod_replica_set".alterTableRestore(v_tab_row.tab_id); end loop; end if; -- On the new origin, raise an event - ACCEPT_SET if v_local_node_id = p_new_origin then perform "_prod_replica_set".createEvent('_prod_replica_set', 'ACCEPT_SET', p_set_id, p_old_origin, p_new_origin); end if; -- ---- -- Next we have to reverse the subscription path -- ---- v_sub_last = p_new_origin; select sub_provider into v_sub_node from "_prod_replica_set".sl_subscribe where sub_set = p_set_id and sub_receiver = p_new_origin; if not found then raise exception 'Slony-I: subscription path broken in moveSet_int'; end if; while v_sub_node <> p_old_origin loop -- ---- -- Tracing node by node, the old receiver is now in -- v_sub_last and the old provider is in v_sub_node. -- ---- -- ---- -- Get the current provider of this node as next -- and change the provider to the previous one in -- the reverse chain. -- ---- select sub_provider into v_sub_next from "_prod_replica_set".sl_subscribe where sub_set = p_set_id and sub_receiver = v_sub_node for update; if not found then raise exception 'Slony-I: subscription path broken in moveSet_int'; end if; update "_prod_replica_set".sl_subscribe set sub_provider = v_sub_last where sub_set = p_set_id and sub_receiver = v_sub_node; v_sub_last = v_sub_node; v_sub_node = v_sub_next; end loop; -- ---- -- This includes creating a subscription for the old origin -- ---- insert into "_prod_replica_set".sl_subscribe (sub_set, sub_provider, sub_receiver, sub_forward, sub_active) values (p_set_id, v_sub_last, p_old_origin, true, true); if v_local_node_id = p_old_origin then select coalesce(max(ev_seqno), 0) into v_last_sync from "_prod_replica_set".sl_event where ev_origin = p_new_origin and ev_type = 'SYNC'; if v_last_sync > 0 then insert into "_prod_replica_set".sl_setsync (ssy_setid, ssy_origin, ssy_seqno, ssy_minxid, ssy_maxxid, ssy_xip, ssy_action_list) select p_set_id, p_new_origin, v_last_sync, ev_minxid, ev_maxxid, ev_xip, NULL from "_prod_replica_set".sl_event where ev_origin = p_new_origin and ev_seqno = v_last_sync; else insert into "_prod_replica_set".sl_setsync (ssy_setid, ssy_origin, ssy_seqno, ssy_minxid, ssy_maxxid, ssy_xip, ssy_action_list) values (p_set_id, p_new_origin, '0', '0', '0', '', NULL); end if; end if; -- ---- -- Now change the ownership of the set. -- ---- update "_prod_replica_set".sl_set set set_origin = p_new_origin where set_id = p_set_id; -- ---- -- On the new origin, delete the obsolete setsync information -- and the subscription. -- ---- if v_local_node_id = p_new_origin then delete from "_prod_replica_set".sl_setsync where ssy_setid = p_set_id; else if v_local_node_id <> p_old_origin then -- -- On every other node, change the setsync so that it will -- pick up from the new origins last known sync. 
-- delete from "_prod_replica_set".sl_setsync where ssy_setid = p_set_id; select coalesce(max(ev_seqno), 0) into v_last_sync from "_prod_replica_set".sl_event where ev_origin = p_new_origin and ev_type = 'SYNC'; if v_last_sync > 0 then insert into "_prod_replica_set".sl_setsync (ssy_setid, ssy_origin, ssy_seqno, ssy_minxid, ssy_maxxid, ssy_xip, ssy_action_list) select p_set_id, p_new_origin, v_last_sync, ev_minxid, ev_maxxid, ev_xip, NULL from "_prod_replica_set".sl_event where ev_origin = p_new_origin and ev_seqno = v_last_sync; else insert into "_prod_replica_set".sl_setsync (ssy_setid, ssy_origin, ssy_seqno, ssy_minxid, ssy_maxxid, ssy_xip, ssy_action_list) values (p_set_id, p_new_origin, '0', '0', '0', '', NULL); end if; end if; end if; delete from "_prod_replica_set".sl_subscribe where sub_set = p_set_id and sub_receiver = p_new_origin; -- Regenerate sl_listen since we revised the subscriptions perform "_prod_replica_set".RebuildListenEntries(); -- ---- -- If we are the new or old origin, we have to -- put all the tables into altered state again. -- ---- if v_local_node_id = p_old_origin or v_local_node_id = p_new_origin then for v_tab_row in select tab_id from "_prod_replica_set".sl_table where tab_set = p_set_id order by tab_id loop perform "_prod_replica_set".alterTableForReplication(v_tab_row.tab_id); end loop; end if; return p_set_id; end;
moveSet_int(set_id, old_origin, new_origin, wait_seqno) Process the MOVE_SET event that moves the origin of set set_id from old_origin to node new_origin.
declare p_set_id alias for $1; p_old_origin alias for $2; p_new_origin alias for $3; p_wait_seqno alias for $4; v_local_node_id int4; v_tab_row record; v_sub_row record; v_sub_node int4; v_sub_last int4; v_sub_next int4; v_last_sync int8; begin -- ---- -- Grab the central configuration lock -- ---- lock table "_prod_replica_set".sl_config_lock; -- ---- -- Get our local node ID -- ---- v_local_node_id := "_prod_replica_set".getLocalNodeId('_prod_replica_set'); -- ---- -- If we are the old or new origin of the set, we need to -- remove the log trigger from all tables first. -- ---- if v_local_node_id = p_old_origin or v_local_node_id = p_new_origin then for v_tab_row in select tab_id from "_prod_replica_set".sl_table where tab_set = p_set_id order by tab_id loop perform "_prod_replica_set".alterTableRestore(v_tab_row.tab_id); end loop; end if; -- On the new origin, raise an event - ACCEPT_SET if v_local_node_id = p_new_origin then -- Create a SYNC event as well so that the ACCEPT_SET has -- the same snapshot as the last SYNC generated by the new -- origin. This snapshot will be used by other nodes to -- finalize the setsync status. perform "_prod_replica_set".createEvent('_prod_replica_set', 'SYNC', NULL); perform "_prod_replica_set".createEvent('_prod_replica_set', 'ACCEPT_SET', p_set_id::text, p_old_origin::text, p_new_origin::text, p_wait_seqno::text); end if; -- ---- -- Next we have to reverse the subscription path -- ---- v_sub_last = p_new_origin; select sub_provider into v_sub_node from "_prod_replica_set".sl_subscribe where sub_set = p_set_id and sub_receiver = p_new_origin; if not found then raise exception 'Slony-I: subscription path broken in moveSet_int'; end if; while v_sub_node <> p_old_origin loop -- ---- -- Tracing node by node, the old receiver is now in -- v_sub_last and the old provider is in v_sub_node. -- ---- -- ---- -- Get the current provider of this node as next -- and change the provider to the previous one in -- the reverse chain. -- ---- select sub_provider into v_sub_next from "_prod_replica_set".sl_subscribe where sub_set = p_set_id and sub_receiver = v_sub_node for update; if not found then raise exception 'Slony-I: subscription path broken in moveSet_int'; end if; update "_prod_replica_set".sl_subscribe set sub_provider = v_sub_last where sub_set = p_set_id and sub_receiver = v_sub_node; v_sub_last = v_sub_node; v_sub_node = v_sub_next; end loop; -- ---- -- This includes creating a subscription for the old origin -- ---- insert into "_prod_replica_set".sl_subscribe (sub_set, sub_provider, sub_receiver, sub_forward, sub_active) values (p_set_id, v_sub_last, p_old_origin, true, true); if v_local_node_id = p_old_origin then select coalesce(max(ev_seqno), 0) into v_last_sync from "_prod_replica_set".sl_event where ev_origin = p_new_origin and ev_type = 'SYNC'; if v_last_sync > 0 then insert into "_prod_replica_set".sl_setsync (ssy_setid, ssy_origin, ssy_seqno, ssy_minxid, ssy_maxxid, ssy_xip, ssy_action_list) select p_set_id, p_new_origin, v_last_sync, ev_minxid, ev_maxxid, ev_xip, NULL from "_prod_replica_set".sl_event where ev_origin = p_new_origin and ev_seqno = v_last_sync; else insert into "_prod_replica_set".sl_setsync (ssy_setid, ssy_origin, ssy_seqno, ssy_minxid, ssy_maxxid, ssy_xip, ssy_action_list) values (p_set_id, p_new_origin, '0', '0', '0', '', NULL); end if; end if; -- ---- -- Now change the ownership of the set. 
-- ---- update "_prod_replica_set".sl_set set set_origin = p_new_origin where set_id = p_set_id; -- ---- -- On the new origin, delete the obsolete setsync information -- and the subscription. -- ---- if v_local_node_id = p_new_origin then delete from "_prod_replica_set".sl_setsync where ssy_setid = p_set_id; else if v_local_node_id <> p_old_origin then -- -- On every other node, change the setsync so that it will -- pick up from the new origins last known sync. -- delete from "_prod_replica_set".sl_setsync where ssy_setid = p_set_id; select coalesce(max(ev_seqno), 0) into v_last_sync from "_prod_replica_set".sl_event where ev_origin = p_new_origin and ev_type = 'SYNC'; if v_last_sync > 0 then insert into "_prod_replica_set".sl_setsync (ssy_setid, ssy_origin, ssy_seqno, ssy_minxid, ssy_maxxid, ssy_xip, ssy_action_list) select p_set_id, p_new_origin, v_last_sync, ev_minxid, ev_maxxid, ev_xip, NULL from "_prod_replica_set".sl_event where ev_origin = p_new_origin and ev_seqno = v_last_sync; else insert into "_prod_replica_set".sl_setsync (ssy_setid, ssy_origin, ssy_seqno, ssy_minxid, ssy_maxxid, ssy_xip, ssy_action_list) values (p_set_id, p_new_origin, '0', '0', '0', '', NULL); end if; end if; end if; delete from "_prod_replica_set".sl_subscribe where sub_set = p_set_id and sub_receiver = p_new_origin; -- Regenerate sl_listen since we revised the subscriptions perform "_prod_replica_set".RebuildListenEntries(); -- Run addPartialLogIndices() to try to add indices to unused sl_log_? table perform "_prod_replica_set".addPartialLogIndices(); -- ---- -- If we are the new or old origin, we have to -- put all the tables into altered state again. -- ---- if v_local_node_id = p_old_origin or v_local_node_id = p_new_origin then for v_tab_row in select tab_id from "_prod_replica_set".sl_table where tab_set = p_set_id order by tab_id loop perform "_prod_replica_set".alterTableForReplication(v_tab_row.tab_id); end loop; end if; return p_set_id; end;
Returns 1 if the database is running a version earlier than 7.4, and 0 otherwise.
select 0
Delete all data and suppress index maintenance
declare p_tab_id alias for $1; v_tab_oid oid; v_tab_fqname text; begin -- ---- -- Get the OID and fully qualified name for the table -- --- select PGC.oid, "_prod_replica_set".slon_quote_brute(PGN.nspname) || '.' || "_prod_replica_set".slon_quote_brute(PGC.relname) as tab_fqname into v_tab_oid, v_tab_fqname from "_prod_replica_set".sl_table T, "pg_catalog".pg_class PGC, "pg_catalog".pg_namespace PGN where T.tab_id = p_tab_id and T.tab_reloid = PGC.oid and PGC.relnamespace = PGN.oid; if not found then raise exception 'Table with ID % not found in sl_table', p_tab_id; end if; -- ---- -- Try using truncate to empty the table and fallback to -- delete on error. -- ---- execute 'truncate ' || "_prod_replica_set".slon_quote_input(v_tab_fqname); raise notice 'truncate of % succeeded', v_tab_fqname; -- ---- -- Setting pg_class.relhasindex to false will cause copy not to -- maintain any indexes. At the end of the copy we will reenable -- them and reindex the table. This bulk creating of indexes is -- faster. -- ---- update pg_class set relhasindex = 'f' where oid = v_tab_oid; return 1; exception when others then raise notice 'truncate of % failed - doing delete', v_tab_fqname; update pg_class set relhasindex = 'f' where oid = v_tab_oid; execute 'delete from only ' || "_prod_replica_set".slon_quote_input(v_tab_fqname); return 0; end;
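A hedged illustration with a hypothetical table ID; the subscription logic calls this before bulk-copying a table and afterwards re-enables and rebuilds the indexes with the reindexing function documented above:
select "_prod_replica_set".prepareTableForCopy(1001);  -- returns 1 if truncate succeeded, 0 if it fell back to delete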
ReachableFromNode(receiver, blacklist) Find all nodes that <receiver> can receive events from without using nodes in <blacklist> as a relay.
declare v_node alias for $1 ; v_blacklist alias for $2 ; v_ignore int4[] ; v_reachable_edge_last int4[] ; v_reachable_edge_new int4[] default '{}' ; v_server record ; begin v_reachable_edge_last := array[v_node] ; v_ignore := v_blacklist || array[v_node] ; return next v_node ; while v_reachable_edge_last != '{}' loop v_reachable_edge_new := '{}' ; for v_server in select pa_server as no_id from "_prod_replica_set".sl_path where pa_client = ANY(v_reachable_edge_last) and pa_server != ALL(v_ignore) loop if v_server.no_id != ALL(v_ignore) then v_ignore := v_ignore || array[v_server.no_id] ; v_reachable_edge_new := v_reachable_edge_new || array[v_server.no_id] ; return next v_server.no_id ; end if ; end loop ; v_reachable_edge_last := v_reachable_edge_new ; end loop ; return ; end ;
RebuildListenEntries() Invoked by various subscription and path modifying functions, this rewrites the sl_listen entries, adding in all the ones required to allow communications between nodes in the Slony-I cluster.
declare v_row record; skip boolean; begin -- First remove the entire configuration delete from "_prod_replica_set".sl_listen; -- Second populate the sl_listen configuration with a full -- network of all possible paths. insert into "_prod_replica_set".sl_listen (li_origin, li_provider, li_receiver) select pa_server, pa_server, pa_client from "_prod_replica_set".sl_path; while true loop insert into "_prod_replica_set".sl_listen (li_origin, li_provider, li_receiver) select distinct li_origin, pa_server, pa_client from "_prod_replica_set".sl_listen, "_prod_replica_set".sl_path where li_receiver = pa_server and li_origin <> pa_client except select li_origin, li_provider, li_receiver from "_prod_replica_set".sl_listen; if not found then exit; end if; end loop; -- We now replace specific event-origin,receiver combinations -- with a configuration that tries to avoid events arriving at -- a node before the data provider actually has the data ready. -- Loop over every possible pair of receiver and event origin for v_row in select N1.no_id as receiver, N2.no_id as origin from "_prod_replica_set".sl_node as N1, "_prod_replica_set".sl_node as N2 where N1.no_id <> N2.no_id loop skip := 'f'; -- 1st choice: -- If we use the event origin as a data provider for any -- set that originates on that very node, we are a direct -- subscriber to that origin and listen there only. if exists (select true from "_prod_replica_set".sl_set, "_prod_replica_set".sl_subscribe where set_origin = v_row.origin and sub_set = set_id and sub_provider = v_row.origin and sub_receiver = v_row.receiver and sub_active) then delete from "_prod_replica_set".sl_listen where li_origin = v_row.origin and li_receiver = v_row.receiver; insert into "_prod_replica_set".sl_listen (li_origin, li_provider, li_receiver) values (v_row.origin, v_row.origin, v_row.receiver); skip := 't'; end if; if skip then skip := 'f'; else -- 2nd choice: -- If we are subscribed to any set originating on this -- event origin, we want to listen on all data providers -- we use for this origin. We are a cascaded subscriber -- for sets from this node. if exists (select true from "_prod_replica_set".sl_set, "_prod_replica_set".sl_subscribe where set_origin = v_row.origin and sub_set = set_id and sub_receiver = v_row.receiver and sub_active) then delete from "_prod_replica_set".sl_listen where li_origin = v_row.origin and li_receiver = v_row.receiver; insert into "_prod_replica_set".sl_listen (li_origin, li_provider, li_receiver) select distinct set_origin, sub_provider, v_row.receiver from "_prod_replica_set".sl_set, "_prod_replica_set".sl_subscribe where set_origin = v_row.origin and sub_set = set_id and sub_receiver = v_row.receiver and sub_active; end if; end if; end loop ; return null ; end ;
RebuildListenEntriesOne(p_origin, p_receiver) Rebuild the sl_listen entries for one origin/receiver pair.
declare p_origin alias for $1; p_receiver alias for $2; v_row record; begin -- 1. If the receiver is subscribed to any set from the origin, -- listen on the same provider(s). for v_row in select distinct sub_provider from "_prod_replica_set".sl_subscribe, "_prod_replica_set".sl_set, "_prod_replica_set".sl_path where sub_set = set_id and set_origin = p_origin and sub_receiver = p_receiver and sub_provider = pa_server and sub_receiver = pa_client loop perform "_prod_replica_set".storeListen_int(p_origin, v_row.sub_provider, p_receiver); end loop; if found then return 1; end if; -- 2. If the receiver has a direct path to the provider, -- use that. if exists (select true from "_prod_replica_set".sl_path where pa_server = p_origin and pa_client = p_receiver) then perform "_prod_replica_set".storeListen_int(p_origin, p_origin, p_receiver); return 1; end if; -- 3. Listen on every node that is either provider for the -- receiver or is using the receiver as provider (follow the -- normal subscription routes). for v_row in select distinct provider from ( select sub_provider as provider from "_prod_replica_set".sl_subscribe where sub_receiver = p_receiver union select sub_receiver as provider from "_prod_replica_set".sl_subscribe where sub_provider = p_receiver and exists (select true from "_prod_replica_set".sl_path where pa_server = sub_receiver and pa_client = sub_provider) ) as S loop perform "_prod_replica_set".storeListen_int(p_origin, v_row.provider, p_receiver); end loop; if found then return 1; end if; -- 4. If all else fails - meaning there are no subscriptions to -- guide us to the right path - use every node we have a path -- to as provider. This normally only happens when the cluster -- is built or a new node added. This brute force fallback -- ensures that events will propagate if possible at all. for v_row in select pa_server as provider from "_prod_replica_set".sl_path where pa_client = p_receiver loop perform "_prod_replica_set".storeListen_int(p_origin, v_row.provider, p_receiver); end loop; if found then return 1; end if; return 0; end;
Register (uniquely) the node connection so that only one slon can service the node
declare p_nodeid alias for $1; begin insert into "_prod_replica_set".sl_nodelock (nl_nodeid, nl_backendpid) values (p_nodeid, pg_backend_pid()); return 0; end;
registry_get_int4(key, default) Get a registry value. If it is not present, set it to the default and return that.
DECLARE p_key alias for $1; p_default alias for $2; v_value int4; BEGIN select reg_int4 into v_value from "_prod_replica_set".sl_registry where reg_key = p_key; if not found then v_value = p_default; if p_default notnull then perform "_prod_replica_set".registry_set_int4(p_key, p_default); end if; else if v_value is null then raise exception 'Slony-I: registry key % is not an int4 value', p_key; end if; end if; return v_value; END;
registry_get_text(key, default) Get a registry value. If it is not present, set it to the default and return that.
DECLARE p_key alias for $1; p_default alias for $2; v_value text; BEGIN select reg_text into v_value from "_prod_replica_set".sl_registry where reg_key = p_key; if not found then v_value = p_default; if p_default notnull then perform "_prod_replica_set".registry_set_text(p_key, p_default); end if; else if v_value is null then raise exception 'Slony-I: registry key % is not a text value', p_key; end if; end if; return v_value; END;
registry_get_timestamp(key, default) Get a registry value. If it is not present, set it to the default and return that.
DECLARE p_key alias for $1; p_default alias for $2; v_value timestamp; BEGIN select reg_timestamp into v_value from "_prod_replica_set".sl_registry where reg_key = p_key; if not found then v_value = p_default; if p_default notnull then perform "_prod_replica_set".registry_set_timestamp(p_key, p_default); end if; else if v_value is null then raise exception 'Slony-I: registry key % is not a timestamp value', p_key; end if; end if; return v_value; END;
registry_set_int4(key, value) Set or delete a registry value
DECLARE p_key alias for $1; p_value alias for $2; BEGIN if p_value is null then delete from "_prod_replica_set".sl_registry where reg_key = p_key; else lock table "_prod_replica_set".sl_registry; update "_prod_replica_set".sl_registry set reg_int4 = p_value where reg_key = p_key; if not found then insert into "_prod_replica_set".sl_registry (reg_key, reg_int4) values (p_key, p_value); end if; end if; return p_value; END;
registry_set_text(key, value) Set or delete a registry value
DECLARE p_key alias for $1; p_value alias for $2; BEGIN if p_value is null then delete from "_prod_replica_set".sl_registry where reg_key = p_key; else lock table "_prod_replica_set".sl_registry; update "_prod_replica_set".sl_registry set reg_text = p_value where reg_key = p_key; if not found then insert into "_prod_replica_set".sl_registry (reg_key, reg_text) values (p_key, p_value); end if; end if; return p_value; END;
registry_set_timestamp(key, value) Set or delete a registry value
DECLARE p_key alias for $1; p_value alias for $2; BEGIN if p_value is null then delete from "_prod_replica_set".sl_registry where reg_key = p_key; else lock table "_prod_replica_set".sl_registry; update "_prod_replica_set".sl_registry set reg_timestamp = p_value where reg_key = p_key; if not found then insert into "_prod_replica_set".sl_registry (reg_key, reg_timestamp) values (p_key, p_value); end if; end if; return p_value; END;
Add a partition table to replication. tab_idxname is optional - if NULL, then we use the primary key. This function looks up replication configuration via the parent table.
declare p_tab_id alias for $1; p_nspname alias for $2; p_tabname alias for $3; p_idxname alias for $4; p_comment alias for $5; prec record; prec2 record; v_set_id int4; begin -- Look up the parent table; fail if it does not exist select c1.oid into prec from pg_catalog.pg_class c1, pg_catalog.pg_class c2, pg_catalog.pg_inherits i, pg_catalog.pg_namespace n where c1.oid = i.inhparent and c2.oid = i.inhrelid and n.oid = c2.relnamespace and n.nspname = p_nspname and c2.relname = p_tabname; if not found then raise exception 'replicate_partition: No parent table found for %.%!', p_nspname, p_tabname; end if; -- The parent table tells us what replication set to use select tab_set into prec2 from "_prod_replica_set".sl_table where tab_reloid = prec.oid; if not found then raise exception 'replicate_partition: Parent table % for new partition %.% is not replicated!', prec.oid, p_nspname, p_tabname; end if; v_set_id := prec2.tab_set; -- Now, we have all the parameters necessary to run add_empty_table_to_replication... return "_prod_replica_set".add_empty_table_to_replication(v_set_id, p_tab_id, p_nspname, p_tabname, p_idxname, p_comment); end
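A hedged example of adding a freshly created partition whose parent table is already replicated; schema, table name, and comment are placeholders, and NULL for the index name means the primary key is used:
select "_prod_replica_set".replicate_partition(1002, 'public', 'sales_2009_06', NULL, 'June 2009 sales partition');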
sequenceLastValue(p_seqname) Utility function used in sl_seqlastvalue view to compactly get the last value from the requested sequence.
declare p_seqname alias for $1; v_seq_row record; begin for v_seq_row in execute 'select last_value from ' || "_prod_replica_set".slon_quote_input(p_seqname) loop return v_seq_row.last_value; end loop; -- not reached end;
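For instance (the sequence name is hypothetical), the helper can be called directly to read a sequence's current last_value:
SELECT "_prod_replica_set".sequenceLastValue('public.order_id_seq');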
sequenceSetValue (seq_id, seq_origin, ev_seqno, last_value) Set sequence seq_id to have new value last_value.
declare p_seq_id alias for $1; p_seq_origin alias for $2; p_ev_seqno alias for $3; p_last_value alias for $4; v_fqname text; begin -- ---- -- Get the sequences fully qualified name -- ---- select "_prod_replica_set".slon_quote_brute(PGN.nspname) || '.' || "_prod_replica_set".slon_quote_brute(PGC.relname) into v_fqname from "_prod_replica_set".sl_sequence SQ, "pg_catalog".pg_class PGC, "pg_catalog".pg_namespace PGN where SQ.seq_id = p_seq_id and SQ.seq_reloid = PGC.oid and PGC.relnamespace = PGN.oid; if not found then raise exception 'Slony-I: sequenceSetValue(): sequence % not found', p_seq_id; end if; -- ---- -- Update it to the new value -- ---- execute 'select setval(''' || v_fqname || ''', ''' || p_last_value || ''')'; insert into "_prod_replica_set".sl_seqlog (seql_seqid, seql_origin, seql_ev_seqno, seql_last_value) values (p_seq_id, p_seq_origin, p_ev_seqno, p_last_value); return p_seq_id; end;
setAddSequence (set_id, seq_id, seq_fqname, seq_comment) On the origin node for set set_id, add sequence seq_fqname to the replication set, and raise SET_ADD_SEQUENCE to cause this to replicate to subscriber nodes.
declare p_set_id alias for $1; p_seq_id alias for $2; p_fqname alias for $3; p_seq_comment alias for $4; v_set_origin int4; begin -- ---- -- Grab the central configuration lock -- ---- lock table "_prod_replica_set".sl_config_lock; -- ---- -- Check that we are the origin of the set -- ---- select set_origin into v_set_origin from "_prod_replica_set".sl_set where set_id = p_set_id; if not found then raise exception 'Slony-I: setAddSequence(): set % not found', p_set_id; end if; if v_set_origin != "_prod_replica_set".getLocalNodeId('_prod_replica_set') then raise exception 'Slony-I: setAddSequence(): set % has remote origin - submit to origin node', p_set_id; end if; if exists (select true from "_prod_replica_set".sl_subscribe where sub_set = p_set_id) then raise exception 'Slony-I: cannot add sequence to currently subscribed set %', p_set_id; end if; -- ---- -- Add the sequence to the set and generate the SET_ADD_SEQUENCE event -- ---- perform "_prod_replica_set".setAddSequence_int(p_set_id, p_seq_id, p_fqname, p_seq_comment); return "_prod_replica_set".createEvent('_prod_replica_set', 'SET_ADD_SEQUENCE', p_set_id::text, p_seq_id::text, p_fqname::text, p_seq_comment::text); end;
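A sketch of the call on the set origin (the IDs, sequence name, and comment are invented); note that the body above rejects the call if the set already has subscribers:
SELECT "_prod_replica_set".setAddSequence(1, 20, 'public.order_id_seq', 'order number sequence');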
setAddSequence_int (set_id, seq_id, seq_fqname, seq_comment) This processes the SET_ADD_SEQUENCE event. On remote nodes that subscribe to set_id, add the sequence to the replication set.
declare p_set_id alias for $1; p_seq_id alias for $2; p_fqname alias for $3; p_seq_comment alias for $4; v_local_node_id int4; v_set_origin int4; v_sub_provider int4; v_relkind char; v_seq_reloid oid; v_seq_relname name; v_seq_nspname name; v_sync_row record; begin -- ---- -- Grab the central configuration lock -- ---- lock table "_prod_replica_set".sl_config_lock; -- ---- -- For sets with a remote origin, check that we are subscribed -- to that set. Otherwise we ignore the sequence because it might -- not even exist in our database. -- ---- v_local_node_id := "_prod_replica_set".getLocalNodeId('_prod_replica_set'); select set_origin into v_set_origin from "_prod_replica_set".sl_set where set_id = p_set_id; if not found then raise exception 'Slony-I: setAddSequence_int(): set % not found', p_set_id; end if; if v_set_origin != v_local_node_id then select sub_provider into v_sub_provider from "_prod_replica_set".sl_subscribe where sub_set = p_set_id and sub_receiver = "_prod_replica_set".getLocalNodeId('_prod_replica_set'); if not found then return 0; end if; end if; -- ---- -- Get the sequences OID and check that it is a sequence -- ---- select PGC.oid, PGC.relkind, PGC.relname, PGN.nspname into v_seq_reloid, v_relkind, v_seq_relname, v_seq_nspname from "pg_catalog".pg_class PGC, "pg_catalog".pg_namespace PGN where PGC.relnamespace = PGN.oid and "_prod_replica_set".slon_quote_input(p_fqname) = "_prod_replica_set".slon_quote_brute(PGN.nspname) || '.' || "_prod_replica_set".slon_quote_brute(PGC.relname); if not found then raise exception 'Slony-I: setAddSequence_int(): sequence % not found', p_fqname; end if; if v_relkind != 'S' then raise exception 'Slony-I: setAddSequence_int(): % is not a sequence', p_fqname; end if; select 1 into v_sync_row from "_prod_replica_set".sl_sequence where seq_id = p_seq_id; if not found then v_relkind := 'o'; -- all is OK else raise exception 'Slony-I: setAddSequence_int(): sequence ID % has already been assigned', p_seq_id; end if; -- ---- -- Add the sequence to sl_sequence -- ---- insert into "_prod_replica_set".sl_sequence (seq_id, seq_reloid, seq_relname, seq_nspname, seq_set, seq_comment) values (p_seq_id, v_seq_reloid, v_seq_relname, v_seq_nspname, p_set_id, p_seq_comment); -- ---- -- On the set origin, fake a sl_seqlog row for the last sync event -- ---- if v_set_origin = v_local_node_id then for v_sync_row in select coalesce (max(ev_seqno), 0) as ev_seqno from "_prod_replica_set".sl_event where ev_origin = v_local_node_id and ev_type = 'SYNC' loop insert into "_prod_replica_set".sl_seqlog (seql_seqid, seql_origin, seql_ev_seqno, seql_last_value) values (p_seq_id, v_local_node_id, v_sync_row.ev_seqno, "_prod_replica_set".sequenceLastValue(p_fqname)); end loop; end if; return p_seq_id; end;
setAddTable (set_id, tab_id, tab_fqname, tab_idxname, tab_comment) Add table tab_fqname to replication set on origin node, and generate SET_ADD_TABLE event to allow this to propagate to other nodes. Note that the table id, tab_id, must be unique ACROSS ALL SETS.
declare p_set_id alias for $1; p_tab_id alias for $2; p_fqname alias for $3; p_tab_idxname alias for $4; p_tab_comment alias for $5; v_set_origin int4; begin -- ---- -- Grab the central configuration lock -- ---- lock table "_prod_replica_set".sl_config_lock; -- ---- -- Check that we are the origin of the set -- ---- select set_origin into v_set_origin from "_prod_replica_set".sl_set where set_id = p_set_id; if not found then raise exception 'Slony-I: setAddTable(): set % not found', p_set_id; end if; if v_set_origin != "_prod_replica_set".getLocalNodeId('_prod_replica_set') then raise exception 'Slony-I: setAddTable(): set % has remote origin', p_set_id; end if; if exists (select true from "_prod_replica_set".sl_subscribe where sub_set = p_set_id) then raise exception 'Slony-I: cannot add table to currently subscribed set %', p_set_id; end if; -- ---- -- Add the table to the set and generate the SET_ADD_TABLE event -- ---- perform "_prod_replica_set".setAddTable_int(p_set_id, p_tab_id, p_fqname, p_tab_idxname, p_tab_comment); return "_prod_replica_set".createEvent('_prod_replica_set', 'SET_ADD_TABLE', p_set_id::text, p_tab_id::text, p_fqname::text, p_tab_idxname::text, p_tab_comment::text); end;
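A hedged example on the origin node (IDs, names, and the index name are invented); as with sequences, the set must not yet have any subscribers, and the table ID must be unique across all sets:
SELECT "_prod_replica_set".setAddTable(1, 10, 'public.orders', 'orders_pkey', 'orders table');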
setAddTable_int (set_id, tab_id, tab_fqname, tab_idxname, tab_comment) This function processes the SET_ADD_TABLE event on remote nodes, adding a table to replication if the remote node is subscribing to its replication set.
declare p_set_id alias for $1; p_tab_id alias for $2; p_fqname alias for $3; p_tab_idxname alias for $4; p_tab_comment alias for $5; v_tab_relname name; v_tab_nspname name; v_local_node_id int4; v_set_origin int4; v_sub_provider int4; v_relkind char; v_tab_reloid oid; v_pkcand_nn boolean; v_prec record; begin -- ---- -- Grab the central configuration lock -- ---- lock table "_prod_replica_set".sl_config_lock; -- ---- -- For sets with a remote origin, check that we are subscribed -- to that set. Otherwise we ignore the table because it might -- not even exist in our database. -- ---- v_local_node_id := "_prod_replica_set".getLocalNodeId('_prod_replica_set'); select set_origin into v_set_origin from "_prod_replica_set".sl_set where set_id = p_set_id; if not found then raise exception 'Slony-I: setAddTable_int(): set % not found', p_set_id; end if; if v_set_origin != v_local_node_id then select sub_provider into v_sub_provider from "_prod_replica_set".sl_subscribe where sub_set = p_set_id and sub_receiver = "_prod_replica_set".getLocalNodeId('_prod_replica_set'); if not found then return 0; end if; end if; -- ---- -- Get the tables OID and check that it is a real table -- ---- select PGC.oid, PGC.relkind, PGC.relname, PGN.nspname into v_tab_reloid, v_relkind, v_tab_relname, v_tab_nspname from "pg_catalog".pg_class PGC, "pg_catalog".pg_namespace PGN where PGC.relnamespace = PGN.oid and "_prod_replica_set".slon_quote_input(p_fqname) = "_prod_replica_set".slon_quote_brute(PGN.nspname) || '.' || "_prod_replica_set".slon_quote_brute(PGC.relname); if not found then raise exception 'Slony-I: setAddTable_int(): table % not found', p_fqname; end if; if v_relkind != 'r' then raise exception 'Slony-I: setAddTable_int(): % is not a regular table', p_fqname; end if; if not exists (select indexrelid from "pg_catalog".pg_index PGX, "pg_catalog".pg_class PGC where PGX.indrelid = v_tab_reloid and PGX.indexrelid = PGC.oid and PGC.relname = p_tab_idxname) then raise exception 'Slony-I: setAddTable_int(): table % has no index %', p_fqname, p_tab_idxname; end if; -- ---- -- Verify that the columns in the PK (or candidate) are not NULLABLE -- ---- v_pkcand_nn := 'f'; for v_prec in select attname from "pg_catalog".pg_attribute where attrelid = (select oid from "pg_catalog".pg_class where oid = v_tab_reloid) and attname in (select attname from "pg_catalog".pg_attribute where attrelid = (select oid from "pg_catalog".pg_class PGC, "pg_catalog".pg_index PGX where PGC.relname = p_tab_idxname and PGX.indexrelid=PGC.oid and PGX.indrelid = v_tab_reloid)) and attnotnull <> 't' loop raise notice 'Slony-I: setAddTable_int: table % PK column % nullable', p_fqname, v_prec.attname; v_pkcand_nn := 't'; end loop; if v_pkcand_nn then raise exception 'Slony-I: setAddTable_int: table % not replicable!', p_fqname; end if; select * into v_prec from "_prod_replica_set".sl_table where tab_id = p_tab_id; if not found then v_pkcand_nn := 't'; -- No-op -- All is well else raise exception 'Slony-I: setAddTable_int: table id % has already been assigned!', p_tab_id; end if; -- ---- -- Add the table to sl_table and create the trigger on it. -- ---- insert into "_prod_replica_set".sl_table (tab_id, tab_reloid, tab_relname, tab_nspname, tab_set, tab_idxname, tab_altered, tab_comment) values (p_tab_id, v_tab_reloid, v_tab_relname, v_tab_nspname, p_set_id, p_tab_idxname, false, p_tab_comment); perform "_prod_replica_set".alterTableForReplication(p_tab_id); return p_tab_id; end;
setDropSequence (seq_id) On the origin node for the set, drop sequence seq_id from replication set, and raise SET_DROP_SEQUENCE to cause this to replicate to subscriber nodes.
declare p_seq_id alias for $1; v_set_id int4; v_set_origin int4; begin -- ---- -- Grab the central configuration lock -- ---- lock table "_prod_replica_set".sl_config_lock; -- ---- -- Determine set id for this sequence -- ---- select seq_set into v_set_id from "_prod_replica_set".sl_sequence where seq_id = p_seq_id; -- ---- -- Ensure sequence exists -- ---- if not found then raise exception 'Slony-I: setDropSequence_int(): sequence % not found', p_seq_id; end if; -- ---- -- Check that we are the origin of the set -- ---- select set_origin into v_set_origin from "_prod_replica_set".sl_set where set_id = v_set_id; if not found then raise exception 'Slony-I: setDropSequence(): set % not found', v_set_id; end if; if v_set_origin != "_prod_replica_set".getLocalNodeId('_prod_replica_set') then raise exception 'Slony-I: setDropSequence(): set % has origin at another node - submit this to that node', v_set_id; end if; -- ---- -- Add the sequence to the set and generate the SET_ADD_SEQUENCE event -- ---- perform "_prod_replica_set".setDropSequence_int(p_seq_id); return "_prod_replica_set".createEvent('_prod_replica_set', 'SET_DROP_SEQUENCE', p_seq_id::text); end;
setDropSequence_int (seq_id) This processes the SET_DROP_SEQUENCE event. On remote nodes that subscribe to the set containing sequence seq_id, drop the sequence from the replication set.
declare p_seq_id alias for $1; v_set_id int4; v_local_node_id int4; v_set_origin int4; v_sub_provider int4; v_relkind char; v_sync_row record; begin -- ---- -- Grab the central configuration lock -- ---- lock table "_prod_replica_set".sl_config_lock; -- ---- -- Determine set id for this sequence -- ---- select seq_set into v_set_id from "_prod_replica_set".sl_sequence where seq_id = p_seq_id; -- ---- -- Ensure sequence exists -- ---- if not found then return 0; end if; -- ---- -- For sets with a remote origin, check that we are subscribed -- to that set. Otherwise we ignore the sequence because it might -- not even exist in our database. -- ---- v_local_node_id := "_prod_replica_set".getLocalNodeId('_prod_replica_set'); select set_origin into v_set_origin from "_prod_replica_set".sl_set where set_id = v_set_id; if not found then raise exception 'Slony-I: setDropSequence_int(): set % not found', v_set_id; end if; if v_set_origin != v_local_node_id then select sub_provider into v_sub_provider from "_prod_replica_set".sl_subscribe where sub_set = v_set_id and sub_receiver = "_prod_replica_set".getLocalNodeId('_prod_replica_set'); if not found then return 0; end if; end if; -- ---- -- drop the sequence from sl_sequence, sl_seqlog -- ---- delete from "_prod_replica_set".sl_seqlog where seql_seqid = p_seq_id; delete from "_prod_replica_set".sl_sequence where seq_id = p_seq_id; return p_seq_id; end;
setDropTable (tab_id) Drop table tab_id from set on origin node, and generate SET_DROP_TABLE event to allow this to propagate to other nodes.
declare p_tab_id alias for $1; v_set_id int4; v_set_origin int4; begin -- ---- -- Grab the central configuration lock -- ---- lock table "_prod_replica_set".sl_config_lock; -- ---- -- Determine the set_id -- ---- select tab_set into v_set_id from "_prod_replica_set".sl_table where tab_id = p_tab_id; -- ---- -- Ensure table exists -- ---- if not found then raise exception 'Slony-I: setDropTable_int(): table % not found', p_tab_id; end if; -- ---- -- Check that we are the origin of the set -- ---- select set_origin into v_set_origin from "_prod_replica_set".sl_set where set_id = v_set_id; if not found then raise exception 'Slony-I: setDropTable(): set % not found', v_set_id; end if; if v_set_origin != "_prod_replica_set".getLocalNodeId('_prod_replica_set') then raise exception 'Slony-I: setDropTable(): set % has remote origin', v_set_id; end if; -- ---- -- Drop the table from the set and generate the SET_ADD_TABLE event -- ---- perform "_prod_replica_set".setDropTable_int(p_tab_id); return "_prod_replica_set".createEvent('_prod_replica_set', 'SET_DROP_TABLE', p_tab_id::text); end;
setDropTable_int (tab_id) This function processes the SET_DROP_TABLE event on remote nodes, dropping a table from replication if the remote node is subscribing to its replication set.
declare p_tab_id alias for $1; v_set_id int4; v_local_node_id int4; v_set_origin int4; v_sub_provider int4; v_tab_reloid oid; begin -- ---- -- Grab the central configuration lock -- ---- lock table "_prod_replica_set".sl_config_lock; -- ---- -- Determine the set_id -- ---- select tab_set into v_set_id from "_prod_replica_set".sl_table where tab_id = p_tab_id; -- ---- -- Ensure table exists -- ---- if not found then return 0; end if; -- ---- -- For sets with a remote origin, check that we are subscribed -- to that set. Otherwise we ignore the table because it might -- not even exist in our database. -- ---- v_local_node_id := "_prod_replica_set".getLocalNodeId('_prod_replica_set'); select set_origin into v_set_origin from "_prod_replica_set".sl_set where set_id = v_set_id; if not found then raise exception 'Slony-I: setDropTable_int(): set % not found', v_set_id; end if; if v_set_origin != v_local_node_id then select sub_provider into v_sub_provider from "_prod_replica_set".sl_subscribe where sub_set = v_set_id and sub_receiver = "_prod_replica_set".getLocalNodeId('_prod_replica_set'); if not found then return 0; end if; end if; -- ---- -- Drop the table from sl_table and drop trigger from it. -- ---- perform "_prod_replica_set".alterTableRestore(p_tab_id); perform "_prod_replica_set".tableDropKey(p_tab_id); delete from "_prod_replica_set".sl_table where tab_id = p_tab_id; return p_tab_id; end;
setMoveSequence(p_seq_id, p_new_set_id) - Generates the SET_MOVE_SEQUENCE event after validating that both sets exist, are distinct, and have exactly the same subscription lists.
declare p_seq_id alias for $1; p_new_set_id alias for $2; v_old_set_id int4; v_origin int4; begin -- ---- -- Grab the central configuration lock -- ---- lock table "_prod_replica_set".sl_config_lock; -- ---- -- Get the sequences current set -- ---- select seq_set into v_old_set_id from "_prod_replica_set".sl_sequence where seq_id = p_seq_id; if not found then raise exception 'Slony-I: setMoveSequence(): sequence %d not found', p_seq_id; end if; -- ---- -- Check that both sets exist and originate here -- ---- if p_new_set_id = v_old_set_id then raise exception 'Slony-I: setMoveSequence(): set ids cannot be identical'; end if; select set_origin into v_origin from "_prod_replica_set".sl_set where set_id = p_new_set_id; if not found then raise exception 'Slony-I: setMoveSequence(): set % not found', p_new_set_id; end if; if v_origin != "_prod_replica_set".getLocalNodeId('_prod_replica_set') then raise exception 'Slony-I: setMoveSequence(): set % does not originate on local node', p_new_set_id; end if; select set_origin into v_origin from "_prod_replica_set".sl_set where set_id = v_old_set_id; if not found then raise exception 'Slony-I: set % not found', v_old_set_id; end if; if v_origin != "_prod_replica_set".getLocalNodeId('_prod_replica_set') then raise exception 'Slony-I: set % does not originate on local node', v_old_set_id; end if; -- ---- -- Check that both sets are subscribed by the same set of nodes -- ---- if exists (select true from "_prod_replica_set".sl_subscribe SUB1 where SUB1.sub_set = p_new_set_id and SUB1.sub_receiver not in (select SUB2.sub_receiver from "_prod_replica_set".sl_subscribe SUB2 where SUB2.sub_set = v_old_set_id)) then raise exception 'Slony-I: subscriber lists of set % and % are different', p_new_set_id, v_old_set_id; end if; if exists (select true from "_prod_replica_set".sl_subscribe SUB1 where SUB1.sub_set = v_old_set_id and SUB1.sub_receiver not in (select SUB2.sub_receiver from "_prod_replica_set".sl_subscribe SUB2 where SUB2.sub_set = p_new_set_id)) then raise exception 'Slony-I: subscriber lists of set % and % are different', v_old_set_id, p_new_set_id; end if; -- ---- -- Change the set the sequence belongs to -- ---- perform "_prod_replica_set".setMoveSequence_int(p_seq_id, p_new_set_id); return "_prod_replica_set".createEvent('_prod_replica_set', 'SET_MOVE_SEQUENCE', p_seq_id::text, p_new_set_id::text); end;
setMoveSequence_int(p_seq_id, p_new_set_id) - processes the SET_MOVE_SEQUENCE event, moving a sequence to another replication set.
declare p_seq_id alias for $1; p_new_set_id alias for $2; begin -- ---- -- Grab the central configuration lock -- ---- lock table "_prod_replica_set".sl_config_lock; -- ---- -- Move the sequence to the new set -- ---- update "_prod_replica_set".sl_sequence set seq_set = p_new_set_id where seq_id = p_seq_id; return p_seq_id; end;
setMoveTable(p_tab_id, p_new_set_id) - Generates the SET_MOVE_TABLE event after validating that both sets exist, originate on the local node, are distinct, and have identical subscription lists; setMoveTable_int (second body below) then moves the table to the destination set.
declare p_tab_id alias for $1; p_new_set_id alias for $2; v_old_set_id int4; v_origin int4; begin -- ---- -- Grab the central configuration lock -- ---- lock table "_prod_replica_set".sl_config_lock; -- ---- -- Get the tables current set -- ---- select tab_set into v_old_set_id from "_prod_replica_set".sl_table where tab_id = p_tab_id; if not found then raise exception 'Slony-I: table %d not found', p_tab_id; end if; -- ---- -- Check that both sets exist and originate here -- ---- if p_new_set_id = v_old_set_id then raise exception 'Slony-I: set ids cannot be identical'; end if; select set_origin into v_origin from "_prod_replica_set".sl_set where set_id = p_new_set_id; if not found then raise exception 'Slony-I: set % not found', p_new_set_id; end if; if v_origin != "_prod_replica_set".getLocalNodeId('_prod_replica_set') then raise exception 'Slony-I: set % does not originate on local node', p_new_set_id; end if; select set_origin into v_origin from "_prod_replica_set".sl_set where set_id = v_old_set_id; if not found then raise exception 'Slony-I: set % not found', v_old_set_id; end if; if v_origin != "_prod_replica_set".getLocalNodeId('_prod_replica_set') then raise exception 'Slony-I: set % does not originate on local node', v_old_set_id; end if; -- ---- -- Check that both sets are subscribed by the same set of nodes -- ---- if exists (select true from "_prod_replica_set".sl_subscribe SUB1 where SUB1.sub_set = p_new_set_id and SUB1.sub_receiver not in (select SUB2.sub_receiver from "_prod_replica_set".sl_subscribe SUB2 where SUB2.sub_set = v_old_set_id)) then raise exception 'Slony-I: subscriber lists of set % and % are different', p_new_set_id, v_old_set_id; end if; if exists (select true from "_prod_replica_set".sl_subscribe SUB1 where SUB1.sub_set = v_old_set_id and SUB1.sub_receiver not in (select SUB2.sub_receiver from "_prod_replica_set".sl_subscribe SUB2 where SUB2.sub_set = p_new_set_id)) then raise exception 'Slony-I: subscriber lists of set % and % are different', v_old_set_id, p_new_set_id; end if; -- ---- -- Change the set the table belongs to -- ---- perform "_prod_replica_set".createEvent('_prod_replica_set', 'SYNC', NULL); perform "_prod_replica_set".setMoveTable_int(p_tab_id, p_new_set_id); return "_prod_replica_set".createEvent('_prod_replica_set', 'SET_MOVE_TABLE', p_tab_id::text, p_new_set_id::text); end;
declare p_tab_id alias for $1; p_new_set_id alias for $2; begin -- ---- -- Grab the central configuration lock -- ---- lock table "_prod_replica_set".sl_config_lock; -- ---- -- Move the table to the new set -- ---- update "_prod_replica_set".sl_table set tab_set = p_new_set_id where tab_id = p_tab_id; return p_tab_id; end;
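To illustrate (table and set IDs are hypothetical): on the node where both sets originate, moving replicated table 10 into set 2 is a single call, provided both sets have identical subscriber lists; subscribers apply the change when the SET_MOVE_TABLE event reaches them via setMoveTable_int.
SELECT "_prod_replica_set".setMoveTable(10, 2);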
setSessionRole(username, role) - set the role for the session. The role can be "normal" or "slon"; setting the latter is necessary on subscriber nodes in order to override the denyaccess() trigger attached to replicated tables.
_Slony_I_setSessionRole
Brutally quote the given text
declare p_tab_fqname alias for $1; v_fqname text default ''; begin v_fqname := '"' || replace(p_tab_fqname,'"','""') || '"'; return v_fqname; end;
Quote each part of a possibly schema-qualified identifier that is not already quoted.
declare p_tab_fqname alias for $1; v_nsp_name text; v_tab_name text; v_i integer; v_l integer; v_pq2 integer; begin v_l := length(p_tab_fqname); -- Let us search for the dot if p_tab_fqname like '"%' then -- if the first part of the ident starts with a double quote, search -- for the closing double quote, skipping over double double quotes. v_i := 2; while v_i <= v_l loop if substr(p_tab_fqname, v_i, 1) != '"' then v_i := v_i + 1; else v_i := v_i + 1; if substr(p_tab_fqname, v_i, 1) != '"' then exit; end if; v_i := v_i + 1; end if; end loop; else -- first part of ident is not quoted, search for the dot directly v_i := 1; while v_i <= v_l loop if substr(p_tab_fqname, v_i, 1) = '.' then exit; end if; v_i := v_i + 1; end loop; end if; -- v_i now points at the dot or behind the string. if substr(p_tab_fqname, v_i, 1) = '.' then -- There is a dot now, so split the ident into its namespace -- and objname parts and make sure each is quoted v_nsp_name := substr(p_tab_fqname, 1, v_i - 1); v_tab_name := substr(p_tab_fqname, v_i + 1); if v_nsp_name not like '"%' then v_nsp_name := '"' || replace(v_nsp_name, '"', '""') || '"'; end if; if v_tab_name not like '"%' then v_tab_name := '"' || replace(v_tab_name, '"', '""') || '"'; end if; return v_nsp_name || '.' || v_tab_name; else -- No dot ... must be just an ident without schema if p_tab_fqname like '"%' then return p_tab_fqname; else return '"' || replace(p_tab_fqname, '"', '""') || '"'; end if; end if; end;
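To illustrate the two quoting helpers (the input strings are arbitrary examples): slon_quote_brute always wraps and escapes its argument, while slon_quote_input quotes only those parts of a possibly schema-qualified name that are not quoted already.
SELECT "_prod_replica_set".slon_quote_brute('My "odd" name');      -- "My ""odd"" name"
SELECT "_prod_replica_set".slon_quote_input('public.Order Items'); -- "public"."Order Items"
SELECT "_prod_replica_set".slon_quote_input('"already".quoted');   -- "already"."quoted"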
Returns the version number of the slony schema
begin return '' || "_prod_replica_set".slonyVersionMajor() || '.' || "_prod_replica_set".slonyVersionMinor() || '.' || "_prod_replica_set".slonyVersionPatchlevel(); end;
Returns the major version number of the slony schema
begin return 1; end;
Returns the minor version number of the slony schema
begin return 2; end;
Returns the version patch level of the slony schema
begin return 12; end;
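Putting the four version functions above together, the assembled string for this schema is:
SELECT "_prod_replica_set".slonyVersion();   -- '1.2.12'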
FUNCTION storeListen (li_origin, li_provider, li_receiver) generate STORE_LISTEN event, indicating that receiver node li_receiver listens to node li_provider in order to get messages coming from node li_origin.
declare p_origin alias for $1; p_provider alias for $2; p_receiver alias for $3; begin perform "_prod_replica_set".storeListen_int (p_origin, p_provider, p_receiver); return "_prod_replica_set".createEvent ('_prod_replica_set', 'STORE_LISTEN', p_origin::text, p_provider::text, p_receiver::text); end;
FUNCTION storeListen_int (li_origin, li_provider, li_receiver) Process STORE_LISTEN event, indicating that receiver node li_receiver listens to node li_provider in order to get messages coming from node li_origin.
declare p_li_origin alias for $1; p_li_provider alias for $2; p_li_receiver alias for $3; v_exists int4; begin -- ---- -- Grab the central configuration lock -- ---- lock table "_prod_replica_set".sl_config_lock; select 1 into v_exists from "_prod_replica_set".sl_listen where li_origin = p_li_origin and li_provider = p_li_provider and li_receiver = p_li_receiver; if not found then -- ---- -- In case we receive STORE_LISTEN events before we know -- about the nodes involved in this, we generate those nodes -- as pending. -- ---- if not exists (select 1 from "_prod_replica_set".sl_node where no_id = p_li_origin) then perform "_prod_replica_set".storeNode_int (p_li_origin, '<event pending>', 'f'); end if; if not exists (select 1 from "_prod_replica_set".sl_node where no_id = p_li_provider) then perform "_prod_replica_set".storeNode_int (p_li_provider, '<event pending>', 'f'); end if; if not exists (select 1 from "_prod_replica_set".sl_node where no_id = p_li_receiver) then perform "_prod_replica_set".storeNode_int (p_li_receiver, '<event pending>', 'f'); end if; insert into "_prod_replica_set".sl_listen (li_origin, li_provider, li_receiver) values (p_li_origin, p_li_provider, p_li_receiver); end if; return 0; end;
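For example (node IDs are hypothetical), to have receiver node 3 pull events that originate on node 1 through provider node 2:
SELECT "_prod_replica_set".storeListen(1, 2, 3);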
no_id - Node ID #; no_comment - Human-oriented comment; no_spool - Flag for virtual spool nodes. Generate the STORE_NODE event for node no_id.
declare p_no_id alias for $1; p_no_comment alias for $2; p_no_spool alias for $3; v_no_spool_txt text; begin if p_no_spool then v_no_spool_txt = 't'; else v_no_spool_txt = 'f'; end if; perform "_prod_replica_set".storeNode_int (p_no_id, p_no_comment, p_no_spool); return "_prod_replica_set".createEvent('_prod_replica_set', 'STORE_NODE', p_no_id::text, p_no_comment::text, v_no_spool_txt::text); end;
no_id - Node ID #; no_comment - Human-oriented comment; no_spool - Flag for virtual spool nodes. Internal function to process the STORE_NODE event for node no_id.
declare p_no_id alias for $1; p_no_comment alias for $2; p_no_spool alias for $3; v_old_row record; begin -- ---- -- Grab the central configuration lock -- ---- lock table "_prod_replica_set".sl_config_lock; -- ---- -- Check if the node exists -- ---- select * into v_old_row from "_prod_replica_set".sl_node where no_id = p_no_id for update; if found then -- ---- -- Node exists, update the existing row. -- ---- update "_prod_replica_set".sl_node set no_comment = p_no_comment, no_spool = p_no_spool where no_id = p_no_id; else -- ---- -- New node, insert the sl_node row -- ---- insert into "_prod_replica_set".sl_node (no_id, no_active, no_comment, no_spool) values (p_no_id, 'f', p_no_comment, p_no_spool); end if; return p_no_id; end;
FUNCTION storePath (pa_server, pa_client, pa_conninfo, pa_connretry) Generate the STORE_PATH event indicating that node pa_client can access node pa_server using DSN pa_conninfo
declare p_pa_server alias for $1; p_pa_client alias for $2; p_pa_conninfo alias for $3; p_pa_connretry alias for $4; begin perform "_prod_replica_set".storePath_int(p_pa_server, p_pa_client, p_pa_conninfo, p_pa_connretry); return "_prod_replica_set".createEvent('_prod_replica_set', 'STORE_PATH', p_pa_server::text, p_pa_client::text, p_pa_conninfo::text, p_pa_connretry::text); end;
FUNCTION storePath_int (pa_server, pa_client, pa_conninfo, pa_connretry) Process the STORE_PATH event indicating that node pa_client can access node pa_server using DSN pa_conninfo
declare p_pa_server alias for $1; p_pa_client alias for $2; p_pa_conninfo alias for $3; p_pa_connretry alias for $4; v_dummy int4; begin -- ---- -- Grab the central configuration lock -- ---- lock table "_prod_replica_set".sl_config_lock; -- ---- -- Check if the path already exists -- ---- select 1 into v_dummy from "_prod_replica_set".sl_path where pa_server = p_pa_server and pa_client = p_pa_client for update; if found then -- ---- -- Path exists, update pa_conninfo -- ---- update "_prod_replica_set".sl_path set pa_conninfo = p_pa_conninfo, pa_connretry = p_pa_connretry where pa_server = p_pa_server and pa_client = p_pa_client; else -- ---- -- New path -- -- In case we receive STORE_PATH events before we know -- about the nodes involved in this, we generate those nodes -- as pending. -- ---- if not exists (select 1 from "_prod_replica_set".sl_node where no_id = p_pa_server) then perform "_prod_replica_set".storeNode_int (p_pa_server, '<event pending>', 'f'); end if; if not exists (select 1 from "_prod_replica_set".sl_node where no_id = p_pa_client) then perform "_prod_replica_set".storeNode_int (p_pa_client, '<event pending>', 'f'); end if; insert into "_prod_replica_set".sl_path (pa_server, pa_client, pa_conninfo, pa_connretry) values (p_pa_server, p_pa_client, p_pa_conninfo, p_pa_connretry); end if; -- Rewrite sl_listen table perform "_prod_replica_set".RebuildListenEntries(); return 0; end;
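A sketch of registering a path (node IDs and the DSN are invented): the conninfo is the libpq connection string node pa_client uses to reach node pa_server, and the final argument is the connection retry interval.
SELECT "_prod_replica_set".storePath(1, 2, 'host=db1 dbname=prod user=slony', 10);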
Generate STORE_SET event for set set_id with human readable comment set_comment
declare p_set_id alias for $1; p_set_comment alias for $2; v_local_node_id int4; begin -- ---- -- Grab the central configuration lock -- ---- lock table "_prod_replica_set".sl_config_lock; v_local_node_id := "_prod_replica_set".getLocalNodeId('_prod_replica_set'); insert into "_prod_replica_set".sl_set (set_id, set_origin, set_comment) values (p_set_id, v_local_node_id, p_set_comment); return "_prod_replica_set".createEvent('_prod_replica_set', 'STORE_SET', p_set_id::text, v_local_node_id::text, p_set_comment::text); end;
storeSet_int (set_id, set_origin, set_comment) Process the STORE_SET event, indicating the new set with given ID, origin node, and human readable comment.
declare p_set_id alias for $1; p_set_origin alias for $2; p_set_comment alias for $3; v_dummy int4; begin -- ---- -- Grab the central configuration lock -- ---- lock table "_prod_replica_set".sl_config_lock; select 1 into v_dummy from "_prod_replica_set".sl_set where set_id = p_set_id for update; if found then update "_prod_replica_set".sl_set set set_comment = p_set_comment where set_id = p_set_id; else if not exists (select 1 from "_prod_replica_set".sl_node where no_id = p_set_origin) then perform "_prod_replica_set".storeNode_int (p_set_origin, '<event pending>', 'f'); end if; insert into "_prod_replica_set".sl_set (set_id, set_origin, set_comment) values (p_set_id, p_set_origin, p_set_comment); end if; -- Run addPartialLogIndices() to try to add indices to unused sl_log_? table perform "_prod_replica_set".addPartialLogIndices(); return p_set_id; end;
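On the node that is to become the set's origin, creating a set is one call (the ID and comment are made up):
SELECT "_prod_replica_set".storeSet(1, 'production application tables');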
storeTrigger (trig_tabid, trig_tgname) Submits STORE_TRIGGER event to indicate that trigger trig_tgname on replicated table trig_tabid will NOT be disabled.
declare p_trig_tabid alias for $1; p_trig_tgname alias for $2; begin perform "_prod_replica_set".storeTrigger_int(p_trig_tabid, p_trig_tgname); return "_prod_replica_set".createEvent('_prod_replica_set', 'STORE_TRIGGER', p_trig_tabid::text, p_trig_tgname::text); end;
storeTrigger_int (trig_tabid, trig_tgname) Processes STORE_TRIGGER event to make sure that trigger trig_tgname on replicated table trig_tabid is NOT disabled.
declare p_trig_tabid alias for $1; p_trig_tgname alias for $2; v_tab_altered boolean; begin -- ---- -- Grab the central configuration lock -- ---- lock table "_prod_replica_set".sl_config_lock; -- ---- -- Get the current table status (altered or not) -- ---- select tab_altered into v_tab_altered from "_prod_replica_set".sl_table where tab_id = p_trig_tabid; if not found then -- ---- -- Not found is no hard error here, because that might -- mean that we are not subscribed to that set -- ---- return 0; end if; -- ---- -- If the table is modified for replication, restore the original state -- ---- if v_tab_altered then perform "_prod_replica_set".alterTableRestore(p_trig_tabid); end if; -- ---- -- Make sure that an entry for this trigger exists -- ---- delete from "_prod_replica_set".sl_trigger where trig_tabid = p_trig_tabid and trig_tgname = p_trig_tgname; insert into "_prod_replica_set".sl_trigger ( trig_tabid, trig_tgname ) values ( p_trig_tabid, p_trig_tgname ); -- ---- -- Put the table back into replicated state if it was -- ---- if v_tab_altered then perform "_prod_replica_set".alterTableForReplication(p_trig_tabid); end if; return p_trig_tabid; end;
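For instance (the table ID and trigger name are hypothetical), to keep a user trigger firing on a replicated table rather than having Slony-I disable it:
SELECT "_prod_replica_set".storeTrigger(10, 'orders_audit_trg');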
subscribeSet (sub_set, sub_provider, sub_receiver, sub_forward) Makes sure that the receiver is not the provider, then stores the subscription, and publishes the SUBSCRIBE_SET event to other nodes.
declare p_sub_set alias for $1; p_sub_provider alias for $2; p_sub_receiver alias for $3; p_sub_forward alias for $4; v_set_origin int4; v_ev_seqno int8; v_rec record; begin -- ---- -- Grab the central configuration lock -- ---- lock table "_prod_replica_set".sl_config_lock; -- ---- -- Check that this is called on the provider node -- ---- if p_sub_provider != "_prod_replica_set".getLocalNodeId('_prod_replica_set') then raise exception 'Slony-I: subscribeSet() must be called on provider'; end if; -- ---- -- Check that the origin and provider of the set are remote -- ---- select set_origin into v_set_origin from "_prod_replica_set".sl_set where set_id = p_sub_set; if not found then raise exception 'Slony-I: subscribeSet(): set % not found', p_sub_set; end if; if v_set_origin = p_sub_receiver then raise exception 'Slony-I: subscribeSet(): set origin and receiver cannot be identical'; end if; if p_sub_receiver = p_sub_provider then raise exception 'Slony-I: subscribeSet(): set provider and receiver cannot be identical'; end if; -- --- -- Verify that the provider is either the origin or an active subscriber -- Bug report #1362 -- --- if v_set_origin <> p_sub_provider then if not exists (select 1 from "_prod_replica_set".sl_subscribe where sub_set = p_sub_set and sub_receiver = p_sub_provider and sub_forward and sub_active) then raise exception 'Slony-I: subscribeSet(): provider % is not an active forwarding node for replication set %', p_sub_provider, p_sub_set; end if; end if; -- ---- -- Create the SUBSCRIBE_SET event -- ---- v_ev_seqno := "_prod_replica_set".createEvent('_prod_replica_set', 'SUBSCRIBE_SET', p_sub_set::text, p_sub_provider::text, p_sub_receiver::text, case p_sub_forward when true then 't' else 'f' end); -- ---- -- Call the internal procedure to store the subscription -- ---- perform "_prod_replica_set".subscribeSet_int(p_sub_set, p_sub_provider, p_sub_receiver, p_sub_forward); return v_ev_seqno; end;
subscribeSet_int (sub_set, sub_provider, sub_receiver, sub_forward) Internal actions for subscribing receiver sub_receiver to subscription set sub_set.
declare p_sub_set alias for $1; p_sub_provider alias for $2; p_sub_receiver alias for $3; p_sub_forward alias for $4; v_set_origin int4; v_sub_row record; begin -- ---- -- Grab the central configuration lock -- ---- lock table "_prod_replica_set".sl_config_lock; -- ---- -- Provider change is only allowed for active sets -- ---- if p_sub_receiver = "_prod_replica_set".getLocalNodeId('_prod_replica_set') then select sub_active into v_sub_row from "_prod_replica_set".sl_subscribe where sub_set = p_sub_set and sub_receiver = p_sub_receiver; if found then if not v_sub_row.sub_active then raise exception 'Slony-I: subscribeSet_int(): set % is not active, cannot change provider', p_sub_set; end if; end if; end if; -- ---- -- Try to change provider and/or forward for an existing subscription -- ---- update "_prod_replica_set".sl_subscribe set sub_provider = p_sub_provider, sub_forward = p_sub_forward where sub_set = p_sub_set and sub_receiver = p_sub_receiver; if found then -- ---- -- Rewrite sl_listen table -- ---- perform "_prod_replica_set".RebuildListenEntries(); return p_sub_set; end if; -- ---- -- Not found, insert a new one -- ---- if not exists (select true from "_prod_replica_set".sl_path where pa_server = p_sub_provider and pa_client = p_sub_receiver) then insert into "_prod_replica_set".sl_path (pa_server, pa_client, pa_conninfo, pa_connretry) values (p_sub_provider, p_sub_receiver, '<event pending>', 10); end if; insert into "_prod_replica_set".sl_subscribe (sub_set, sub_provider, sub_receiver, sub_forward, sub_active) values (p_sub_set, p_sub_provider, p_sub_receiver, p_sub_forward, false); -- ---- -- If the set origin is here, then enable the subscription -- ---- select set_origin into v_set_origin from "_prod_replica_set".sl_set where set_id = p_sub_set; if not found then raise exception 'Slony-I: subscribeSet_int(): set % not found', p_sub_set; end if; if v_set_origin = "_prod_replica_set".getLocalNodeId('_prod_replica_set') then perform "_prod_replica_set".createEvent('_prod_replica_set', 'ENABLE_SUBSCRIPTION', p_sub_set::text, p_sub_provider::text, p_sub_receiver::text, case p_sub_forward when true then 't' else 'f' end); perform "_prod_replica_set".enableSubscription(p_sub_set, p_sub_provider, p_sub_receiver); end if; -- ---- -- Rewrite sl_listen table -- ---- perform "_prod_replica_set".RebuildListenEntries(); return p_sub_set; end;
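A hedged example (set and node IDs are invented): issued on the provider node per the check above, this subscribes receiver node 3 to set 1 from provider node 2, with forwarding enabled so node 3 can in turn serve other subscribers.
SELECT "_prod_replica_set".subscribeSet(1, 2, 3, true);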
tableAddKey (tab_fqname) - if the table does not have a column of the form _Slony-I_<clustername>_rowID, add one as a bigint, defaulted to nextval() for a sequence created for the cluster.
declare p_tab_fqname alias for $1; v_tab_fqname_quoted text default ''; v_attkind text default ''; v_attrow record; v_have_serial bool default 'f'; begin v_tab_fqname_quoted := "_prod_replica_set".slon_quote_input(p_tab_fqname); -- -- Loop over the attributes of this relation -- and add a "v" for every user column, and a "k" -- if we find the Slony-I special serial column. -- for v_attrow in select PGA.attnum, PGA.attname from "pg_catalog".pg_class PGC, "pg_catalog".pg_namespace PGN, "pg_catalog".pg_attribute PGA where "_prod_replica_set".slon_quote_brute(PGN.nspname) || '.' || "_prod_replica_set".slon_quote_brute(PGC.relname) = v_tab_fqname_quoted and PGN.oid = PGC.relnamespace and PGA.attrelid = PGC.oid and not PGA.attisdropped and PGA.attnum > 0 order by attnum loop if v_attrow.attname = '_Slony-I_prod_replica_set_rowID' then v_attkind := v_attkind || 'k'; v_have_serial := 't'; else v_attkind := v_attkind || 'v'; end if; end loop; -- -- A table must have at least one attribute, so not finding -- anything means the table does not exist. -- if not found then raise exception 'Slony-I: tableAddKey(): table % not found', v_tab_fqname_quoted; end if; -- -- If it does not have the special serial column, we -- have to add it. This will be only half way done. -- The function to add the table to the set must finish -- these definitions with NOT NULL and UNIQUE after -- updating all existing rows. -- if not v_have_serial then execute 'lock table ' || v_tab_fqname_quoted || ' in access exclusive mode'; execute 'alter table only ' || v_tab_fqname_quoted || ' add column "_Slony-I_prod_replica_set_rowID" bigint;'; execute 'alter table only ' || v_tab_fqname_quoted || ' alter column "_Slony-I_prod_replica_set_rowID" ' || ' set default "pg_catalog".nextval(''"_prod_replica_set".sl_rowid_seq'');'; v_attkind := v_attkind || 'k'; end if; -- -- Return the resulting Slony-I attkind -- return v_attkind; end;
tableDropKey (tab_id) If the specified table has a column "_Slony-I_<clustername>_rowID", then drop it.
declare p_tab_id alias for $1; v_tab_fqname text; v_tab_oid oid; begin -- ---- -- Grab the central configuration lock -- ---- lock table "_prod_replica_set".sl_config_lock; -- ---- -- Construct the tables fully qualified name and get its oid -- ---- select "_prod_replica_set".slon_quote_brute(PGN.nspname) || '.' || "_prod_replica_set".slon_quote_brute(PGC.relname), PGC.oid into v_tab_fqname, v_tab_oid from "_prod_replica_set".sl_table T, "pg_catalog".pg_class PGC, "pg_catalog".pg_namespace PGN where T.tab_id = p_tab_id and T.tab_reloid = PGC.oid and PGC.relnamespace = PGN.oid; if not found then raise exception 'Slony-I: tableDropKey(): table with ID % not found', p_tab_id; end if; -- ---- -- Drop the special serial ID column if the table has it -- ---- if exists (select true from "pg_catalog".pg_attribute where attrelid = v_tab_oid and attname = '_Slony-I_prod_replica_set_rowID') then execute 'lock table ' || v_tab_fqname || ' in access exclusive mode'; execute 'alter table ' || v_tab_fqname || ' drop column "_Slony-I_prod_replica_set_rowID"'; end if; return p_tab_id; end;
tableHasSerialKey (tab_fqname) Checks if a table has our special serial key column that is used if the table has no natural unique constraint.
declare p_tab_fqname alias for $1; v_tab_fqname_quoted text default ''; v_attnum int2; begin v_tab_fqname_quoted := "_prod_replica_set".slon_quote_input(p_tab_fqname); select PGA.attnum into v_attnum from "pg_catalog".pg_class PGC, "pg_catalog".pg_namespace PGN, "pg_catalog".pg_attribute PGA where "_prod_replica_set".slon_quote_brute(PGN.nspname) || '.' || "_prod_replica_set".slon_quote_brute(PGC.relname) = v_tab_fqname_quoted and PGC.relnamespace = PGN.oid and PGA.attrelid = PGC.oid and PGA.attname = '_Slony-I_prod_replica_set_rowID' and not PGA.attisdropped; return found; end;
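For example (the table name is hypothetical):
SELECT "_prod_replica_set".tableHasSerialKey('public.orders');   -- true only when the Slony-I rowID column is present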
Terminates all backends that have registered as coming from the given node.
declare p_failed_node alias for $1; v_row record; begin for v_row in select nl_nodeid, nl_conncnt, nl_backendpid from "_prod_replica_set".sl_nodelock where nl_nodeid = p_failed_node for update loop perform "_prod_replica_set".killBackend(v_row.nl_backendpid, 'TERM'); delete from "_prod_replica_set".sl_nodelock where nl_nodeid = v_row.nl_nodeid and nl_conncnt = v_row.nl_conncnt; end loop; return 0; end;
Reset the database to standalone operation by removing the replication system entirely.
declare v_tab_row record; begin -- ---- -- Grab the central configuration lock -- ---- lock table "_prod_replica_set".sl_config_lock; -- ---- -- This is us ... time for suicide! Restore all tables to -- their original status. -- ---- for v_tab_row in select * from "_prod_replica_set".sl_table loop perform "_prod_replica_set".alterTableRestore(v_tab_row.tab_id); perform "_prod_replica_set".tableDropKey(v_tab_row.tab_id); end loop; raise notice 'Slony-I: Please drop schema "_prod_replica_set"'; return 0; end;
Remove from all tables of a set the special lockedSet trigger that disables access to them.
declare p_set_id alias for $1; v_local_node_id int4; v_set_row record; v_tab_row record; begin -- ---- -- Grab the central configuration lock -- ---- lock table "_prod_replica_set".sl_config_lock; -- ---- -- Check that the set exists and that we are the origin -- and that it is not already locked. -- ---- v_local_node_id := "_prod_replica_set".getLocalNodeId('_prod_replica_set'); select * into v_set_row from "_prod_replica_set".sl_set where set_id = p_set_id for update; if not found then raise exception 'Slony-I: set % not found', p_set_id; end if; if v_set_row.set_origin <> v_local_node_id then raise exception 'Slony-I: set % does not originate on local node', p_set_id; end if; if v_set_row.set_locked isnull then raise exception 'Slony-I: set % is not locked', p_set_id; end if; -- ---- -- Drop the lockedSet trigger from all tables in the set. -- ---- for v_tab_row in select T.tab_id, "_prod_replica_set".slon_quote_brute(PGN.nspname) || '.' || "_prod_replica_set".slon_quote_brute(PGC.relname) as tab_fqname from "_prod_replica_set".sl_table T, "pg_catalog".pg_class PGC, "pg_catalog".pg_namespace PGN where T.tab_set = p_set_id and T.tab_reloid = PGC.oid and PGC.relnamespace = PGN.oid order by tab_id loop execute 'drop trigger "_prod_replica_set_lockedset_' || v_tab_row.tab_id || '" on ' || v_tab_row.tab_fqname; end loop; -- ---- -- Clear out the set_locked field -- ---- update "_prod_replica_set".sl_set set set_locked = NULL where set_id = p_set_id; return p_set_id; end;
unsubscribeSet (sub_set, sub_receiver) Unsubscribe node sub_receiver from subscription set sub_set. This is invoked on the receiver node. It verifies that this does not break any chains (e.g. where sub_receiver is a provider for another node), then restores tables, drops Slony-specific keys, drops table entries for the set, drops the subscription, and generates an UNSUBSCRIBE_SET event to publish that the subscription has been dropped.
declare p_sub_set alias for $1; p_sub_receiver alias for $2; v_tab_row record; begin -- ---- -- Grab the central configuration lock -- ---- lock table "_prod_replica_set".sl_config_lock; -- ---- -- Check that this is called on the receiver node -- ---- if p_sub_receiver != "_prod_replica_set".getLocalNodeId('_prod_replica_set') then raise exception 'Slony-I: unsubscribeSet() must be called on receiver'; end if; -- ---- -- Check that this does not break any chains -- ---- if exists (select true from "_prod_replica_set".sl_subscribe where sub_set = p_sub_set and sub_provider = p_sub_receiver) then raise exception 'Slony-I: Cannot unsubscribe set % while being provider', p_sub_set; end if; -- ---- -- Restore all tables original triggers and rules and remove -- our replication stuff. -- ---- for v_tab_row in select tab_id from "_prod_replica_set".sl_table where tab_set = p_sub_set order by tab_id loop perform "_prod_replica_set".alterTableRestore(v_tab_row.tab_id); perform "_prod_replica_set".tableDropKey(v_tab_row.tab_id); end loop; -- ---- -- Remove the setsync status. This will also cause the -- worker thread to ignore the set and stop replicating -- right now. -- ---- delete from "_prod_replica_set".sl_setsync where ssy_setid = p_sub_set; -- ---- -- Remove all sl_table and sl_sequence entries for this set. -- Should we ever subscribe again, the initial data -- copy process will create new ones. -- ---- delete from "_prod_replica_set".sl_table where tab_set = p_sub_set; delete from "_prod_replica_set".sl_sequence where seq_set = p_sub_set; -- ---- -- Call the internal procedure to drop the subscription -- ---- perform "_prod_replica_set".unsubscribeSet_int(p_sub_set, p_sub_receiver); -- Rewrite sl_listen table perform "_prod_replica_set".RebuildListenEntries(); -- ---- -- Create the UNSUBSCRIBE_SET event -- ---- return "_prod_replica_set".createEvent('_prod_replica_set', 'UNSUBSCRIBE_SET', p_sub_set::text, p_sub_receiver::text); end;
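A sketch of the call (set and node IDs are invented), run on the receiver node itself and only when no other node uses it as a provider for that set:
SELECT "_prod_replica_set".unsubscribeSet(1, 3);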
unsubscribeSet_int (sub_set, sub_receiver) All the REAL work of removing the subscriber is done before the event is generated, so this function just has to drop the references to the subscription in sl_subscribe.
declare p_sub_set alias for $1; p_sub_receiver alias for $2; begin -- ---- -- Grab the central configuration lock -- ---- lock table "_prod_replica_set".sl_config_lock; -- ---- -- All the real work is done before event generation on the -- subscriber. -- ---- delete from "_prod_replica_set".sl_subscribe where sub_set = p_sub_set and sub_receiver = p_sub_receiver; -- Rewrite sl_listen table perform "_prod_replica_set".RebuildListenEntries(); return p_sub_set; end;
updateRelname(set_id, only_on_node) Updates the relation and namespace names stored in sl_table and sl_sequence from the system catalogs, based on their stored reloids.
declare p_set_id alias for $1; p_only_on_node alias for $2; v_no_id int4; v_set_origin int4; begin -- ---- -- Grab the central configuration lock -- ---- lock table "_prod_replica_set".sl_config_lock; -- ---- -- Check that we either are the set origin or a current -- subscriber of the set. -- ---- v_no_id := "_prod_replica_set".getLocalNodeId('_prod_replica_set'); select set_origin into v_set_origin from "_prod_replica_set".sl_set where set_id = p_set_id for update; if not found then raise exception 'Slony-I: set % not found', p_set_id; end if; if v_set_origin <> v_no_id and not exists (select 1 from "_prod_replica_set".sl_subscribe where sub_set = p_set_id and sub_receiver = v_no_id) then return 0; end if; -- ---- -- If execution on only one node is requested, check that -- we are that node. -- ---- if p_only_on_node > 0 and p_only_on_node <> v_no_id then return 0; end if; update "_prod_replica_set".sl_table set tab_relname = PGC.relname, tab_nspname = PGN.nspname from pg_catalog.pg_class PGC, pg_catalog.pg_namespace PGN where "_prod_replica_set".sl_table.tab_reloid = PGC.oid and PGC.relnamespace = PGN.oid; update "_prod_replica_set".sl_sequence set seq_relname = PGC.relname, seq_nspname = PGN.nspname from pg_catalog.pg_class PGC, pg_catalog.pg_namespace PGN where "_prod_replica_set".sl_sequence.seq_reloid = PGC.oid and PGC.relnamespace = PGN.oid; return p_set_id; end;
updateReloid(set_id, only_on_node) Updates the reloids in sl_table and sl_sequence based on their stored fully qualified names.
declare p_set_id alias for $1; p_only_on_node alias for $2; v_no_id int4; v_set_origin int4; begin -- ---- -- Grab the central configuration lock -- ---- lock table "_prod_replica_set".sl_config_lock; -- ---- -- Check that we either are the set origin or a current -- subscriber of the set. -- ---- v_no_id := "_prod_replica_set".getLocalNodeId('_prod_replica_set'); select set_origin into v_set_origin from "_prod_replica_set".sl_set where set_id = p_set_id for update; if not found then raise exception 'Slony-I: set % not found', p_set_id; end if; if v_set_origin <> v_no_id and not exists (select 1 from "_prod_replica_set".sl_subscribe where sub_set = p_set_id and sub_receiver = v_no_id) then return 0; end if; -- ---- -- If execution on only one node is requested, check that -- we are that node. -- ---- if p_only_on_node > 0 and p_only_on_node <> v_no_id then return 0; end if; update "_prod_replica_set".sl_table set tab_reloid = PGC.oid from pg_catalog.pg_class PGC, pg_catalog.pg_namespace PGN where "_prod_replica_set".slon_quote_brute("_prod_replica_set".sl_table.tab_relname) = "_prod_replica_set".slon_quote_brute(PGC.relname) and PGC.relnamespace = PGN.oid and "_prod_replica_set".slon_quote_brute(PGN.nspname) = "_prod_replica_set".slon_quote_brute("_prod_replica_set".sl_table.tab_nspname); update "_prod_replica_set".sl_sequence set seq_reloid = PGC.oid from pg_catalog.pg_class PGC, pg_catalog.pg_namespace PGN where "_prod_replica_set".slon_quote_brute("_prod_replica_set".sl_sequence.seq_relname) = "_prod_replica_set".slon_quote_brute(PGC.relname) and PGC.relnamespace = PGN.oid and "_prod_replica_set".slon_quote_brute(PGN.nspname) = "_prod_replica_set".slon_quote_brute("_prod_replica_set".sl_sequence.seq_nspname); return "_prod_replica_set".createEvent('_prod_replica_set', 'RESET_CONFIG', p_set_id::text, p_only_on_node::text); end;
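Both maintenance functions above take the set ID and an only_on_node restriction; per the check in their bodies, passing 0 (or any value below 1) skips the single-node restriction. For example:
SELECT "_prod_replica_set".updateRelname(1, 0);
SELECT "_prod_replica_set".updateReloid(1, 0);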
Called during "update functions" by slonik to perform schema changes
declare p_old alias for $1; begin -- upgrade sl_table if p_old IN ('1.0.2', '1.0.5', '1.0.6') then -- Add new column(s) sl_table.tab_relname, sl_table.tab_nspname execute 'alter table "_prod_replica_set".sl_table add column tab_relname name'; execute 'alter table "_prod_replica_set".sl_table add column tab_nspname name'; -- populate the colums with data update "_prod_replica_set".sl_table set tab_relname = PGC.relname, tab_nspname = PGN.nspname from pg_catalog.pg_class PGC, pg_catalog.pg_namespace PGN where "_prod_replica_set".sl_table.tab_reloid = PGC.oid and PGC.relnamespace = PGN.oid; -- constrain the colums execute 'alter table "_prod_replica_set".sl_table alter column tab_relname set NOT NULL'; execute 'alter table "_prod_replica_set".sl_table alter column tab_nspname set NOT NULL'; end if; -- upgrade sl_sequence if p_old IN ('1.0.2', '1.0.5', '1.0.6') then -- Add new column(s) sl_sequence.seq_relname, sl_sequence.seq_nspname execute 'alter table "_prod_replica_set".sl_sequence add column seq_relname name'; execute 'alter table "_prod_replica_set".sl_sequence add column seq_nspname name'; -- populate the columns with data update "_prod_replica_set".sl_sequence set seq_relname = PGC.relname, seq_nspname = PGN.nspname from pg_catalog.pg_class PGC, pg_catalog.pg_namespace PGN where "_prod_replica_set".sl_sequence.seq_reloid = PGC.oid and PGC.relnamespace = PGN.oid; -- constrain the data execute 'alter table "_prod_replica_set".sl_sequence alter column seq_relname set NOT NULL'; execute 'alter table "_prod_replica_set".sl_sequence alter column seq_nspname set NOT NULL'; end if; -- ---- -- Changes from 1.0.x to 1.1.0 -- ---- if p_old IN ('1.0.2', '1.0.5', '1.0.6') then -- Add new column sl_node.no_spool for virtual spool nodes execute 'alter table "_prod_replica_set".sl_node add column no_spool boolean'; update "_prod_replica_set".sl_node set no_spool = false; end if; -- ---- -- Changes for 1.1.3 -- ---- if p_old IN ('1.0.2', '1.0.5', '1.0.6', '1.1.0', '1.1.1', '1.1.2') then -- Add new table sl_nodelock execute 'create table "_prod_replica_set".sl_nodelock ( nl_nodeid int4, nl_conncnt serial, nl_backendpid int4, CONSTRAINT "sl_nodelock-pkey" PRIMARY KEY (nl_nodeid, nl_conncnt) )'; -- Drop obsolete functions execute 'drop function "_prod_replica_set".terminateNodeConnections(name)'; execute 'drop function "_prod_replica_set".cleanupListener()'; execute 'drop function "_prod_replica_set".truncateTable(text)'; end if; -- ---- -- Changes for 1.2 -- ---- if p_old IN ('1.0.2', '1.0.5', '1.0.6', '1.1.0', '1.1.1', '1.1.2', '1.1.3','1.1.5', '1.1.6', '1.1.7', '1.1.8', '1.1.9') then -- Add new table sl_registry execute 'create table "_prod_replica_set".sl_registry ( reg_key text primary key, reg_int4 int4, reg_text text, reg_timestamp timestamp ) without oids'; execute 'alter table "_prod_replica_set".sl_config_lock set without oids;'; execute 'alter table "_prod_replica_set".sl_confirm set without oids;'; execute 'alter table "_prod_replica_set".sl_event set without oids;'; execute 'alter table "_prod_replica_set".sl_listen set without oids;'; execute 'alter table "_prod_replica_set".sl_node set without oids;'; execute 'alter table "_prod_replica_set".sl_nodelock set without oids;'; execute 'alter table "_prod_replica_set".sl_path set without oids;'; execute 'alter table "_prod_replica_set".sl_sequence set without oids;'; execute 'alter table "_prod_replica_set".sl_set set without oids;'; execute 'alter table "_prod_replica_set".sl_setsync set without oids;'; execute 'alter table 
"_prod_replica_set".sl_subscribe set without oids;'; execute 'alter table "_prod_replica_set".sl_table set without oids;'; execute 'alter table "_prod_replica_set".sl_trigger set without oids;'; end if; -- ---- -- Changes for 1.2.11 -- ---- if p_old IN ('1.0.2', '1.0.5', '1.0.6', '1.1.0', '1.1.1', '1.1.2', '1.1.3','1.1.5', '1.1.6', '1.1.7', '1.1.8', '1.1.9', '1.2.0', '1.2.1', '1.2.2', '1.2.3', '1.2.4', '1.2.5', '1.2.6', '1.2.7', '1.2.8', '1.2.9', '1.2.10') then -- Add new table sl_archive_counter execute 'create table "_prod_replica_set".sl_archive_counter ( ac_num bigint, ac_timestamp timestamp ) without oids'; execute 'insert into "_prod_replica_set".sl_archive_counter (ac_num, ac_timestamp) values (0, ''epoch''::timestamp)'; end if; -- In any version, make sure that the xxidin() functions are defined STRICT perform "_prod_replica_set".make_function_strict ('xxidin', '(cstring)'); return p_old; end;
_Slony_I_xxid_ge_snapshot
_Slony_I_xxid_lt_snapshot
_Slony_I_xxid_snapshot_in
_Slony_I_xxid_snapshot_out
_Slony_I_xxideq
_Slony_I_xxidge
_Slony_I_xxidgt
_Slony_I_xxidin
_Slony_I_xxidle
_Slony_I_xxidlt
_Slony_I_xxidne
_Slony_I_xxidout
F-Key | Name | Type | Description |
---|---|---|---|
usr_post_code | text | ||
usr_home_ou | integer | NOT NULL | |
usr_profile | integer | NOT NULL | |
usr_birth_year | integer | ||
copy_call_number | integer | NOT NULL | |
copy_location | integer | NOT NULL | |
copy_owning_lib | integer | NOT NULL | |
copy_circ_lib | integer | NOT NULL | |
copy_bib_record | bigint | NOT NULL | |
id | bigint | PRIMARY KEY | |
xact_start | timestamp with time zone | NOT NULL | |
xact_finish | timestamp with time zone | ||
target_copy | bigint | NOT NULL | |
circ_lib | integer | NOT NULL | |
circ_staff | integer | NOT NULL | |
checkin_staff | integer | ||
checkin_lib | integer | ||
renewal_remaining | integer | NOT NULL | |
due_date | timestamp with time zone | ||
stop_fines_time | timestamp with time zone | ||
checkin_time | timestamp with time zone | ||
duration | interval | ||
fine_interval | interval | NOT NULL | |
recuring_fine | numeric(6,2) | ||
max_fine | numeric(6,2) | ||
phone_renewal | boolean | NOT NULL | |
desk_renewal | boolean | NOT NULL | |
opac_renewal | boolean | NOT NULL | |
duration_rule | text | NOT NULL | |
recuring_fine_rule | text | NOT NULL | |
max_fine_rule | text | NOT NULL | |
stop_fines | text | ||
create_time | timestamp with time zone |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigint | ||
usr_post_code | text | ||
usr_home_ou | integer | ||
usr_profile | integer | ||
usr_birth_year | integer | ||
copy_call_number | bigint | ||
copy_location | integer | ||
copy_owning_lib | integer | ||
copy_circ_lib | integer | ||
copy_bib_record | bigint | ||
xact_start | timestamp with time zone | ||
xact_finish | timestamp with time zone | ||
target_copy | bigint | ||
circ_lib | integer | ||
circ_staff | integer | ||
checkin_staff | integer | ||
checkin_lib | integer | ||
renewal_remaining | integer | ||
due_date | timestamp with time zone | ||
stop_fines_time | timestamp with time zone | ||
checkin_time | timestamp with time zone | ||
create_time | timestamp with time zone | ||
duration | interval | ||
fine_interval | interval | ||
recuring_fine | numeric | ||
max_fine | numeric | ||
phone_renewal | boolean | ||
desk_renewal | boolean | ||
opac_renewal | boolean | ||
duration_rule | text | ||
recuring_fine_rule | text | ||
max_fine_rule | text | ||
stop_fines | text |
SELECT aged_circulation.id , aged_circulation.usr_post_code , aged_circulation.usr_home_ou , aged_circulation.usr_profile , aged_circulation.usr_birth_year , aged_circulation.copy_call_number , aged_circulation.copy_location , aged_circulation.copy_owning_lib , aged_circulation.copy_circ_lib , aged_circulation.copy_bib_record , aged_circulation.xact_start , aged_circulation.xact_finish , aged_circulation.target_copy , aged_circulation.circ_lib , aged_circulation.circ_staff , aged_circulation.checkin_staff , aged_circulation.checkin_lib , aged_circulation.renewal_remaining , aged_circulation.due_date , aged_circulation.stop_fines_time , aged_circulation.checkin_time , aged_circulation.create_time , aged_circulation.duration , aged_circulation.fine_interval , aged_circulation.recuring_fine , aged_circulation.max_fine , aged_circulation.phone_renewal , aged_circulation.desk_renewal , aged_circulation.opac_renewal , aged_circulation.duration_rule , aged_circulation.recuring_fine_rule , aged_circulation.max_fine_rule , aged_circulation.stop_fines FROM"action".aged_circulation UNION ALL ( SELECT DISTINCT circ.id , COALESCE (a.post_code , b.post_code ) AS usr_post_code , p.home_ou AS usr_home_ou , p.profile AS usr_profile , (date_part ('year'::text , p.dob ) )::integer AS usr_birth_year , cp.call_number AS copy_call_number , cp."location" AS copy_location , cn.owning_lib AS copy_owning_lib , cp.circ_lib AS copy_circ_lib , cn.record AS copy_bib_record , circ.xact_start , circ.xact_finish , circ.target_copy , circ.circ_lib , circ.circ_staff , circ.checkin_staff , circ.checkin_lib , circ.renewal_remaining , circ.due_date , circ.stop_fines_time , circ.checkin_time , circ.create_time , circ.duration , circ.fine_interval , circ.recuring_fine , circ.max_fine , circ.phone_renewal , circ.desk_renewal , circ.opac_renewal , circ.duration_rule , circ.recuring_fine_rule , circ.max_fine_rule , circ.stop_fines FROM ( ( ( ( ("action".circulation circ JOIN asset."copy" cp ON ( (circ.target_copy = cp.id) ) ) JOIN asset.call_number cn ON ( (cp.call_number = cn.id) ) ) JOIN actor.usr p ON ( (circ.usr = p.id) ) ) LEFT JOIN actor.usr_address a ON ( (p.mailing_address = a.id) ) ) LEFT JOIN actor.usr_address b ON ( (p.billing_address = a.id) ) ) ORDER BY circ.id , COALESCE (a.post_code , b.post_code ) , p.home_ou , p.profile , (date_part ('year'::text , p.dob ) )::integer , cp.call_number , cp."location" , cn.owning_lib , cp.circ_lib , cn.record , circ.xact_start , circ.xact_finish , circ.target_copy , circ.circ_lib , circ.circ_staff , circ.checkin_staff , circ.checkin_lib , circ.renewal_remaining , circ.due_date , circ.stop_fines_time , circ.checkin_time , circ.create_time , circ.duration , circ.fine_interval , circ.recuring_fine , circ.max_fine , circ.phone_renewal , circ.desk_renewal , circ.opac_renewal , circ.duration_rule , circ.recuring_fine_rule , circ.max_fine_rule , circ.stop_fines );
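The view defined above unions the anonymized action.aged_circulation rows with the live rows from action.circulation, so reporting code sees every transaction whether or not it has been aged out. A hedged usage sketch (the cutoff date is only an example):

-- Illustrative report query against the combined view; the date is a placeholder.
SELECT copy_circ_lib, count(*) AS circ_count
  FROM "action".all_circulation
 WHERE xact_start >= '2009-01-01'
 GROUP BY copy_circ_lib;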
F-Key | Name | Type | Description |
---|---|---|---|
id | bigint | ||
usr | integer | ||
xact_start | timestamp with time zone | ||
xact_finish | timestamp with time zone | ||
target_copy | bigint | ||
circ_lib | integer | ||
circ_staff | integer | ||
checkin_staff | integer | ||
checkin_lib | integer | ||
renewal_remaining | integer | ||
due_date | timestamp with time zone | ||
stop_fines_time | timestamp with time zone | ||
checkin_time | timestamp with time zone | ||
duration | interval | ||
fine_interval | interval | ||
recuring_fine | numeric(6,2) | ||
max_fine | numeric(6,2) | ||
phone_renewal | boolean | ||
desk_renewal | boolean | ||
opac_renewal | boolean | ||
duration_rule | text | ||
recuring_fine_rule | text | ||
max_fine_rule | text | ||
stop_fines | text |
SELECT circulation.id, circulation.usr, circulation.xact_start, circulation.xact_finish,
       circulation.target_copy, circulation.circ_lib, circulation.circ_staff, circulation.checkin_staff,
       circulation.checkin_lib, circulation.renewal_remaining, circulation.due_date, circulation.stop_fines_time,
       circulation.checkin_time, circulation.duration, circulation.fine_interval, circulation.recuring_fine,
       circulation.max_fine, circulation.phone_renewal, circulation.desk_renewal, circulation.opac_renewal,
       circulation.duration_rule, circulation.recuring_fine_rule, circulation.max_fine_rule, circulation.stop_fines
  FROM "action".circulation
 WHERE (circulation.xact_finish IS NULL);
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigint | PRIMARY KEY DEFAULT nextval('money.billable_xact_id_seq'::regclass) | |
actor.usr.id | usr | integer | NOT NULL |
xact_start | timestamp with time zone | NOT NULL DEFAULT now() | |
xact_finish | timestamp with time zone | ||
asset.copy.id | target_copy | bigint | NOT NULL |
actor.org_unit.id | circ_lib | integer | NOT NULL |
circ_staff | integer | NOT NULL | |
checkin_staff | integer | ||
checkin_lib | integer | ||
renewal_remaining | integer | NOT NULL | |
due_date | timestamp with time zone | ||
stop_fines_time | timestamp with time zone | ||
checkin_time | timestamp with time zone | ||
duration | interval | ||
fine_interval | interval | NOT NULL DEFAULT '1 day'::interval | |
recuring_fine | numeric(6,2) | ||
max_fine | numeric(6,2) | ||
phone_renewal | boolean | NOT NULL DEFAULT false | |
desk_renewal | boolean | NOT NULL DEFAULT false | |
opac_renewal | boolean | NOT NULL DEFAULT false | |
duration_rule | text | NOT NULL | |
recuring_fine_rule | text | NOT NULL | |
max_fine_rule | text | NOT NULL | |
stop_fines | text | ||
create_time | timestamp with time zone | ||
unrecovered | boolean |
Table action.circulation inherits billable_xact.
Name | Constraint |
---|---|
circulation_stop_fines_check | CHECK (((((((stop_fines = 'CHECKIN'::text) OR (stop_fines = 'CLAIMSRETURNED'::text)) OR (stop_fines = 'LOST'::text)) OR (stop_fines = 'MAXFINES'::text)) OR (stop_fines = 'RENEW'::text)) OR (stop_fines = 'LONGOVERDUE'::text))) |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | serial | PRIMARY KEY | |
action.hold_request.id | hold | integer | UNIQUE#1 NOT NULL |
asset.copy.id | target_copy | bigint | UNIQUE#1 NOT NULL |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | serial | PRIMARY KEY | |
action.hold_request.id | hold | integer | NOT NULL |
actor.usr.id | notify_staff | integer | |
notify_time | timestamp with time zone | NOT NULL DEFAULT now() | |
method | text | NOT NULL | |
note | text |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | serial | PRIMARY KEY | |
request_time | timestamp with time zone | NOT NULL DEFAULT now() | |
capture_time | timestamp with time zone | ||
fulfillment_time | timestamp with time zone | ||
checkin_time | timestamp with time zone | ||
return_time | timestamp with time zone | ||
prev_check_time | timestamp with time zone | ||
expire_time | timestamp with time zone | ||
cancel_time | timestamp with time zone | ||
target | bigint | NOT NULL | |
asset.copy.id | current_copy | bigint | |
actor.usr.id | fulfillment_staff | integer | |
actor.org_unit.id | fulfillment_lib | integer | |
actor.org_unit.id | request_lib | integer | NOT NULL |
actor.usr.id | requestor | integer | NOT NULL |
actor.usr.id | usr | integer | NOT NULL |
selection_ou | integer | NOT NULL | |
selection_depth | integer | NOT NULL | |
actor.org_unit.id | pickup_lib | integer | NOT NULL |
hold_type | text | NOT NULL | |
holdable_formats | text | ||
phone_notify | text | ||
email_notify | boolean | NOT NULL DEFAULT true | |
frozen | boolean | NOT NULL DEFAULT false | |
thaw_date | timestamp with time zone |
Name | Constraint |
---|---|
hold_request_hold_type_check | CHECK (((((hold_type = 'M'::text) OR (hold_type = 'T'::text)) OR (hold_type = 'V'::text)) OR (hold_type = 'C'::text))) |
Tables referencing this one via Foreign Key Constraints:
Index | Columns |
---|---|
hold_request_current_copy_idx | current_copy |
hold_request_pickup_lib_idx | pickup_lib |
hold_request_prev_check_time_idx | prev_check_time |
hold_request_target_idx | target |
hold_request_usr_idx | usr |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | integer | PRIMARY KEY DEFAULT nextval('"action".transit_copy_id_seq'::regclass) | |
source_send_time | timestamp with time zone | ||
dest_recv_time | timestamp with time zone | ||
asset.copy.id | target_copy | bigint | NOT NULL |
source | integer | NOT NULL | |
dest | integer | NOT NULL | |
prev_hop | integer | ||
copy_status | integer | NOT NULL | |
persistant_transfer | boolean | NOT NULL DEFAULT false | |
action.hold_request.id | hold | integer |
Table action.hold_transit_copy inherits transit_copy.
Index | Columns |
---|---|
active_hold_transit_cp_idx | target_copy |
active_hold_transit_dest_idx | dest |
active_hold_transit_source_idx | source |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | serial | PRIMARY KEY | |
asset.copy.id | item | bigint | NOT NULL |
actor.usr.id | staff | integer | NOT NULL |
actor.org_unit.id | org_unit | integer | NOT NULL |
use_time | timestamp with time zone | NOT NULL DEFAULT now() |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | serial | PRIMARY KEY | |
config.non_cataloged_type.id | item_type | bigint | NOT NULL |
actor.usr.id | staff | integer | NOT NULL |
actor.org_unit.id | org_unit | integer | NOT NULL |
use_time | timestamp with time zone | NOT NULL DEFAULT now() |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | serial | PRIMARY KEY | |
actor.usr.id | patron | integer | NOT NULL |
actor.usr.id | staff | integer | NOT NULL |
actor.org_unit.id | circ_lib | integer | NOT NULL |
config.non_cataloged_type.id | item_type | integer | NOT NULL |
circ_time | timestamp with time zone | NOT NULL DEFAULT now() |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
usr | integer | ||
circ_modifier | text | ||
count | bigint |
SELECT circ.usr, cp.circ_modifier, count(circ.id) AS count
  FROM ("action".circulation circ
        JOIN asset."copy" cp ON ((circ.target_copy = cp.id)))
 WHERE ((circ.checkin_time IS NULL)
        AND ((((circ.stop_fines = 'LOST'::text)
               OR (circ.stop_fines = 'LONGOVERDUE'::text))
              OR (circ.stop_fines = 'CLAIMSRETURNED'::text))
             OR (circ.stop_fines IS NULL)))
 GROUP BY circ.usr, cp.circ_modifier;
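This view counts each patron's open, lost, long-overdue, and claims-returned circulations per circulation modifier. To look at a single patron it could be filtered like this (the view name open_circ_count and the patron ID are assumptions; the view's name does not appear in this excerpt):

-- Illustrative only: the view name and the patron ID 12345 are placeholders.
SELECT circ_modifier, count
  FROM "action".open_circ_count
 WHERE usr = 12345;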
F-Key | Name | Type | Description |
---|---|---|---|
id | bigint | ||
usr | integer | ||
xact_start | timestamp with time zone | ||
xact_finish | timestamp with time zone | ||
target_copy | bigint | ||
circ_lib | integer | ||
circ_staff | integer | ||
checkin_staff | integer | ||
checkin_lib | integer | ||
renewal_remaining | integer | ||
due_date | timestamp with time zone | ||
stop_fines_time | timestamp with time zone | ||
checkin_time | timestamp with time zone | ||
duration | interval | ||
fine_interval | interval | ||
recuring_fine | numeric(6,2) | ||
max_fine | numeric(6,2) | ||
phone_renewal | boolean | ||
desk_renewal | boolean | ||
opac_renewal | boolean | ||
duration_rule | text | ||
recuring_fine_rule | text | ||
max_fine_rule | text | ||
stop_fines | text |
SELECT circulation.id, circulation.usr, circulation.xact_start, circulation.xact_finish,
       circulation.target_copy, circulation.circ_lib, circulation.circ_staff, circulation.checkin_staff,
       circulation.checkin_lib, circulation.renewal_remaining, circulation.due_date, circulation.stop_fines_time,
       circulation.checkin_time, circulation.duration, circulation.fine_interval, circulation.recuring_fine,
       circulation.max_fine, circulation.phone_renewal, circulation.desk_renewal, circulation.opac_renewal,
       circulation.duration_rule, circulation.recuring_fine_rule, circulation.max_fine_rule, circulation.stop_fines
  FROM "action".circulation
 WHERE (circulation.checkin_time IS NULL)
 ORDER BY circulation.due_date;
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | serial | PRIMARY KEY | |
actor.org_unit.id | owner | integer | NOT NULL |
start_date | timestamp with time zone | NOT NULL DEFAULT now() | |
end_date | timestamp with time zone | NOT NULL DEFAULT (now() + '10 years'::interval) | |
usr_summary | boolean | NOT NULL DEFAULT false | |
opac | boolean | NOT NULL DEFAULT false | |
poll | boolean | NOT NULL DEFAULT false | |
required | boolean | NOT NULL DEFAULT false | |
name | text | NOT NULL | |
description | text | NOT NULL |
Tables referencing this one via Foreign Key Constraints:
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | serial | PRIMARY KEY | |
action.survey_question.id | question | integer | NOT NULL |
answer | text | NOT NULL |
Tables referencing this one via Foreign Key Constraints:
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | serial | PRIMARY KEY | |
action.survey.id | survey | integer | NOT NULL |
question | text | NOT NULL |
Tables referencing this one via Foreign Key Constraints:
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigserial | PRIMARY KEY | |
response_group_id | integer | ||
usr | integer | ||
action.survey.id | survey | integer | NOT NULL |
action.survey_question.id | question | integer | NOT NULL |
action.survey_answer.id | answer | integer | NOT NULL |
answer_date | timestamp with time zone | ||
effective_date | timestamp with time zone | NOT NULL DEFAULT now() |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | serial | PRIMARY KEY | |
source_send_time | timestamp with time zone | ||
dest_recv_time | timestamp with time zone | ||
asset.copy.id | target_copy | bigint | NOT NULL |
actor.org_unit.id | source | integer | NOT NULL |
actor.org_unit.id | dest | integer | NOT NULL |
action.transit_copy.id | prev_hop | integer | |
config.copy_status.id | copy_status | integer | NOT NULL |
persistant_transfer | boolean | NOT NULL DEFAULT false |
Tables referencing this one via Foreign Key Constraints:
Index | Columns |
---|---|
active_transit_cp_idx | target_copy |
active_transit_dest_idx | dest |
active_transit_source_idx | source |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigserial | PRIMARY KEY | |
current_copy | bigint | NOT NULL | |
hold | integer | NOT NULL | |
circ_lib | integer | NOT NULL | |
fail_time | timestamp with time zone | NOT NULL DEFAULT now() |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
BEGIN INSERT INTO action.aged_circulation (id,usr_post_code, usr_home_ou, usr_profile, usr_birth_year, copy_call_number, copy_location, copy_owning_lib, copy_circ_lib, copy_bib_record, xact_start, xact_finish, target_copy, circ_lib, circ_staff, checkin_staff, checkin_lib, renewal_remaining, due_date, stop_fines_time, checkin_time, create_time, duration, fine_interval, recuring_fine, max_fine, phone_renewal, desk_renewal, opac_renewal, duration_rule, recuring_fine_rule, max_fine_rule, stop_fines) SELECT id,usr_post_code, usr_home_ou, usr_profile, usr_birth_year, copy_call_number, copy_location, copy_owning_lib, copy_circ_lib, copy_bib_record, xact_start, xact_finish, target_copy, circ_lib, circ_staff, checkin_staff, checkin_lib, renewal_remaining, due_date, stop_fines_time, checkin_time, create_time, duration, fine_interval, recuring_fine, max_fine, phone_renewal, desk_renewal, opac_renewal, duration_rule, recuring_fine_rule, max_fine_rule, stop_fines FROM action.all_circulation WHERE id = OLD.id; RETURN OLD; END;
BEGIN
    IF OLD.stop_fines IS NULL OR OLD.stop_fines <> NEW.stop_fines THEN
        IF NEW.stop_fines = 'CLAIMSRETURNED' THEN
            UPDATE actor.usr SET claims_returned_count = claims_returned_count + 1 WHERE id = NEW.usr;
        END IF;
        IF NEW.stop_fines = 'LOST' THEN
            UPDATE asset.copy SET status = 3 WHERE id = NEW.target_copy;
        END IF;
    END IF;
    RETURN NEW;
END;
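The two trigger-function bodies above implement the archiving and side effects implied by the tables earlier in this section: the first copies a circulation into action.aged_circulation before the row is deleted, and the second bumps the patron's claims-returned count or sets the copy's status to 3 (Lost) when stop_fines changes. A sketch of how such functions are typically attached (the trigger and function names below are placeholders; the actual trigger definitions are not part of this dump):

-- Hypothetical trigger attachment; names are placeholders, only the
-- function bodies appear in the dump above.
CREATE TRIGGER age_circ_on_delete
    BEFORE DELETE ON "action".circulation
    FOR EACH ROW EXECUTE PROCEDURE "action".age_circ_on_delete();

CREATE TRIGGER circ_stop_fines_update
    BEFORE UPDATE ON "action".circulation
    FOR EACH ROW EXECUTE PROCEDURE "action".circ_stop_fines_update();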
DECLARE current_group permission.grp_tree%ROWTYPE; user_object actor.usr%ROWTYPE; item_object asset.copy%ROWTYPE; rec_descriptor metabib.rec_descriptor%ROWTYPE; current_mp config.circ_matrix_matchpoint%ROWTYPE; matchpoint config.circ_matrix_matchpoint%ROWTYPE; BEGIN SELECT INTO user_object * FROM actor.usr WHERE id = match_user; SELECT INTO item_object * FROM asset.copy WHERE id = match_item; SELECT INTO rec_descriptor r.* FROM metabib.rec_descriptor r JOIN asset.call_number c USING (record) WHERE c.id = item_object.call_number; SELECT INTO current_group * FROM permission.grp_tree WHERE id = user_object.profile; LOOP -- for each potential matchpoint for this ou and group ... FOR current_mp IN SELECT m.* FROM config.circ_matrix_matchpoint m JOIN actor.org_unit_ancestors( context_ou ) d ON (m.org_unit = d.id) LEFT JOIN actor.org_unit_proximity p ON (p.from_org = context_ou AND p.to_org = d.id) WHERE m.grp = current_group.id AND m.active ORDER BY CASE WHEN p.prox IS NULL THEN 999 ELSE p.prox END, CASE WHEN m.is_renewal = renewal THEN 64 ELSE 0 END + CASE WHEN m.circ_modifier IS NOT NULL THEN 32 ELSE 0 END + CASE WHEN m.marc_type IS NOT NULL THEN 16 ELSE 0 END + CASE WHEN m.marc_form IS NOT NULL THEN 8 ELSE 0 END + CASE WHEN m.marc_vr_format IS NOT NULL THEN 4 ELSE 0 END + CASE WHEN m.ref_flag IS NOT NULL THEN 2 ELSE 0 END + CASE WHEN m.usr_age_lower_bound IS NOT NULL THEN 0.5 ELSE 0 END + CASE WHEN m.usr_age_upper_bound IS NOT NULL THEN 0.5 ELSE 0 END DESC LOOP IF current_mp.circ_modifier IS NOT NULL THEN CONTINUE WHEN current_mp.circ_modifier <> item_object.circ_modifier; END IF; IF current_mp.marc_type IS NOT NULL THEN IF item_object.circ_as_type IS NOT NULL THEN CONTINUE WHEN current_mp.marc_type <> item_object.circ_as_type; ELSE CONTINUE WHEN current_mp.marc_type <> rec_descriptor.item_type; END IF; END IF; IF current_mp.marc_form IS NOT NULL THEN CONTINUE WHEN current_mp.marc_form <> rec_descriptor.item_form; END IF; IF current_mp.marc_vr_format IS NOT NULL THEN CONTINUE WHEN current_mp.marc_vr_format <> rec_descriptor.vr_format; END IF; IF current_mp.ref_flag IS NOT NULL THEN CONTINUE WHEN current_mp.ref_flag <> item_object.ref; END IF; IF current_mp.usr_age_lower_bound IS NOT NULL THEN CONTINUE WHEN user_object.dob IS NULL OR current_mp.usr_age_lower_bound < age(user_object.dob); END IF; IF current_mp.usr_age_upper_bound IS NOT NULL THEN CONTINUE WHEN user_object.dob IS NULL OR current_mp.usr_age_upper_bound > age(user_object.dob); END IF; -- everything was undefined or matched matchpoint = current_mp; EXIT WHEN matchpoint.id IS NOT NULL; END LOOP; EXIT WHEN current_group.parent IS NULL OR matchpoint.id IS NOT NULL; SELECT INTO current_group * FROM permission.grp_tree WHERE id = current_group.parent; END LOOP; RETURN matchpoint.id; END;
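The function above walks the patron's permission-group tree and, for each group, scans config.circ_matrix_matchpoint rows ordered by org-unit proximity and rule specificity, returning the id of the best matching row. Its argument order can be read from the call inside the circulation test further below: context org unit, item, user, renewal flag. A call might therefore look like this (all IDs are placeholders):

-- Illustrative only: 4, 1501 and 12345 are placeholder org unit, copy and patron IDs.
SELECT action.find_circ_matrix_matchpoint(4, 1501, 12345, false);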
DECLARE current_requestor_group permission.grp_tree%ROWTYPE; root_ou actor.org_unit%ROWTYPE; requestor_object actor.usr%ROWTYPE; user_object actor.usr%ROWTYPE; item_object asset.copy%ROWTYPE; item_cn_object asset.call_number%ROWTYPE; rec_descriptor metabib.rec_descriptor%ROWTYPE; current_mp_weight FLOAT; matchpoint_weight FLOAT; tmp_weight FLOAT; current_mp config.hold_matrix_matchpoint%ROWTYPE; matchpoint config.hold_matrix_matchpoint%ROWTYPE; BEGIN SELECT INTO root_ou * FROM actor.org_unit WHERE parent_ou IS NULL; SELECT INTO user_object * FROM actor.usr WHERE id = match_user; SELECT INTO requestor_object * FROM actor.usr WHERE id = match_requestor; SELECT INTO item_object * FROM asset.copy WHERE id = match_item; SELECT INTO item_cn_object * FROM asset.call_number WHERE id = item_object.call_number; SELECT INTO rec_descriptor r.* FROM metabib.rec_descriptor r WHERE r.record = item_cn_object.record; SELECT INTO current_requestor_group * FROM permission.grp_tree WHERE id = requestor_object.profile; LOOP -- for each potential matchpoint for this ou and group ... FOR current_mp IN SELECT m.* FROM config.hold_matrix_matchpoint m WHERE m.requestor_grp = current_requestor_group.id AND m.active ORDER BY CASE WHEN m.circ_modifier IS NOT NULL THEN 16 ELSE 0 END + CASE WHEN m.marc_type IS NOT NULL THEN 8 ELSE 0 END + CASE WHEN m.marc_form IS NOT NULL THEN 4 ELSE 0 END + CASE WHEN m.marc_vr_format IS NOT NULL THEN 2 ELSE 0 END + CASE WHEN m.ref_flag IS NOT NULL THEN 1 ELSE 0 END DESC LOOP current_mp_weight := 5.0; IF current_mp.circ_modifier IS NOT NULL THEN CONTINUE WHEN current_mp.circ_modifier <> item_object.circ_modifier; END IF; IF current_mp.marc_type IS NOT NULL THEN IF item_object.circ_as_type IS NOT NULL THEN CONTINUE WHEN current_mp.marc_type <> item_object.circ_as_type; ELSE CONTINUE WHEN current_mp.marc_type <> rec_descriptor.item_type; END IF; END IF; IF current_mp.marc_form IS NOT NULL THEN CONTINUE WHEN current_mp.marc_form <> rec_descriptor.item_form; END IF; IF current_mp.marc_vr_format IS NOT NULL THEN CONTINUE WHEN current_mp.marc_vr_format <> rec_descriptor.vr_format; END IF; IF current_mp.ref_flag IS NOT NULL THEN CONTINUE WHEN current_mp.ref_flag <> item_object.ref; END IF; -- caclulate the rule match weight IF current_mp.item_owning_ou IS NOT NULL AND current_mp.item_owning_ou <> root_ou.id THEN SELECT INTO tmp_weight 1.0 / (actor.org_unit_proximity(current_mp.item_owning_ou, item_cn_object.owning_lib)::FLOAT + 1.0)::FLOAT; current_mp_weight := current_mp_weight - tmp_weight; END IF; IF current_mp.item_circ_ou IS NOT NULL AND current_mp.item_circ_ou <> root_ou.id THEN SELECT INTO tmp_weight 1.0 / (actor.org_unit_proximity(current_mp.item_circ_ou, item_object.circ_lib)::FLOAT + 1.0)::FLOAT; current_mp_weight := current_mp_weight - tmp_weight; END IF; IF current_mp.pickup_ou IS NOT NULL AND current_mp.pickup_ou <> root_ou.id THEN SELECT INTO tmp_weight 1.0 / (actor.org_unit_proximity(current_mp.pickup_ou, pickup_ou)::FLOAT + 1.0)::FLOAT; current_mp_weight := current_mp_weight - tmp_weight; END IF; IF current_mp.request_ou IS NOT NULL AND current_mp.request_ou <> root_ou.id THEN SELECT INTO tmp_weight 1.0 / (actor.org_unit_proximity(current_mp.request_ou, request_ou)::FLOAT + 1.0)::FLOAT; current_mp_weight := current_mp_weight - tmp_weight; END IF; IF current_mp.user_home_ou IS NOT NULL AND current_mp.user_home_ou <> root_ou.id THEN SELECT INTO tmp_weight 1.0 / (actor.org_unit_proximity(current_mp.user_home_ou, user_object.home_ou)::FLOAT + 1.0)::FLOAT; current_mp_weight := 
current_mp_weight - tmp_weight; END IF; -- set the matchpoint if we found the best one IF matchpoint_weight IS NULL OR matchpoint_weight > current_mp_weight THEN matchpoint = current_mp; matchpoint_weight = current_mp_weight; END IF; END LOOP; EXIT WHEN current_requestor_group.parent IS NULL OR matchpoint.id IS NOT NULL; SELECT INTO current_requestor_group * FROM permission.grp_tree WHERE id = current_requestor_group.parent; END LOOP; RETURN matchpoint.id; END;
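This function does the same group-tree walk for hold rules, weighting each candidate config.hold_matrix_matchpoint row by org-unit proximity and keeping the lowest-weight match. Its name and argument order appear in the hold permit test below: pickup org unit, request org unit, item, user, requestor. An example call (all IDs are placeholders):

-- Illustrative only: every ID below is a placeholder.
SELECT action.find_hold_matrix_matchpoint(4, 4, 1501, 12345, 12346);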
DECLARE matchpoint_id INT; user_object actor.usr%ROWTYPE; age_protect_object config.rule_age_hold_protect%ROWTYPE; transit_range_ou_type actor.org_unit_type%ROWTYPE; transit_source actor.org_unit%ROWTYPE; item_object asset.copy%ROWTYPE; result action.matrix_test_result; hold_test config.hold_matrix_test%ROWTYPE; hold_count INT; hold_transit_prox INT; frozen_hold_count INT; patron_penalties INT; done BOOL := FALSE; BEGIN SELECT INTO user_object * FROM actor.usr WHERE id = match_user; -- Fail if we couldn't find a user IF user_object.id IS NULL THEN result.fail_part := 'no_user'; result.success := FALSE; done := TRUE; RETURN NEXT result; RETURN; END IF; -- Fail if user is barred IF user_object.barred IS TRUE THEN result.fail_part := 'actor.usr.barred'; result.success := FALSE; done := TRUE; RETURN NEXT result; RETURN; END IF; SELECT INTO item_object * FROM asset.copy WHERE id = match_item; -- Fail if we couldn't find a copy IF item_object.id IS NULL THEN result.fail_part := 'no_item'; result.success := FALSE; done := TRUE; RETURN NEXT result; RETURN; END IF; SELECT INTO matchpoint_id action.find_hold_matrix_matchpoint(pickup_ou, request_ou, match_item, match_user, match_requestor); -- Fail if we couldn't find any matchpoint (requires a default) IF matchpoint_id IS NULL THEN result.fail_part := 'no_matchpoint'; result.success := FALSE; done := TRUE; RETURN NEXT result; RETURN; END IF; SELECT INTO hold_test * FROM config.hold_matrix_test WHERE matchpoint = matchpoint_id; result.matchpoint := matchpoint_id; result.success := TRUE; IF hold_test.holdable IS FALSE THEN result.fail_part := 'config.hold_matrix_test.holdable'; result.success := FALSE; done := TRUE; RETURN NEXT result; END IF; IF hold_test.transit_range IS NOT NULL THEN SELECT INTO transit_range_ou_type * FROM actor.org_unit_type WHERE id = hold_test.transit_range; IF hold_test.distance_is_from_owner THEN SELECT INTO transit_source ou.* FROM actor.org_unit ou JOIN asset.call_number cn ON (cn.owning_lib = ou.id) WHERE cn.id = item_object.call_number; ELSE SELECT INTO transit_source * FROM actor.org_unit WHERE id = item_object.circ_lib; END IF; PERFORM * FROM actor.org_unit_descendants( transit_source.id, transit_range_ou_type.depth ) WHERE id = pickup_ou; IF NOT FOUND THEN result.fail_part := 'transit_range'; result.success := FALSE; done := TRUE; RETURN NEXT result; END IF; END IF; IF hold_test.stop_blocked_user IS TRUE THEN SELECT INTO patron_penalties COUNT(*) FROM actor.usr_standing_penalty WHERE usr = match_user; IF items_out > 0 THEN result.fail_part := 'config.hold_matrix_test.stop_blocked_user'; result.success := FALSE; done := TRUE; RETURN NEXT result; END IF; END IF; IF hold_test.max_holds IS NOT NULL THEN SELECT INTO hold_count COUNT(*) FROM action.hold_request WHERE usr = match_user AND fulfillment_time IS NULL AND cancel_time IS NULL AND CASE WHEN hold_test.include_frozen_holds THEN TRUE ELSE frozen IS FALSE END; IF items_out >= hold_test.max_holds THEN result.fail_part := 'config.hold_matrix_test.max_holds'; result.success := FALSE; done := TRUE; RETURN NEXT result; END IF; END IF; IF item_object.age_protect IS NOT NULL THEN SELECT INTO age_protect_object * FROM config.rule_age_hold_protect WHERE id = item_object.age_protect; IF item_object.create_date + age_protect_object.age > NOW() THEN IF hold_test.distance_is_from_owner THEN SELECT INTO hold_transit_prox prox FROM actor.org_unit_prox WHERE from_org = item_cn_object.owning_lib AND to_org = pickup_ou; ELSE SELECT INTO hold_transit_prox prox FROM actor.org_unit_prox 
WHERE from_org = item_object.circ_lib AND to_org = pickup_ou; END IF; IF hold_transit_prox > age_protect_object.prox THEN result.fail_part := 'config.rule_age_hold_protect.prox'; result.success := FALSE; done := TRUE; RETURN NEXT result; END IF; END IF; END IF; IF NOT done THEN RETURN NEXT result; END IF; RETURN; END;
DECLARE matchpoint_id INT; user_object actor.usr%ROWTYPE; item_object asset.copy%ROWTYPE; item_status_object config.copy_status%ROWTYPE; item_location_object asset.copy_location%ROWTYPE; result action.matrix_test_result; circ_test config.circ_matrix_test%ROWTYPE; out_by_circ_mod config.circ_matrix_circ_mod_test%ROWTYPE; items_out INT; items_overdue INT; overdue_orgs INT[]; current_fines NUMERIC(8,2) := 0.0; tmp_fines NUMERIC(8,2); tmp_groc RECORD; tmp_circ RECORD; done BOOL := FALSE; BEGIN result.success := TRUE; -- Fail if the user is BARRED SELECT INTO user_object * FROM actor.usr WHERE id = match_user; -- Fail if we couldn't find a user IF user_object.id IS NULL THEN result.fail_part := 'no_user'; result.success := FALSE; done := TRUE; RETURN NEXT result; RETURN; END IF; IF user_object.barred IS TRUE THEN result.fail_part := 'actor.usr.barred'; result.success := FALSE; done := TRUE; RETURN NEXT result; END IF; -- Fail if the item can't circulate SELECT INTO item_object * FROM asset.copy WHERE id = match_item; IF item_object.circulate IS FALSE THEN result.fail_part := 'asset.copy.circulate'; result.success := FALSE; done := TRUE; RETURN NEXT result; END IF; -- Fail if the item isn't in a circulateable status on a non-renewal IF NOT renewal AND item_object.status NOT IN ( 0, 7, 8 ) THEN result.fail_part := 'asset.copy.status'; result.success := FALSE; done := TRUE; RETURN NEXT result; ELSIF renewal AND item_object.status <> 1 THEN result.fail_part := 'asset.copy.status'; result.success := FALSE; done := TRUE; RETURN NEXT result; END IF; -- Fail if the item can't circulate because of the shelving location SELECT INTO item_location_object * FROM asset.copy_location WHERE id = item_object.location; IF item_location_object.circulate IS FALSE THEN result.fail_part := 'asset.copy_location.circulate'; result.success := FALSE; done := TRUE; RETURN NEXT result; END IF; SELECT INTO matchpoint_id action.find_circ_matrix_matchpoint(circ_ou, match_item, match_user, renewal); result.matchpoint := matchpoint_id; SELECT INTO circ_test * from config.circ_matrix_test WHERE matchpoint = result.matchpoint; IF circ_test.org_depth IS NOT NULL THEN SELECT INTO overdue_orgs ARRAY_ACCUM(id) FROM actor.org_unit_descendants( circ_ou, circ_test.org_depth ); END IF; -- Fail if we couldn't find a set of tests IF result.matchpoint IS NULL THEN result.fail_part := 'no_matchpoint'; result.success := FALSE; done := TRUE; RETURN NEXT result; END IF; -- Fail if the test is set to hard non-circulating IF circ_test.circulate IS FALSE THEN result.fail_part := 'config.circ_matrix_test.circulate'; result.success := FALSE; done := TRUE; RETURN NEXT result; END IF; -- Fail if the user has too many items checked out IF circ_test.max_items_out IS NOT NULL THEN SELECT INTO items_out COUNT(*) FROM action.circulation WHERE usr = match_user AND (circ_test.org_depth IS NULL OR (circ_test.org_depth IS NOT NULL AND circ_lib IN ( SELECT * FROM explode_array(overdue_orgs) ))) AND checkin_time IS NULL AND (stop_fines IN ('MAXFINES','LONGOVERDUE') OR stop_fines IS NULL); IF items_out >= circ_test.max_items_out THEN result.fail_part := 'config.circ_matrix_test.max_items_out'; result.success := FALSE; done := TRUE; RETURN NEXT result; END IF; END IF; -- Fail if the user has too many items with specific circ_modifiers checked out FOR out_by_circ_mod IN SELECT * FROM config.circ_matrix_circ_mod_test WHERE matchpoint = matchpoint_id LOOP SELECT INTO items_out COUNT(*) FROM action.circulation circ JOIN asset.copy cp ON (cp.id = circ.target_copy) 
WHERE circ.usr = match_user AND (circ_test.org_depth IS NULL OR (circ_test.org_depth IS NOT NULL AND circ_lib IN ( SELECT * FROM explode_array(overdue_orgs) ))) AND circ.checkin_time IS NULL AND (circ.stop_fines IN ('MAXFINES','LONGOVERDUE') OR circ.stop_fines IS NULL) AND cp.circ_modifier = out_by_circ_mod.circ_mod; IF items_out >= out_by_circ_mod.items_out THEN result.fail_part := 'config.circ_matrix_circ_mod_test'; result.success := FALSE; done := TRUE; RETURN NEXT result; END IF; END LOOP; -- Fail if the user has too many overdue items IF circ_test.max_overdue IS NOT NULL THEN SELECT INTO items_overdue COUNT(*) FROM action.circulation WHERE usr = match_user AND (circ_test.org_depth IS NULL OR (circ_test.org_depth IS NOT NULL AND circ_lib IN ( SELECT * FROM explode_array(overdue_orgs) ))) AND checkin_time IS NULL AND due_date < NOW() AND (stop_fines IN ('MAXFINES','LONGOVERDUE') OR stop_fines IS NULL); IF items_overdue >= circ_test.max_overdue THEN result.fail_part := 'config.circ_matrix_test.max_overdue'; result.success := FALSE; done := TRUE; RETURN NEXT result; END IF; END IF; -- Fail if the user has a high fine balance IF circ_test.max_fines IS NOT NULL THEN FOR tmp_groc IN SELECT * FROM money.grocery WHERE usr = match_usr AND xact_finish IS NULL AND (circ_test.org_depth IS NULL OR (circ_test.org_depth IS NOT NULL AND billing_location IN ( SELECT * FROM explode_array(overdue_orgs) ))) LOOP SELECT INTO tmp_fines SUM( amount ) FROM money.billing WHERE xact = tmp_groc.id AND NOT voided; current_fines = current_fines + COALESCE(tmp_fines, 0.0); SELECT INTO tmp_fines SUM( amount ) FROM money.payment WHERE xact = tmp_groc.id AND NOT voided; current_fines = current_fines - COALESCE(tmp_fines, 0.0); END LOOP; FOR tmp_circ IN SELECT * FROM action.circulation WHERE usr = match_usr AND xact_finish IS NULL AND (circ_test.org_depth IS NULL OR (circ_test.org_depth IS NOT NULL AND circ_lib IN ( SELECT * FROM explode_array(overdue_orgs) ))) LOOP SELECT INTO tmp_fines SUM( amount ) FROM money.billing WHERE xact = tmp_circ.id AND NOT voided; current_fines = current_fines + COALESCE(tmp_fines, 0.0); SELECT INTO tmp_fines SUM( amount ) FROM money.payment WHERE xact = tmp_circ.id AND NOT voided; current_fines = current_fines - COALESCE(tmp_fines, 0.0); END LOOP; IF current_fines >= circ_test.max_fines THEN result.fail_part := 'config.circ_matrix_test.max_fines'; result.success := FALSE; RETURN NEXT result; done := TRUE; END IF; END IF; -- If we passed everything, return the successful matchpoint id IF NOT done THEN RETURN NEXT result; END IF; RETURN; END;
SELECT * FROM action.item_user_circ_test( $1, $2, $3, FALSE );
SELECT * FROM action.item_user_circ_test( $1, $2, $3, TRUE );
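The two one-line bodies above simply delegate to action.item_user_circ_test with the renewal flag hard-coded to FALSE or TRUE, so checkout and renewal permit tests share one implementation. Called directly, the test returns a set of action.matrix_test_result rows (the IDs below are placeholders):

-- Illustrative only: org unit 4, copy 1501 and patron 12345 are placeholders.
SELECT success, fail_part, matchpoint
  FROM action.item_user_circ_test(4, 1501, 12345, false);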
BEGIN NEW.answer_date := NOW(); RETURN NEW; END;
/* * Copyright (C) 2005 Georgia Public Library Service * Mike Rylander <mrylander@gmail.com> * * Library Cards * * Each User has one or more library cards. The current "main" * card is linked to here from the actor.usr table, and it is up * to the consortium policy whether more than one card can be * active for any one user at a given time. * * * **** * * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public License * as published by the Free Software Foundation; either version 2 * of the License, or (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. */
F-Key | Name | Type | Description |
---|---|---|---|
id | serial | PRIMARY KEY | |
actor.usr.id | usr | integer | NOT NULL |
barcode | text | UNIQUE NOT NULL | |
active | boolean | NOT NULL DEFAULT true |
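Per the comment above, a patron may hold several cards while actor.usr.card points at the current one, and barcodes are unique. A common lookup, resolving a scanned barcode to its active owner, might look like this (the barcode value is a placeholder):

-- Illustrative only: the barcode is a placeholder value.
SELECT u.id, u.usrname, u.family_name
  FROM actor.card c
       JOIN actor.usr u ON (u.id = c.usr)
 WHERE c.barcode = '30000001234567'
   AND c.active;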
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
actor.org_unit.id | id | integer | PRIMARY KEY |
dow_0_open | time without time zone | NOT NULL DEFAULT '09:00:00'::time without time zone | |
dow_0_close | time without time zone | NOT NULL DEFAULT '17:00:00'::time without time zone | |
dow_1_open | time without time zone | NOT NULL DEFAULT '09:00:00'::time without time zone | |
dow_1_close | time without time zone | NOT NULL DEFAULT '17:00:00'::time without time zone | |
dow_2_open | time without time zone | NOT NULL DEFAULT '09:00:00'::time without time zone | |
dow_2_close | time without time zone | NOT NULL DEFAULT '17:00:00'::time without time zone | |
dow_3_open | time without time zone | NOT NULL DEFAULT '09:00:00'::time without time zone | |
dow_3_close | time without time zone | NOT NULL DEFAULT '17:00:00'::time without time zone | |
dow_4_open | time without time zone | NOT NULL DEFAULT '09:00:00'::time without time zone | |
dow_4_close | time without time zone | NOT NULL DEFAULT '17:00:00'::time without time zone | |
dow_5_open | time without time zone | NOT NULL DEFAULT '09:00:00'::time without time zone | |
dow_5_close | time without time zone | NOT NULL DEFAULT '17:00:00'::time without time zone | |
dow_6_open | time without time zone | NOT NULL DEFAULT '09:00:00'::time without time zone | |
dow_6_close | time without time zone | NOT NULL DEFAULT '17:00:00'::time without time zone |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | serial | PRIMARY KEY | |
valid | boolean | NOT NULL DEFAULT true | |
address_type | text | NOT NULL DEFAULT 'MAILING'::text | |
actor.org_unit.id | org_unit | integer | NOT NULL |
street1 | text | NOT NULL | |
street2 | text | ||
city | text | NOT NULL | |
county | text | ||
state | text | NOT NULL | |
country | text | NOT NULL | |
post_code | text | NOT NULL |
Tables referencing this one via Foreign Key Constraints:
Index | Columns |
---|---|
actor_org_address_org_unit_idx | org_unit |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | serial | PRIMARY KEY | |
name | text | UNIQUE |
Tables referencing this one via Foreign Key Constraints:
F-Key | Name | Type | Description |
---|---|---|---|
id | serial | PRIMARY KEY | |
actor.org_lasso.id | lasso | integer | NOT NULL |
actor.org_unit.id | org_unit | integer | NOT NULL |
F-Key | Name | Type | Description |
---|---|---|---|
id | serial | PRIMARY KEY | |
actor.org_unit.id | parent_ou | integer | |
actor.org_unit_type.id | ou_type | integer | NOT NULL |
actor.org_address.id | ill_address | integer | |
actor.org_address.id | holds_address | integer | |
actor.org_address.id | mailing_address | integer | |
actor.org_address.id | billing_address | integer | |
shortname | text | NOT NULL | |
name | text | NOT NULL | |
email | text | |
phone | text | ||
opac_visible | boolean | NOT NULL DEFAULT true |
Tables referencing this one via Foreign Key Constraints:
Index | Columns |
---|---|
actor_org_unit_billing_address_idx | billing_address |
actor_org_unit_holds_address_idx | holds_address |
actor_org_unit_ill_address_idx | ill_address |
actor_org_unit_mailing_address_idx | mailing_address |
actor_org_unit_ou_type_idx | ou_type |
actor_org_unit_parent_ou_idx | parent_ou |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | serial | PRIMARY KEY | |
actor.org_unit.id | org_unit | integer | NOT NULL |
close_start | timestamp with time zone | NOT NULL | |
close_end | timestamp with time zone | NOT NULL | |
reason | text |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigserial | PRIMARY KEY | |
from_org | integer | ||
to_org | integer | ||
prox | integer |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
/* * Copyright (C) 2005 Georgia Public Library Service * Mike Rylander <mrylander@gmail.com> * * Org Unit settings * * This table contains any arbitrary settings that a client * program would like to save for an org unit. * * **** * * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public License * as published by the Free Software Foundation; either version 2 * of the License, or (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. */
F-Key | Name | Type | Description |
---|---|---|---|
id | bigserial | PRIMARY KEY | |
actor.org_unit.id | org_unit | integer | UNIQUE#1 NOT NULL |
name | text | UNIQUE#1 NOT NULL | |
value | text | NOT NULL |
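As the comment above says, this table stores arbitrary named settings per org unit, and (org_unit, name) is unique. A sketch of saving a setting under that constraint (the setting name and value are placeholders):

-- Illustrative only: the setting name and value are placeholders.
-- (org_unit, name) is unique, so a second INSERT for the same pair would fail;
-- an existing row would be changed with UPDATE instead.
INSERT INTO actor.org_unit_setting (org_unit, name, value)
VALUES (4, 'circ.example_interval', '14');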
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | serial | PRIMARY KEY | |
name | text | NOT NULL | |
opac_label | text | NOT NULL | |
depth | integer | NOT NULL | |
actor.org_unit_type.id | parent | integer | |
can_have_vols | boolean | NOT NULL DEFAULT true | |
can_have_users | boolean | NOT NULL DEFAULT true |
Tables referencing this one via Foreign Key Constraints:
Index | Columns |
---|---|
actor_org_unit_type_parent_idx | parent |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
/* * Copyright (C) 2005 Georgia Public Library Service * Mike Rylander <mrylander@gmail.com> * * User Statistical Categories * * Local data collected about Users is placed into a Statistical * Category. Here's where those categories are defined. * * **** * * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public License * as published by the Free Software Foundation; either version 2 * of the License, or (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. */
F-Key | Name | Type | Description |
---|---|---|---|
id | serial | PRIMARY KEY | |
actor.org_unit.id | owner | integer | UNIQUE#1 NOT NULL |
name | text | UNIQUE#1 NOT NULL | |
opac_visible | boolean | NOT NULL DEFAULT false |
Tables referencing this one via Foreign Key Constraints:
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
/* * Copyright (C) 2005 Georgia Public Library Service * Mike Rylander <mrylander@gmail.com> * * User Statistical Category Entries * * Local data collected about Users is placed into a Statistical * Category. Each library can create entries into any of its own * stat_cats, its ancestors' stat_cats, or its descendants' stat_cats. * * * **** * * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public License * as published by the Free Software Foundation; either version 2 * of the License, or (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. */
F-Key | Name | Type | Description |
---|---|---|---|
id | serial | PRIMARY KEY | |
actor.stat_cat.id | stat_cat | integer | UNIQUE#1 NOT NULL |
actor.org_unit.id | owner | integer | UNIQUE#1 NOT NULL |
value | text | UNIQUE#1 NOT NULL |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
/* * Copyright (C) 2005 Georgia Public Library Service * Mike Rylander <mrylander@gmail.com> * * Statistical Category Entry to User map * * Records the stat_cat entries for each user. * * * **** * * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public License * as published by the Free Software Foundation; either version 2 * of the License, or (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. */
F-Key | Name | Type | Description |
---|---|---|---|
id | bigserial | PRIMARY KEY | |
stat_cat_entry | text | NOT NULL | |
actor.stat_cat.id | stat_cat | integer | UNIQUE#1 NOT NULL |
actor.usr.id | target_usr | integer | UNIQUE#1 NOT NULL |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
/* * Copyright (C) 2005 Georgia Public Library Service * Mike Rylander <mrylander@gmail.com> * * User objects * * This table contains the core User objects that describe both * staff members and patrons. The difference between the two * types of users is based on the user's permissions. * * **** * * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public License * as published by the Free Software Foundation; either version 2 * of the License, or (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. */
F-Key | Name | Type | Description |
---|---|---|---|
id | serial | PRIMARY KEY | |
card | integer | UNIQUE | |
profile | integer | NOT NULL | |
usrname | text | UNIQUE NOT NULL | |
email | text | |
passwd | text | NOT NULL | |
standing | integer | NOT NULL DEFAULT 1 | |
config.identification_type.id | ident_type | integer | NOT NULL |
ident_value | text | ||
config.identification_type.id | ident_type2 | integer | |
ident_value2 | text | ||
config.net_access_level.id | net_access_level | integer | NOT NULL DEFAULT 1 |
photo_url | text | ||
prefix | text | ||
first_given_name | text | NOT NULL | |
second_given_name | text | ||
family_name | text | NOT NULL | |
suffix | text | ||
day_phone | text | ||
evening_phone | text | ||
other_phone | text | ||
actor.usr_address.id | mailing_address | integer | |
actor.usr_address.id | billing_address | integer | |
actor.org_unit.id | home_ou | integer | NOT NULL |
dob | timestamp with time zone | ||
active | boolean | NOT NULL DEFAULT true | |
master_account | boolean | NOT NULL DEFAULT false | |
super_user | boolean | NOT NULL DEFAULT false | |
barred | boolean | NOT NULL DEFAULT false | |
deleted | boolean | NOT NULL DEFAULT false | |
usrgroup | serial | NOT NULL | |
claims_returned_count | integer | NOT NULL | |
credit_forward_balance | numeric(6,2) | NOT NULL DEFAULT 0.00 | |
last_xact_id | text | NOT NULL DEFAULT 'none'::text | |
alert_message | text | ||
create_date | timestamp with time zone | NOT NULL DEFAULT now() | |
expire_date | timestamp with time zone | NOT NULL DEFAULT (now() + '3 years'::interval) |
Tables referencing this one via Foreign Key Constraints:
Index | Columns |
---|---|
actor_usr_billing_address_idx | billing_address |
actor_usr_day_phone_idx | lower(day_phone) |
actor_usr_email_idx | lower(email) |
actor_usr_evening_phone_idx | lower(evening_phone) |
actor_usr_family_name_idx | lower(family_name) |
actor_usr_first_given_name_idx | lower(first_given_name) |
actor_usr_home_ou_idx | home_ou |
actor_usr_ident_value2_idx | lower(ident_value2) |
actor_usr_ident_value_idx | lower(ident_value) |
actor_usr_mailing_address_idx | mailing_address |
actor_usr_other_phone_idx | lower(other_phone) |
actor_usr_second_given_name_idx | lower(second_given_name) |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | serial | PRIMARY KEY | |
valid | boolean | NOT NULL DEFAULT true | |
within_city_limits | boolean | NOT NULL DEFAULT true | |
address_type | text | NOT NULL DEFAULT 'MAILING'::text | |
actor.usr.id | usr | integer | NOT NULL |
street1 | text | NOT NULL | |
street2 | text | ||
city | text | NOT NULL | |
county | text | ||
state | text | NOT NULL | |
country | text | NOT NULL | |
post_code | text | NOT NULL |
Tables referencing this one via Foreign Key Constraints:
Index | Columns |
---|---|
actor_usr_addr_city_idx | lower(city) |
actor_usr_addr_county_idx | lower(county) |
actor_usr_addr_post_code_idx | lower(post_code) |
actor_usr_addr_state_idx | lower(state) |
actor_usr_addr_street1_idx | lower(street1) |
actor_usr_addr_street2_idx | lower(street2) |
actor_usr_addr_usr_idx | usr |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigserial | PRIMARY KEY | |
actor.usr.id | usr | bigint | NOT NULL |
actor.usr.id | creator | bigint | NOT NULL |
create_date | timestamp with time zone | DEFAULT now() | |
pub | boolean | NOT NULL DEFAULT false | |
title | text | NOT NULL | |
value | text | NOT NULL |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | serial | PRIMARY KEY | |
actor.org_unit.id | org_unit | integer | UNIQUE#1 NOT NULL |
actor.usr.id | usr | integer | UNIQUE#1 NOT NULL |
actor.usr.id | staff | integer | NOT NULL |
opt_in_ts | timestamp with time zone | NOT NULL DEFAULT now() | |
actor.workstation.id | opt_in_ws | integer | NOT NULL |
/* * Copyright (C) 2005 Georgia Public Library Service * Mike Rylander <mrylander@gmail.com> * * User settings * * This table contains any arbitrary settings that a client * program would like to save for a user. * * **** * * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public License * as published by the Free Software Foundation; either version 2 * of the License, or (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. */
F-Key | Name | Type | Description |
---|---|---|---|
id | bigserial | PRIMARY KEY | |
actor.usr.id | usr | integer | UNIQUE#1 NOT NULL |
name | text | UNIQUE#1 NOT NULL | |
value | text | NOT NULL |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
/* * Copyright (C) 2005 Georgia Public Library Service * Mike Rylander <mrylander@gmail.com> * * User standing penalties * * **** * * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public License * as published by the Free Software Foundation; either version 2 * of the License, or (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. */
F-Key | Name | Type | Description |
---|---|---|---|
id | serial | PRIMARY KEY | |
actor.usr.id | usr | integer | NOT NULL |
penalty_type | text | NOT NULL |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | serial | PRIMARY KEY | |
name | text | UNIQUE NOT NULL | |
actor.org_unit.id | owning_lib | integer | NOT NULL |
Tables referencing this one via Foreign Key Constraints:
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
BEGIN NEW.passwd = MD5( NEW.passwd ); RETURN NEW; END;
BEGIN IF NEW.passwd <> OLD.passwd THEN NEW.passwd = MD5( NEW.passwd ); END IF; RETURN NEW; END;
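The two trigger-function bodies above hash actor.usr.passwd with MD5 on insert, and again on update whenever the value changes, so the column never stores clear text. A login check therefore has to compare against the hash, for example (username and password are placeholders):

-- Illustrative only: the username and password values are placeholders.
SELECT id
  FROM actor.usr
 WHERE usrname = 'patron1'
   AND passwd = MD5('opensesame');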
SELECT a.* FROM actor.org_unit a WHERE id = ( SELECT FIRST(x.id) FROM actor.org_unit_ancestors($1) x JOIN actor.org_unit_type y ON x.ou_type = y.id AND y.depth = $2);
SELECT a.* FROM connectby('actor.org_unit'::text,'parent_ou'::text,'id'::text,'name'::text,$1::text,100,'.'::text) AS t(keyid text, parent_keyid text, level int, branch text,pos int) JOIN actor.org_unit a ON a.id::text = t.keyid::text ORDER BY CASE WHEN a.parent_ou IS NULL THEN 0 ELSE 1 END, a.name;
SELECT * FROM actor.org_unit_ancestors($1) UNION SELECT * FROM actor.org_unit_ancestors($2);
SELECT * FROM actor.org_unit_ancestors($1) INTERSECT SELECT * FROM actor.org_unit_ancestors($2);
SELECT a.* FROM connectby('actor.org_unit'::text,'id'::text,'parent_ou'::text,'name'::text,$1::text,100,'.'::text) AS t(keyid text, parent_keyid text, level int, branch text,pos int) JOIN actor.org_unit a ON a.id::text = t.keyid::text ORDER BY CASE WHEN a.parent_ou IS NULL THEN 0 ELSE 1 END, a.name;
SELECT a.* FROM connectby('actor.org_unit'::text,'id'::text,'parent_ou'::text,'name'::text, (SELECT x.id FROM actor.org_unit_ancestors($1) x JOIN actor.org_unit_type y ON x.ou_type = y.id WHERE y.depth = $2)::text ,100,'.'::text) AS t(keyid text, parent_keyid text, level int, branch text,pos int) JOIN actor.org_unit a ON a.id::text = t.keyid::text ORDER BY CASE WHEN a.parent_ou IS NULL THEN 0 ELSE 1 END, a.name;
SELECT * FROM actor.org_unit_ancestors($1) UNION SELECT * FROM actor.org_unit_descendants($1);
SELECT * FROM actor.org_unit_full_path((actor.org_unit_ancestor_at_depth($1, $2)).id)
SELECT COUNT(id)::INT FROM ( SELECT id FROM actor.org_unit_combined_ancestors($1, $2) EXCEPT SELECT id FROM actor.org_unit_common_ancestors($1, $2) ) z;
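The SQL function bodies above provide the org-unit tree plumbing used throughout the permit tests: ancestors and descendants via connectby(), their union (the full path), combined and common ancestor sets, and a proximity measure computed as the count of combined-minus-common ancestors. For example, to list a branch together with everything above it in the tree (the org unit ID is a placeholder):

-- Illustrative only: org unit 7 is a placeholder ID.
SELECT id, shortname, name
  FROM actor.org_unit_ancestors(7);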
F-Key | Name | Type | Description |
---|---|---|---|
id | bigserial | PRIMARY KEY | |
actor.usr.id | creator | bigint | NOT NULL |
create_date | timestamp with time zone | DEFAULT now() | |
actor.usr.id | editor | bigint | NOT NULL |
edit_date | timestamp with time zone | DEFAULT now() | |
biblio.record_entry.id | record | bigint | NOT NULL |
actor.org_unit.id | owning_lib | integer | NOT NULL |
label | text | NOT NULL | |
deleted | boolean | NOT NULL DEFAULT false |
Tables referencing this one via Foreign Key Constraints:
Index | Columns |
---|---|
asset_call_number_creator_idx | creator |
asset_call_number_dewey_idx | call_number_dewey(label) |
asset_call_number_editor_idx | editor |
asset_call_number_record_idx | record |
asset_call_number_upper_label_id_owning_lib_idx | upper(label), id, owning_lib |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigserial | PRIMARY KEY | |
asset.call_number.id | call_number | bigint | NOT NULL |
actor.usr.id | creator | bigint | NOT NULL |
create_date | timestamp with time zone | DEFAULT now() | |
pub | boolean | NOT NULL DEFAULT false | |
title | text | NOT NULL | |
value | text | NOT NULL |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigserial | PRIMARY KEY | |
actor.org_unit.id | circ_lib | integer | NOT NULL |
actor.usr.id | creator | bigint | NOT NULL |
asset.call_number.id | call_number | bigint | NOT NULL |
actor.usr.id | editor | bigint | NOT NULL |
create_date | timestamp with time zone | DEFAULT now() | |
edit_date | timestamp with time zone | DEFAULT now() | |
copy_number | integer | ||
config.copy_status.id | status | integer | NOT NULL |
asset.copy_location.id | location | integer | NOT NULL DEFAULT 1 |
loan_duration | integer | NOT NULL | |
fine_level | integer | NOT NULL | |
age_protect | integer | ||
circulate | boolean | NOT NULL DEFAULT true | |
deposit | boolean | NOT NULL DEFAULT false | |
ref | boolean | NOT NULL DEFAULT false | |
holdable | boolean | NOT NULL DEFAULT true | |
deposit_amount | numeric(6,2) | NOT NULL DEFAULT 0.00 | |
price | numeric(8,2) | ||
barcode | text | NOT NULL | |
config.circ_modifier.code | circ_modifier | text | |
circ_as_type | text | ||
dummy_title | text | ||
dummy_author | text | ||
alert_message | text | ||
opac_visible | boolean | NOT NULL DEFAULT true | |
deleted | boolean | NOT NULL DEFAULT false |
Name | Constraint |
---|---|
copy_fine_level_check | CHECK ((((fine_level = 1) OR (fine_level = 2)) OR (fine_level = 3))) |
copy_loan_duration_check | CHECK ((((loan_duration = 1) OR (loan_duration = 2)) OR (loan_duration = 3))) |
Tables referencing this one via Foreign Key Constraints:
Index | Columns |
---|---|
copy_status_idx | status |
cp_avail_cn_idx | call_number |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | serial | PRIMARY KEY | |
name | text | UNIQUE#1 NOT NULL | |
actor.org_unit.id | owning_lib | integer | UNIQUE#1 NOT NULL |
holdable | boolean | NOT NULL DEFAULT true | |
opac_visible | boolean | NOT NULL DEFAULT true | |
circulate | boolean | NOT NULL DEFAULT true | |
hold_verify | boolean | NOT NULL DEFAULT false |
Tables referencing this one via Foreign Key Constraints:
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigserial | PRIMARY KEY | |
asset.copy.id | owning_copy | bigint | NOT NULL |
actor.usr.id | creator | bigint | NOT NULL |
create_date | timestamp with time zone | DEFAULT now() | |
pub | boolean | NOT NULL DEFAULT false | |
title | text | NOT NULL | |
value | text | NOT NULL |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigserial | PRIMARY KEY | |
asset.copy_transparency.id | tansparency | integer | NOT NULL |
asset.copy.id | target_copy | integer | UNIQUE NOT NULL |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | serial | PRIMARY KEY | |
deposit_amount | numeric(6,2) | ||
actor.org_unit.id | owner | integer | UNIQUE#1 NOT NULL |
actor.org_unit.id | circ_lib | integer | |
loan_duration | integer | ||
fine_level | integer | ||
holdable | boolean | ||
circulate | boolean | ||
deposit | boolean | ||
ref | boolean | ||
opac_visible | boolean | ||
circ_modifier | text | ||
circ_as_type | text | ||
name | text | UNIQUE#1 NOT NULL |
Name | Constraint |
---|---|
copy_transparency_fine_level_check | CHECK ((((fine_level = 1) OR (fine_level = 2)) OR (fine_level = 3))) |
copy_transparency_loan_duration_check | CHECK ((((loan_duration = 1) OR (loan_duration = 2)) OR (loan_duration = 3))) |
Tables referencing this one via Foreign Key Constraints:
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | serial | PRIMARY KEY | |
actor.org_unit.id | owner | integer | UNIQUE#1 NOT NULL |
opac_visible | boolean | NOT NULL DEFAULT false | |
name | text | UNIQUE#1 NOT NULL |
Tables referencing this one via Foreign Key Constraints:
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | serial | PRIMARY KEY | |
asset.stat_cat.id | stat_cat | integer | UNIQUE#1 NOT NULL |
actor.org_unit.id | owner | integer | UNIQUE#1 NOT NULL |
value | text | UNIQUE#1 NOT NULL |
Tables referencing this one via Foreign Key Constraints:
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigserial | PRIMARY KEY | |
asset.stat_cat.id | stat_cat | integer | UNIQUE#1 NOT NULL |
asset.stat_cat_entry.id | stat_cat_entry | integer | NOT NULL |
asset.copy.id | owning_copy | bigint | UNIQUE#1 NOT NULL |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigserial | PRIMARY KEY | |
stat_cat | integer | UNIQUE#1 NOT NULL | |
stat_cat_entry | integer | NOT NULL | |
owning_transparency | integer | UNIQUE#1 NOT NULL |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
DECLARE
    moved_cns INT := 0;
    source_cn asset.call_number%ROWTYPE;
    target_cn asset.call_number%ROWTYPE;
BEGIN
    FOR source_cn IN SELECT * FROM asset.call_number WHERE record = source_record LOOP
        SELECT INTO target_cn *
          FROM asset.call_number
         WHERE label = source_cn.label
           AND owning_lib = source_cn.owning_lib
           AND record = target_record;
        IF FOUND THEN
            UPDATE asset.copy SET call_number = target_cn.id WHERE call_number = source_cn.id;
            DELETE FROM asset.call_number WHERE id = target_cn.id;
        ELSE
            UPDATE asset.call_number SET record = target_record WHERE id = source_cn.id;
        END IF;
        moved_cns := moved_cns + 1;
    END LOOP;
    RETURN moved_cns;
END;
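The body above walks every call number on a source bibliographic record and either re-points its copies at a matching call number on the target record or moves the call number itself, returning the number of call numbers processed. A hedged invocation sketch, assuming the body is installed as a two-argument function along the lines of asset.merge_record_assets(target_record BIGINT, source_record BIGINT); the dump omits the actual function name and argument order:

-- hypothetical signature; record IDs are illustrative
SELECT asset.merge_record_assets(12345, 67890);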
F-Key | Name | Type | Description |
---|---|---|---|
audit_time | timestamp with time zone | NOT NULL | |
audit_action | text | NOT NULL | |
id | integer | NOT NULL | |
parent_ou | integer | ||
ou_type | integer | NOT NULL | |
ill_address | integer | ||
holds_address | integer | ||
mailing_address | integer | ||
billing_address | integer | ||
shortname | text | NOT NULL | |
name | text | NOT NULL | |
email | text ||
phone | text | ||
opac_visible | boolean | NOT NULL DEFAULT true |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
audit_time | timestamp with time zone | ||
audit_action | text | ||
id | integer | ||
parent_ou | integer | ||
ou_type | integer | ||
ill_address | integer | ||
holds_address | integer | ||
mailing_address | integer | ||
billing_address | integer | ||
shortname | text | ||
name | text | ||
email | text ||
phone | text |
SELECT now() AS audit_time, 'C' AS audit_action, org_unit.id, org_unit.parent_ou, org_unit.ou_type, org_unit.ill_address, org_unit.holds_address, org_unit.mailing_address, org_unit.billing_address, org_unit.shortname, org_unit.name, org_unit.email, org_unit.phone
FROM actor.org_unit
UNION ALL
SELECT actor_org_unit_history.audit_time, actor_org_unit_history.audit_action, actor_org_unit_history.id, actor_org_unit_history.parent_ou, actor_org_unit_history.ou_type, actor_org_unit_history.ill_address, actor_org_unit_history.holds_address, actor_org_unit_history.mailing_address, actor_org_unit_history.billing_address, actor_org_unit_history.shortname, actor_org_unit_history.name, actor_org_unit_history.email, actor_org_unit_history.phone
FROM auditor.actor_org_unit_history;
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
audit_time | timestamp with time zone | NOT NULL | |
audit_action | text | NOT NULL | |
id | integer | NOT NULL | |
valid | boolean | NOT NULL | |
within_city_limits | boolean | NOT NULL | |
address_type | text | NOT NULL | |
usr | integer | NOT NULL | |
street1 | text | NOT NULL | |
street2 | text | ||
city | text | NOT NULL | |
county | text | ||
state | text | NOT NULL | |
country | text | NOT NULL | |
post_code | text | NOT NULL |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
audit_time | timestamp with time zone | ||
audit_action | text | ||
id | integer | ||
valid | boolean | ||
within_city_limits | boolean | ||
address_type | text | ||
usr | integer | ||
street1 | text | ||
street2 | text | ||
city | text | ||
county | text | ||
state | text | ||
country | text | ||
post_code | text |
SELECT now() AS audit_time, 'C' AS audit_action, usr_address.id, usr_address."valid", usr_address.within_city_limits, usr_address.address_type, usr_address.usr, usr_address.street1, usr_address.street2, usr_address.city, usr_address.county, usr_address.state, usr_address.country, usr_address.post_code
FROM actor.usr_address
UNION ALL
SELECT actor_usr_address_history.audit_time, actor_usr_address_history.audit_action, actor_usr_address_history.id, actor_usr_address_history."valid", actor_usr_address_history.within_city_limits, actor_usr_address_history.address_type, actor_usr_address_history.usr, actor_usr_address_history.street1, actor_usr_address_history.street2, actor_usr_address_history.city, actor_usr_address_history.county, actor_usr_address_history.state, actor_usr_address_history.country, actor_usr_address_history.post_code
FROM auditor.actor_usr_address_history;
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
audit_time | timestamp with time zone | NOT NULL | |
audit_action | text | NOT NULL | |
id | integer | NOT NULL | |
card | integer | ||
profile | integer | NOT NULL | |
usrname | text | NOT NULL | |
email | text ||
passwd | text | NOT NULL | |
standing | integer | NOT NULL | |
ident_type | integer | NOT NULL | |
ident_value | text | ||
ident_type2 | integer | ||
ident_value2 | text | ||
net_access_level | integer | NOT NULL | |
photo_url | text | ||
prefix | text | ||
first_given_name | text | NOT NULL | |
second_given_name | text | ||
family_name | text | NOT NULL | |
suffix | text | ||
day_phone | text | ||
evening_phone | text | ||
other_phone | text | ||
mailing_address | integer | ||
billing_address | integer | ||
home_ou | integer | NOT NULL | |
dob | timestamp with time zone | ||
active | boolean | NOT NULL | |
master_account | boolean | NOT NULL | |
super_user | boolean | NOT NULL | |
barred | boolean | NOT NULL | |
deleted | boolean | NOT NULL | |
usrgroup | integer | NOT NULL | |
claims_returned_count | integer | NOT NULL | |
credit_forward_balance | numeric(6,2) | NOT NULL | |
last_xact_id | text | NOT NULL | |
alert_message | text | ||
create_date | timestamp with time zone | NOT NULL | |
expire_date | timestamp with time zone | NOT NULL |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
audit_time | timestamp with time zone | ||
audit_action | text | ||
id | integer | ||
card | integer | ||
profile | integer | ||
usrname | text | ||
email | text ||
passwd | text | ||
standing | integer | ||
ident_type | integer | ||
ident_value | text | ||
ident_type2 | integer | ||
ident_value2 | text | ||
net_access_level | integer | ||
photo_url | text | ||
prefix | text | ||
first_given_name | text | ||
second_given_name | text | ||
family_name | text | ||
suffix | text | ||
day_phone | text | ||
evening_phone | text | ||
other_phone | text | ||
mailing_address | integer | ||
billing_address | integer | ||
home_ou | integer | ||
dob | timestamp with time zone | ||
active | boolean | ||
master_account | boolean | ||
super_user | boolean | ||
barred | boolean | ||
deleted | boolean | ||
usrgroup | integer | ||
claims_returned_count | integer | ||
credit_forward_balance | numeric | ||
last_xact_id | text | ||
alert_message | text | ||
create_date | timestamp with time zone | ||
expire_date | timestamp with time zone |
SELECT now() AS audit_time, 'C' AS audit_action, usr.id, usr.card, usr.profile, usr.usrname, usr.email, usr.passwd, usr.standing, usr.ident_type, usr.ident_value, usr.ident_type2, usr.ident_value2, usr.net_access_level, usr.photo_url, usr.prefix, usr.first_given_name, usr.second_given_name, usr.family_name, usr.suffix, usr.day_phone, usr.evening_phone, usr.other_phone, usr.mailing_address, usr.billing_address, usr.home_ou, usr.dob, usr.active, usr.master_account, usr.super_user, usr.barred, usr.deleted, usr.usrgroup, usr.claims_returned_count, usr.credit_forward_balance, usr.last_xact_id, usr.alert_message, usr.create_date, usr.expire_date
FROM actor.usr
UNION ALL
SELECT actor_usr_history.audit_time, actor_usr_history.audit_action, actor_usr_history.id, actor_usr_history.card, actor_usr_history.profile, actor_usr_history.usrname, actor_usr_history.email, actor_usr_history.passwd, actor_usr_history.standing, actor_usr_history.ident_type, actor_usr_history.ident_value, actor_usr_history.ident_type2, actor_usr_history.ident_value2, actor_usr_history.net_access_level, actor_usr_history.photo_url, actor_usr_history.prefix, actor_usr_history.first_given_name, actor_usr_history.second_given_name, actor_usr_history.family_name, actor_usr_history.suffix, actor_usr_history.day_phone, actor_usr_history.evening_phone, actor_usr_history.other_phone, actor_usr_history.mailing_address, actor_usr_history.billing_address, actor_usr_history.home_ou, actor_usr_history.dob, actor_usr_history.active, actor_usr_history.master_account, actor_usr_history.super_user, actor_usr_history.barred, actor_usr_history.deleted, actor_usr_history.usrgroup, actor_usr_history.claims_returned_count, actor_usr_history.credit_forward_balance, actor_usr_history.last_xact_id, actor_usr_history.alert_message, actor_usr_history.create_date, actor_usr_history.expire_date
FROM auditor.actor_usr_history;
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
audit_time | timestamp with time zone | NOT NULL | |
audit_action | text | NOT NULL | |
id | bigint | NOT NULL | |
creator | bigint | NOT NULL | |
create_date | timestamp with time zone | ||
editor | bigint | NOT NULL | |
edit_date | timestamp with time zone | ||
record | bigint | NOT NULL | |
owning_lib | integer | NOT NULL | |
label | text | NOT NULL | |
deleted | boolean | NOT NULL |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
audit_time | timestamp with time zone | ||
audit_action | text | ||
id | bigint | ||
creator | bigint | ||
create_date | timestamp with time zone | ||
editor | bigint | ||
edit_date | timestamp with time zone | ||
record | bigint | ||
owning_lib | integer | ||
label | text | ||
deleted | boolean |
SELECT now() AS audit_time, 'C' AS audit_action, call_number.id, call_number.creator, call_number.create_date, call_number.editor, call_number.edit_date, call_number.record, call_number.owning_lib, call_number.label, call_number.deleted
FROM asset.call_number
UNION ALL
SELECT asset_call_number_history.audit_time, asset_call_number_history.audit_action, asset_call_number_history.id, asset_call_number_history.creator, asset_call_number_history.create_date, asset_call_number_history.editor, asset_call_number_history.edit_date, asset_call_number_history.record, asset_call_number_history.owning_lib, asset_call_number_history.label, asset_call_number_history.deleted
FROM auditor.asset_call_number_history;
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
audit_time | timestamp with time zone | NOT NULL | |
audit_action | text | NOT NULL | |
id | bigint | NOT NULL | |
circ_lib | integer | NOT NULL | |
creator | bigint | NOT NULL | |
call_number | bigint | NOT NULL | |
editor | bigint | NOT NULL | |
create_date | timestamp with time zone | ||
edit_date | timestamp with time zone | ||
copy_number | integer | ||
status | integer | NOT NULL | |
location | integer | NOT NULL | |
loan_duration | integer | NOT NULL | |
fine_level | integer | NOT NULL | |
age_protect | integer | ||
circulate | boolean | NOT NULL | |
deposit | boolean | NOT NULL | |
ref | boolean | NOT NULL | |
holdable | boolean | NOT NULL | |
deposit_amount | numeric(6,2) | NOT NULL | |
price | numeric(8,2) | ||
barcode | text | NOT NULL | |
circ_modifier | text | ||
circ_as_type | text | ||
dummy_title | text | ||
dummy_author | text | ||
alert_message | text | ||
opac_visible | boolean | NOT NULL | |
deleted | boolean | NOT NULL |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
audit_time | timestamp with time zone | ||
audit_action | text | ||
id | bigint | ||
circ_lib | integer | ||
creator | bigint | ||
call_number | bigint | ||
editor | bigint | ||
create_date | timestamp with time zone | ||
edit_date | timestamp with time zone | ||
copy_number | integer | ||
status | integer | ||
location | integer | ||
loan_duration | integer | ||
fine_level | integer | ||
age_protect | integer | ||
circulate | boolean | ||
deposit | boolean | ||
ref | boolean | ||
holdable | boolean | ||
deposit_amount | numeric | ||
price | numeric | ||
barcode | text | ||
circ_modifier | text | ||
circ_as_type | text | ||
dummy_title | text | ||
dummy_author | text | ||
alert_message | text | ||
opac_visible | boolean | ||
deleted | boolean |
SELECT now() AS audit_time, 'C' AS audit_action, "copy".id, "copy".circ_lib, "copy".creator, "copy".call_number, "copy".editor, "copy".create_date, "copy".edit_date, "copy".copy_number, "copy".status, "copy"."location", "copy".loan_duration, "copy".fine_level, "copy".age_protect, "copy".circulate, "copy".deposit, "copy".ref, "copy".holdable, "copy".deposit_amount, "copy".price, "copy".barcode, "copy".circ_modifier, "copy".circ_as_type, "copy".dummy_title, "copy".dummy_author, "copy".alert_message, "copy".opac_visible, "copy".deleted
FROM asset."copy"
UNION ALL
SELECT asset_copy_history.audit_time, asset_copy_history.audit_action, asset_copy_history.id, asset_copy_history.circ_lib, asset_copy_history.creator, asset_copy_history.call_number, asset_copy_history.editor, asset_copy_history.create_date, asset_copy_history.edit_date, asset_copy_history.copy_number, asset_copy_history.status, asset_copy_history."location", asset_copy_history.loan_duration, asset_copy_history.fine_level, asset_copy_history.age_protect, asset_copy_history.circulate, asset_copy_history.deposit, asset_copy_history.ref, asset_copy_history.holdable, asset_copy_history.deposit_amount, asset_copy_history.price, asset_copy_history.barcode, asset_copy_history.circ_modifier, asset_copy_history.circ_as_type, asset_copy_history.dummy_title, asset_copy_history.dummy_author, asset_copy_history.alert_message, asset_copy_history.opac_visible, asset_copy_history.deleted
FROM auditor.asset_copy_history;
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
audit_time | timestamp with time zone | NOT NULL | |
audit_action | text | NOT NULL | |
id | bigint | NOT NULL | |
creator | integer | NOT NULL | |
editor | integer | NOT NULL | |
source | integer | ||
quality | integer | ||
create_date | timestamp with time zone | NOT NULL | |
edit_date | timestamp with time zone | NOT NULL | |
active | boolean | NOT NULL | |
deleted | boolean | NOT NULL | |
fingerprint | text | ||
tcn_source | text | NOT NULL | |
tcn_value | text | NOT NULL | |
marc | text | NOT NULL | |
last_xact_id | text | NOT NULL |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
audit_time | timestamp with time zone | ||
audit_action | text | ||
id | bigint | ||
creator | integer | ||
editor | integer | ||
source | integer | ||
quality | integer | ||
create_date | timestamp with time zone | ||
edit_date | timestamp with time zone | ||
active | boolean | ||
deleted | boolean | ||
fingerprint | text | ||
tcn_source | text | ||
tcn_value | text | ||
marc | text | ||
last_xact_id | text |
SELECT now() AS audit_time, 'C' AS audit_action, record_entry.id, record_entry.creator, record_entry.editor, record_entry.source, record_entry.quality, record_entry.create_date, record_entry.edit_date, record_entry.active, record_entry.deleted, record_entry.fingerprint, record_entry.tcn_source, record_entry.tcn_value, record_entry.marc, record_entry.last_xact_id
FROM biblio.record_entry
UNION ALL
SELECT biblio_record_entry_history.audit_time, biblio_record_entry_history.audit_action, biblio_record_entry_history.id, biblio_record_entry_history.creator, biblio_record_entry_history.editor, biblio_record_entry_history.source, biblio_record_entry_history.quality, biblio_record_entry_history.create_date, biblio_record_entry_history.edit_date, biblio_record_entry_history.active, biblio_record_entry_history.deleted, biblio_record_entry_history.fingerprint, biblio_record_entry_history.tcn_source, biblio_record_entry_history.tcn_value, biblio_record_entry_history.marc, biblio_record_entry_history.last_xact_id
FROM auditor.biblio_record_entry_history;
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
BEGIN INSERT INTO auditor.actor_org_unit_history SELECT now(), SUBSTR(TG_OP,1,1), OLD.*; RETURN NULL; END;
BEGIN INSERT INTO auditor.actor_usr_address_history SELECT now(), SUBSTR(TG_OP,1,1), OLD.*; RETURN NULL; END;
BEGIN INSERT INTO auditor.actor_usr_history SELECT now(), SUBSTR(TG_OP,1,1), OLD.*; RETURN NULL; END;
BEGIN INSERT INTO auditor.asset_call_number_history SELECT now(), SUBSTR(TG_OP,1,1), OLD.*; RETURN NULL; END;
BEGIN INSERT INTO auditor.asset_copy_history SELECT now(), SUBSTR(TG_OP,1,1), OLD.*; RETURN NULL; END;
BEGIN INSERT INTO auditor.biblio_record_entry_history SELECT now(), SUBSTR(TG_OP,1,1), OLD.*; RETURN NULL; END;
BEGIN
    EXECUTE $$
        CREATE TABLE auditor.$$ || sch || $$_$$ || tbl || $$_history (
            audit_time   TIMESTAMP WITH TIME ZONE NOT NULL,
            audit_action TEXT NOT NULL,
            LIKE $$ || sch || $$.$$ || tbl || $$
        );
    $$;
    EXECUTE $$
        CREATE FUNCTION auditor.audit_$$ || sch || $$_$$ || tbl || $$_func ()
        RETURNS TRIGGER AS $func$
        BEGIN
            INSERT INTO auditor.$$ || sch || $$_$$ || tbl || $$_history
                SELECT now(), SUBSTR(TG_OP,1,1), OLD.*;
            RETURN NULL;
        END;
        $func$ LANGUAGE 'plpgsql';
    $$;
    EXECUTE $$
        CREATE TRIGGER audit_$$ || sch || $$_$$ || tbl || $$_update_trigger
            AFTER UPDATE OR DELETE ON $$ || sch || $$.$$ || tbl || $$
            FOR EACH ROW EXECUTE PROCEDURE auditor.audit_$$ || sch || $$_$$ || tbl || $$_func ();
    $$;
    EXECUTE $$
        CREATE VIEW auditor.$$ || sch || $$_$$ || tbl || $$_lifecycle AS
            SELECT now() as audit_time, 'C' as audit_action, *
              FROM $$ || sch || $$.$$ || tbl || $$
            UNION ALL
            SELECT * FROM auditor.$$ || sch || $$_$$ || tbl || $$_history;
    $$;
    RETURN TRUE;
END;
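The block above is the generator behind the per-table audit objects shown earlier: for a given schema (sch) and table (tbl) it creates the *_history table, the audit trigger function, the AFTER UPDATE OR DELETE trigger, and the *_lifecycle view. A call sketch, assuming the body is installed under a name along the lines of auditor.create_auditor(sch TEXT, tbl TEXT); the function name is not shown in this dump, only the sch and tbl variables used in the body:

-- hypothetical function name; would create auditor.asset_copy_location_history and related objects
SELECT auditor.create_auditor('asset', 'copy_location');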
F-Key | Name | Type | Description |
---|---|---|---|
id | bigserial | PRIMARY KEY | |
record | bigint | NOT NULL | |
tag | character(3) | NOT NULL | |
ind1 | text | ||
ind2 | text | ||
subfield | text | ||
value | text | NOT NULL | |
index_vector | tsvector | NOT NULL |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigserial | PRIMARY KEY | |
record | bigint | ||
record_status | text | ||
char_encoding | text |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigserial | PRIMARY KEY | |
arn_source | text | NOT NULL DEFAULT 'AUTOGEN'::text | |
arn_value | text | NOT NULL | |
creator | integer | NOT NULL DEFAULT 1 | |
editor | integer | NOT NULL DEFAULT 1 | |
create_date | timestamp with time zone | NOT NULL DEFAULT now() | |
edit_date | timestamp with time zone | NOT NULL DEFAULT now() | |
active | boolean | NOT NULL DEFAULT true | |
deleted | boolean | NOT NULL DEFAULT false | |
source | integer | ||
marc | text | NOT NULL | |
last_xact_id | text | NOT NULL |
Tables referencing this one via Foreign Key Constraints:
Index | Columns |
---|---|
authority_record_entry_creator_idx | creator |
authority_record_entry_editor_idx | editor |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigserial | PRIMARY KEY | |
authority.record_entry.id | record | bigint | NOT NULL |
value | text | NOT NULL | |
creator | integer | NOT NULL DEFAULT 1 | |
editor | integer | NOT NULL DEFAULT 1 | |
create_date | timestamp with time zone | NOT NULL DEFAULT now() | |
edit_date | timestamp with time zone | NOT NULL DEFAULT now() |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
record | bigint | ||
main_id | bigint | ||
main_tag | character(3) | ||
main_value | text | ||
relationship | text | ||
use_restriction | text | ||
deprecation | text | ||
display_restriction | text | ||
link_id | bigint | ||
link_tag | character(3) | ||
link_value | text |
SELECT main.record , main.id AS main_id , main.tag AS main_tag , main.value AS main_value , substr (link.value , 1 , 1 ) AS relationship , substr (link.value , 2 , 1 ) AS use_restriction , substr (link.value , 3 , 1 ) AS deprecation , substr (link.value , 4 , 1 ) AS display_restriction , link_value.id AS link_id , link_value.tag AS link_tag , link_value.value AS link_value FROM ( (authority.full_rec main JOIN authority.full_rec link ON ( ( ( (link.record = main.record) AND ( ( (link.tag)::text = ( ( (main.tag)::integer + 400 ) )::text ) OR ( (link.tag)::text = ( ( (main.tag)::integer + 300 ) )::text ) ) ) AND (link.subfield = 'w'::text) ) ) ) JOIN authority.full_rec link_value ON ( ( ( (link_value.record = main.record) AND (link_value.tag = link.tag) ) AND (link_value.subfield = 'a'::text) ) ) ) WHERE ( ( ( ( ( ( ( ( ( ( ( (main.tag = '100'::bpchar) OR (main.tag = '110'::bpchar) ) OR (main.tag = '111'::bpchar) ) OR (main.tag = '130'::bpchar) ) OR (main.tag = '150'::bpchar) ) OR (main.tag = '151'::bpchar) ) OR (main.tag = '155'::bpchar) ) OR (main.tag = '180'::bpchar) ) OR (main.tag = '181'::bpchar) ) OR (main.tag = '182'::bpchar) ) OR (main.tag = '185'::bpchar) ) AND (main.subfield = 'a'::text) );
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigserial | PRIMARY KEY | |
actor.usr.id | creator | integer | NOT NULL DEFAULT 1 |
actor.usr.id | editor | integer | NOT NULL DEFAULT 1 |
source | integer | ||
quality | integer | ||
create_date | timestamp with time zone | NOT NULL DEFAULT now() | |
edit_date | timestamp with time zone | NOT NULL DEFAULT now() | |
active | boolean | NOT NULL DEFAULT true | |
deleted | boolean | NOT NULL DEFAULT false | |
fingerprint | text | ||
tcn_source | text | NOT NULL DEFAULT 'AUTOGEN'::text | |
tcn_value | text | NOT NULL DEFAULT biblio.next_autogen_tcn_value() | |
marc | text | NOT NULL | |
last_xact_id | text | NOT NULL |
Tables referencing this one via Foreign Key Constraints:
Index | Columns |
---|---|
bib_rec_create_date_idx | create_date |
bib_rec_edit_date_idx | edit_date |
biblio_record_entry_creator_idx | creator |
biblio_record_entry_editor_idx | editor |
biblio_record_entry_fp_idx | fingerprint |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigserial | PRIMARY KEY | |
biblio.record_entry.id | record | bigint | NOT NULL |
value | text | NOT NULL | |
actor.usr.id | creator | integer | NOT NULL DEFAULT 1 |
actor.usr.id | editor | integer | NOT NULL DEFAULT 1 |
pub | boolean | NOT NULL DEFAULT false | |
create_date | timestamp with time zone | NOT NULL DEFAULT now() | |
edit_date | timestamp with time zone | NOT NULL DEFAULT now() |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
BEGIN RETURN 'AUTOGENERATED_' || nextval('biblio.autogen_tcn_value_seq'::TEXT); END;
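Since biblio.record_entry.tcn_value defaults to this function (see the table above), a quick check of what it produces; the exact value depends on the current state of biblio.autogen_tcn_value_seq:

SELECT biblio.next_autogen_tcn_value();  -- e.g. 'AUTOGENERATED_1001'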
/*
* Copyright (C) 2005 Georgia Public Library Service
* Mike Rylander
F-Key | Name | Type | Description |
---|---|---|---|
code | text | PRIMARY KEY | |
value | text | NOT NULL | |
description | text |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
code | text | PRIMARY KEY | |
value | text | NOT NULL |
/* * Copyright (C) 2005 Georgia Public Library Service * Mike Rylander <mrylander@gmail.com> * * Valid sources of MARC records * * This table is used to set up the relative "quality" of each * MARC source, such as OCLC. * * **** * * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public License * as published by the Free Software Foundation; either version 2 * of the License, or (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. */
F-Key | Name | Type | Description |
---|---|---|---|
id | serial | PRIMARY KEY | |
quality | integer | ||
source | text | UNIQUE NOT NULL | |
transcendant | boolean | NOT NULL DEFAULT false |
Name | Constraint |
---|---|
bib_source_quality_check | CHECK (((quality >= 0) AND (quality <= 100))) |
Tables referencing this one via Foreign Key Constraints:
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
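As an illustrative sketch against config.bib_source (the source name and quality below are hypothetical), a new MARC source must carry a quality between 0 and 100 per bib_source_quality_check:

INSERT INTO config.bib_source (quality, source, transcendant)
    VALUES (90, 'Local Cataloging', false);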
F-Key | Name | Type | Description |
---|---|---|---|
id | serial | PRIMARY KEY | |
config.circ_matrix_matchpoint.id | matchpoint | integer | NOT NULL |
items_out | integer | NOT NULL | |
config.circ_modifier.code | circ_mod | text | NOT NULL |
F-Key | Name | Type | Description |
---|---|---|---|
id | serial | PRIMARY KEY | |
active | boolean | NOT NULL DEFAULT true | |
actor.org_unit.id | org_unit | integer | UNIQUE#1 NOT NULL |
permission.grp_tree.id | grp | integer | UNIQUE#1 NOT NULL |
config.circ_modifier.code | circ_modifier | text | UNIQUE#1 |
config.item_type_map.code | marc_type | text | UNIQUE#1 |
config.item_form_map.code | marc_form | text | UNIQUE#1 |
config.videorecording_format_map.code | marc_vr_format | text | UNIQUE#1 |
ref_flag | boolean | UNIQUE#1 | |
is_renewal | boolean | UNIQUE#1 | |
usr_age_lower_bound | interval | UNIQUE#1 | |
usr_age_upper_bound | interval | UNIQUE#1 |
Tables referencing this one via Foreign Key Constraints:
F-Key | Name | Type | Description |
---|---|---|---|
config.circ_matrix_matchpoint.id | matchpoint | integer | PRIMARY KEY |
config.rule_circ_duration.id | duration_rule | integer | NOT NULL |
config.rule_recuring_fine.id | recurring_fine_rule | integer | NOT NULL |
config.rule_max_fine.id | max_fine_rule | integer | NOT NULL |
F-Key | Name | Type | Description |
---|---|---|---|
config.circ_matrix_matchpoint.id | matchpoint | integer | PRIMARY KEY |
circulate | boolean | NOT NULL DEFAULT true | |
max_items_out | integer | ||
max_overdue | integer | ||
max_fines | numeric(8,2) | ||
org_depth | integer | ||
script_test | text |
F-Key | Name | Type | Description |
---|---|---|---|
code | text | PRIMARY KEY | |
name | text | UNIQUE NOT NULL | |
description | text | NOT NULL | |
sip2_media_type | text | NOT NULL | |
magnetic_media | boolean | NOT NULL DEFAULT true |
Tables referencing this one via Foreign Key Constraints:
/* * Copyright (C) 2005 Georgia Public Library Service * Mike Rylander <mrylander@gmail.com> * * Copy Statuses * * The available copy statuses, and whether a copy in that * status is available for hold request capture. 0 (zero) is * the only special number in this set, meaning that the item * is available for immediate checkout, and is counted as available * in the OPAC. * * Statuses with an ID below 100 are not removable, and have special * meaning in the code. Do not change them except to translate the * textual name. * * You may add and remove statuses above 100, and these can be used * to remove items from normal circulation without affecting the rest * of the copy's values or its location. * * **** * * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public License * as published by the Free Software Foundation; either version 2 * of the License, or (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. */
F-Key | Name | Type | Description |
---|---|---|---|
id | serial | PRIMARY KEY | |
name | text | UNIQUE NOT NULL | |
holdable | boolean | NOT NULL DEFAULT false | |
opac_visible | boolean | NOT NULL DEFAULT false |
Tables referencing this one via Foreign Key Constraints:
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
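Per the note above that status IDs below 100 are reserved, a locally defined status would be added above that range; the values here are hypothetical:

INSERT INTO config.copy_status (id, name, holdable, opac_visible)
    VALUES (101, 'Staff Review', false, false);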
F-Key | Name | Type | Description |
---|---|---|---|
id | serial | PRIMARY KEY | |
active | boolean | NOT NULL DEFAULT true | |
actor.org_unit.id | user_home_ou | integer | UNIQUE#1 |
actor.org_unit.id | request_ou | integer | UNIQUE#1 |
actor.org_unit.id | pickup_ou | integer | UNIQUE#1 |
actor.org_unit.id | item_owning_ou | integer | UNIQUE#1 |
actor.org_unit.id | item_circ_ou | integer | UNIQUE#1 |
permission.grp_tree.id | usr_grp | integer | UNIQUE#1 |
permission.grp_tree.id | requestor_grp | integer | UNIQUE#1 NOT NULL |
config.circ_modifier.code | circ_modifier | text | UNIQUE#1 |
config.item_type_map.code | marc_type | text | UNIQUE#1 |
config.item_form_map.code | marc_form | text | UNIQUE#1 |
config.videorecording_format_map.code | marc_vr_format | text | UNIQUE#1 |
ref_flag | boolean |
Tables referencing this one via Foreign Key Constraints:
F-Key | Name | Type | Description |
---|---|---|---|
config.hold_matrix_matchpoint.id | matchpoint | integer | PRIMARY KEY |
holdable | boolean | NOT NULL DEFAULT true | |
distance_is_from_owner | boolean | NOT NULL DEFAULT false | |
actor.org_unit_type.id | transit_range | integer | |
max_holds | integer | ||
include_frozen_holds | boolean | NOT NULL DEFAULT true | |
stop_blocked_user | boolean | NOT NULL DEFAULT false | |
config.rule_age_hold_protect.id | age_hold_protect_rule | integer |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigserial | PRIMARY KEY | |
fq_field | text | NOT NULL | |
identity_value | text | NOT NULL | |
config.i18n_locale.code | translation | text | NOT NULL |
string | text | NOT NULL |
F-Key | Name | Type | Description |
---|---|---|---|
code | text | PRIMARY KEY | |
config.language_map.code | marc_code | text | NOT NULL |
name | text | UNIQUE NOT NULL | |
description | text |
Tables referencing this one via Foreign Key Constraints:
/* * Copyright (C) 2005 Georgia Public Library Service * Mike Rylander <mrylander@gmail.com> * * Types of valid patron identification. * * Each patron must display at least one valid form of identification * in order to get a library card. This table lists those forms. * * * **** * * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public License * as published by the Free Software Foundation; either version 2 * of the License, or (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. */
F-Key | Name | Type | Description |
---|---|---|---|
id | serial | PRIMARY KEY | |
name | text | UNIQUE NOT NULL |
Tables referencing this one via Foreign Key Constraints:
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
code | text | PRIMARY KEY | |
value | text | NOT NULL |
Tables referencing this one via Foreign Key Constraints:
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
code | text | PRIMARY KEY | |
value | text | NOT NULL |
Tables referencing this one via Foreign Key Constraints:
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
code | text | PRIMARY KEY | |
value | text | NOT NULL |
Tables referencing this one via Foreign Key Constraints:
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
code | text | PRIMARY KEY | |
value | text | NOT NULL | |
description | text |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
/* * Copyright (C) 2005 Georgia Public Library Service * Mike Rylander <mrylander@gmail.com> * * XPath used for WoRMing * * This table contains the XPath used to chop up MODS into its * indexable parts. Each XPath entry is named and assigned to * a "class" of either title, subject, author, keyword or series. * * * **** * * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public License * as published by the Free Software Foundation; either version 2 * of the License, or (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. */
F-Key | Name | Type | Description |
---|---|---|---|
id | serial | PRIMARY KEY | |
field_class | text | NOT NULL | |
name | text | NOT NULL | |
xpath | text | NOT NULL | |
weight | integer | NOT NULL DEFAULT 1 | |
format | text | NOT NULL DEFAULT 'mods32'::text | |
search_field | boolean | NOT NULL DEFAULT true | |
facet_field | boolean | NOT NULL DEFAULT false |
Name | Constraint |
---|---|
metabib_field_field_class_check | CHECK ((((((lower(field_class) = 'title'::text) OR (lower(field_class) = 'author'::text)) OR (lower(field_class) = 'subject'::text)) OR (lower(field_class) = 'keyword'::text)) OR (lower(field_class) = 'series'::text))) |
Tables referencing this one via Foreign Key Constraints:
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
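A sketch of adding an index definition to config.metabib_field; the name and XPath below are hypothetical, and field_class must be title, author, subject, keyword, or series per metabib_field_field_class_check:

INSERT INTO config.metabib_field (field_class, name, xpath)
    VALUES ('title', 'alternative', '//mods32:mods/mods32:titleInfo[@type=''alternative'']');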
/* * Copyright (C) 2005 Georgia Public Library Service * Mike Rylander <mrylander@gmail.com> * * Patron Network Access level * * This will be used to inform the in-library firewall of how much * internet access the using patron should be allowed. * * **** * * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public License * as published by the Free Software Foundation; either version 2 * of the License, or (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. */
F-Key | Name | Type | Description |
---|---|---|---|
id | serial | PRIMARY KEY | |
name | text | UNIQUE NOT NULL |
Tables referencing this one via Foreign Key Constraints:
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
/* * Copyright (C) 2005 Georgia Public Library Service * Mike Rylander <mrylander@gmail.com> * * Types of valid non-cataloged items. * * * **** * * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public License * as published by the Free Software Foundation; either version 2 * of the License, or (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. */
F-Key | Name | Type | Description |
---|---|---|---|
id | serial | PRIMARY KEY | |
owning_lib | integer | UNIQUE#1 NOT NULL | |
name | text | UNIQUE#1 NOT NULL | |
circ_duration | interval | NOT NULL DEFAULT '14 days'::interval | |
in_house | boolean | NOT NULL DEFAULT false |
Tables referencing this one via Foreign Key Constraints:
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
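A hypothetical non-cataloged type owned by org unit 1, circulating for one week instead of the 14-day default:

INSERT INTO config.non_cataloged_type (owning_lib, name, circ_duration, in_house)
    VALUES (1, 'Paperback Exchange', '7 days'::interval, false);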
/* * Copyright (C) 2005 Georgia Public Library Service * Mike Rylander <mrylander@gmail.com> * * Hold Item Age Protection rules * * A hold request can only capture new(ish) items when they are * within a particular proximity of the home_ou of the requesting * user. The proximity ('prox' column) is calculated by counting * the number of tree edges between the user's home_ou and the owning_lib * of the copy that could fulfill the hold. * * * **** * * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public License * as published by the Free Software Foundation; either version 2 * of the License, or (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. */
F-Key | Name | Type | Description |
---|---|---|---|
id | serial | PRIMARY KEY | |
name | text | UNIQUE NOT NULL | |
age | interval | NOT NULL | |
prox | integer | NOT NULL |
Name | Constraint |
---|---|
rule_age_hold_protect_name_check | CHECK ((name ~ E'^\\w+$'::text)) |
Tables referencing this one via Foreign Key Constraints:
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
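Following the description above (an age window plus a maximum proximity in org-tree edges), a sample rule; the name must match ^\w+$ per the check constraint, and the values are hypothetical:

INSERT INTO config.rule_age_hold_protect (name, age, prox)
    VALUES ('6month', '6 months'::interval, 0);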
/* * Copyright (C) 2005 Georgia Public Library Service * Mike Rylander <mrylander@gmail.com> * * Circulation Duration rules * * Each circulation is given a duration based on one of these rules. * * * **** * * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public License * as published by the Free Software Foundation; either version 2 * of the License, or (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. */
F-Key | Name | Type | Description |
---|---|---|---|
id | serial | PRIMARY KEY | |
name | text | UNIQUE NOT NULL | |
extended | interval | NOT NULL | |
normal | interval | NOT NULL | |
shrt | interval | NOT NULL | |
max_renewals | integer | NOT NULL |
Name | Constraint |
---|---|
rule_circ_duration_name_check | CHECK ((name ~ E'^\\w+$'::text)) |
Tables referencing this one via Foreign Key Constraints:
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
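A duration rule sketch with hypothetical values; shrt holds the short-loan duration, and the name must match ^\w+$:

INSERT INTO config.rule_circ_duration (name, extended, normal, shrt, max_renewals)
    VALUES ('default_book', '21 days'::interval, '14 days'::interval, '7 days'::interval, 2);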
/* * Copyright (C) 2005 Georgia Public Library Service * Mike Rylander <mrylander@gmail.com> * * Circulation Max Fine rules * * Each circulation is given a maximum fine based on one of * these rules. * * * **** * * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public License * as published by the Free Software Foundation; either version 2 * of the License, or (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. */
F-Key | Name | Type | Description |
---|---|---|---|
id | serial | PRIMARY KEY | |
name | text | UNIQUE NOT NULL | |
amount | numeric(6,2) | NOT NULL | |
is_percent | boolean | NOT NULL DEFAULT false |
Name | Constraint |
---|---|
rule_max_fine_name_check | CHECK ((name ~ E'^\\w+$'::text)) |
Tables referencing this one via Foreign Key Constraints:
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
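A sample maximum-fine rule with hypothetical values; if is_percent were set to true, the amount would presumably be read as a percentage rather than a flat cap:

INSERT INTO config.rule_max_fine (name, amount, is_percent)
    VALUES ('overdue_max', 5.00, false);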
/* * Copyright (C) 2005 Georgia Public Library Service * Mike Rylander <mrylander@gmail.com> * * Circulation Recurring Fine rules * * Each circulation is given a recurring fine amount based on one of * these rules. The recurance_interval should not be any shorter * than the interval between runs of the fine_processor.pl script * (which is run from CRON), or you could miss fines. * * * **** * * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public License * as published by the Free Software Foundation; either version 2 * of the License, or (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. */
F-Key | Name | Type | Description |
---|---|---|---|
id | serial | PRIMARY KEY | |
name | text | UNIQUE NOT NULL | |
high | numeric(6,2) | NOT NULL | |
normal | numeric(6,2) | NOT NULL | |
low | numeric(6,2) | NOT NULL | |
recurance_interval | interval | NOT NULL DEFAULT '1 day'::interval |
Name | Constraint |
---|---|
rule_recuring_fine_name_check | CHECK ((name ~ E'^\\w+$'::text)) |
Tables referencing this one via Foreign Key Constraints:
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
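A sample recurring-fine rule with hypothetical values; per the note above, recurance_interval (the column name is spelled this way in the schema) should be no shorter than the interval at which the fine generator cron job runs:

INSERT INTO config.rule_recuring_fine (name, high, normal, low, recurance_interval)
    VALUES ('default_fines', 0.50, 0.25, 0.10, '1 day'::interval);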
/* * Copyright (C) 2005 Georgia Public Library Service * Mike Rylander <mrylander@gmail.com> * * Patron Standings * * This table contains the values that can be applied to a patron * by a staff member. These values should not be changed, other * than for translation, as the ID column is currently a "magic * number" in the source. :( * * **** * * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public License * as published by the Free Software Foundation; either version 2 * of the License, or (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. */
F-Key | Name | Type | Description |
---|---|---|---|
id | serial | PRIMARY KEY | |
value | text | UNIQUE NOT NULL |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
version | text | PRIMARY KEY | |
install_date | timestamp with time zone | NOT NULL DEFAULT now() |
F-Key | Name | Type | Description |
---|---|---|---|
code | text | PRIMARY KEY | |
value | text | NOT NULL |
Tables referencing this one via Foreign Key Constraints:
F-Key | Name | Type | Description |
---|---|---|---|
name | text | PRIMARY KEY | |
namespace_uri | text | NOT NULL | |
prefix | text | NOT NULL | |
xslt | text | NOT NULL |
F-Key | Name | Type | Description |
---|---|---|---|
id | serial | PRIMARY KEY | |
config.z3950_source.name | source | text | UNIQUE#1 NOT NULL |
name | text | NOT NULL | |
label | text | NOT NULL | |
code | integer | UNIQUE#1 NOT NULL | |
format | integer | UNIQUE#1 NOT NULL | |
truncation | integer | NOT NULL |
F-Key | Name | Type | Description |
---|---|---|---|
name | text | PRIMARY KEY | |
label | text | UNIQUE NOT NULL | |
host | text | NOT NULL | |
port | integer | NOT NULL | |
db | text | NOT NULL | |
record_format | text | NOT NULL DEFAULT 'FI'::text | |
transmission_format | text | NOT NULL DEFAULT 'usmarc'::text | |
auth | boolean | NOT NULL DEFAULT true |
Tables referencing this one via Foreign Key Constraints:
F-Key | Name | Type | Description |
---|---|---|---|
id | serial | PRIMARY KEY | |
actor.usr.id | owner | integer | UNIQUE#1 NOT NULL |
name | text | UNIQUE#1 NOT NULL | |
btype | text | UNIQUE#1 NOT NULL DEFAULT 'misc'::text | |
pub | boolean | NOT NULL DEFAULT false | |
create_time | timestamp with time zone | NOT NULL DEFAULT now() |
Tables referencing this one via Foreign Key Constraints:
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | serial | PRIMARY KEY | |
container.biblio_record_entry_bucket.id | bucket | integer | NOT NULL |
biblio.record_entry.id | target_biblio_record_entry | integer | NOT NULL |
create_time | timestamp with time zone | NOT NULL DEFAULT now() |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | serial | PRIMARY KEY | |
actor.usr.id | owner | integer | UNIQUE#1 NOT NULL |
name | text | UNIQUE#1 NOT NULL | |
btype | text | UNIQUE#1 NOT NULL DEFAULT 'misc'::text | |
pub | boolean | NOT NULL DEFAULT false | |
create_time | timestamp with time zone | NOT NULL DEFAULT now() |
Tables referencing this one via Foreign Key Constraints:
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | serial | PRIMARY KEY | |
container.call_number_bucket.id | bucket | integer | NOT NULL |
asset.call_number.id | target_call_number | integer | NOT NULL |
create_time | timestamp with time zone | NOT NULL DEFAULT now() |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | serial | PRIMARY KEY | |
actor.usr.id | owner | integer | UNIQUE#1 NOT NULL |
name | text | UNIQUE#1 NOT NULL | |
btype | text | UNIQUE#1 NOT NULL DEFAULT 'misc'::text | |
pub | boolean | NOT NULL DEFAULT false | |
create_time | timestamp with time zone | NOT NULL DEFAULT now() |
Tables referencing this one via Foreign Key Constraints:
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | serial | PRIMARY KEY | |
container.copy_bucket.id | bucket | integer | NOT NULL |
asset.copy.id | target_copy | integer | NOT NULL |
create_time | timestamp with time zone | NOT NULL DEFAULT now() |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | serial | PRIMARY KEY | |
actor.usr.id | owner | integer | UNIQUE#1 NOT NULL |
name | text | UNIQUE#1 NOT NULL | |
btype | text | UNIQUE#1 NOT NULL DEFAULT 'misc'::text | |
pub | boolean | NOT NULL DEFAULT false | |
create_time | timestamp with time zone | NOT NULL DEFAULT now() |
Tables referencing this one via Foreign Key Constraints:
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | serial | PRIMARY KEY | |
container.user_bucket.id | bucket | integer | NOT NULL |
actor.usr.id | target_user | integer | NOT NULL |
create_time | timestamp with time zone | NOT NULL DEFAULT now() |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigserial | PRIMARY KEY | |
biblio.record_entry.id | source | bigint | NOT NULL |
config.metabib_field.id | field | integer | NOT NULL |
value | text | NOT NULL | |
index_vector | tsvector | NOT NULL |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigserial | PRIMARY KEY | |
biblio.record_entry.id | record | bigint | NOT NULL |
tag | character(3) | NOT NULL | |
ind1 | text | ||
ind2 | text | ||
subfield | text | ||
value | text | NOT NULL | |
index_vector | tsvector | NOT NULL |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigserial | PRIMARY KEY | |
biblio.record_entry.id | source | bigint | NOT NULL |
config.metabib_field.id | field | integer | NOT NULL |
value | text | NOT NULL | |
index_vector | tsvector | NOT NULL |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigserial | PRIMARY KEY | |
fingerprint | text | NOT NULL | |
biblio.record_entry.id | master_record | bigint | |
mods | text |
Tables referencing this one via Foreign Key Constraints:
Index | Columns |
---|---|
metabib_metarecord_fingerprint_idx | fingerprint |
metabib_metarecord_master_record_idx | master_record |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigserial | PRIMARY KEY | |
metabib.metarecord.id | metarecord | bigint | NOT NULL |
biblio.record_entry.id | source | bigint | NOT NULL |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigserial | PRIMARY KEY | |
biblio.record_entry.id | record | bigint | |
item_type | text | ||
item_form | text | ||
bib_level | text | ||
control_type | text | ||
char_encoding | text | ||
enc_level | text | ||
audience | text | ||
lit_form | text | ||
type_mat | text | ||
cat_form | text | ||
pub_status | text | ||
item_lang | text | ||
vr_format | text | ||
date1 | text | ||
date2 | text |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigserial | PRIMARY KEY | |
source | bigint | NOT NULL | |
field | integer | NOT NULL | |
value | text | NOT NULL | |
index_vector | tsvector | NOT NULL |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigserial | PRIMARY KEY | |
biblio.record_entry.id | source | bigint | NOT NULL |
config.metabib_field.id | field | integer | NOT NULL |
value | text | NOT NULL | |
index_vector | tsvector | NOT NULL |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigserial | PRIMARY KEY | |
biblio.record_entry.id | source | bigint | NOT NULL |
config.metabib_field.id | field | integer | NOT NULL |
value | text | NOT NULL | |
index_vector | tsvector | NOT NULL |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigserial | PRIMARY KEY | |
actor.usr.id | usr | integer | NOT NULL |
xact_start | timestamp with time zone | NOT NULL DEFAULT now() | |
xact_finish | timestamp with time zone | ||
unrecovered | boolean |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigint | ||
usr | integer | ||
xact_start | timestamp with time zone | ||
xact_finish | timestamp with time zone | ||
total_paid | numeric | ||
last_payment_ts | timestamp with time zone | ||
last_payment_note | text | ||
last_payment_type | name | ||
total_owed | numeric | ||
last_billing_ts | timestamp with time zone | ||
last_billing_note | text | ||
last_billing_type | text | ||
balance_owed | numeric | ||
xact_type | name |
SELECT xact.id , xact.usr , xact.xact_start , xact.xact_finish , credit.amount AS total_paid , credit.payment_ts AS last_payment_ts , credit.note AS last_payment_note , credit.payment_type AS last_payment_type , debit.amount AS total_owed , debit.billing_ts AS last_billing_ts , debit.note AS last_billing_note , debit.billing_type AS last_billing_type , (COALESCE (debit.amount , (0)::numeric ) - COALESCE (credit.amount , (0)::numeric ) ) AS balance_owed , p.relname AS xact_type FROM ( ( (money.billable_xact xact JOIN pg_class p ON ( (xact.tableoid = p.oid) ) ) LEFT JOIN ( SELECT billing.xact , sum (billing.amount) AS amount , max (billing.billing_ts) AS billing_ts ,"last" (billing.note) AS note ,"last" (billing.billing_type) AS billing_type FROM money.billing WHERE (billing.voided IS FALSE) GROUP BY billing.xact ) debit ON ( (xact.id = debit.xact) ) ) LEFT JOIN ( SELECT payment_view.xact , sum (payment_view.amount) AS amount , max (payment_view.payment_ts) AS payment_ts ,"last" (payment_view.note) AS note ,"last" (payment_view.payment_type) AS payment_type FROM money.payment_view WHERE (payment_view.voided IS FALSE) GROUP BY payment_view.xact ) credit ON ( (xact.id = credit.xact) ) ) ORDER BY debit.billing_ts , credit.payment_ts;
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigint | ||
usr | integer | ||
billing_location | integer | ||
xact_start | timestamp with time zone | ||
xact_finish | timestamp with time zone | ||
total_paid | numeric | ||
last_payment_ts | timestamp with time zone | ||
last_payment_note | text | ||
last_payment_type | name | ||
total_owed | numeric | ||
last_billing_ts | timestamp with time zone | ||
last_billing_note | text | ||
last_billing_type | text | ||
balance_owed | numeric | ||
xact_type | name |
SELECT xact.id , xact.usr , COALESCE (circ.circ_lib , groc.billing_location ) AS billing_location , xact.xact_start , xact.xact_finish , sum (credit.amount) AS total_paid , max (credit.payment_ts) AS last_payment_ts ,"last" (credit.note) AS last_payment_note ,"last" (credit.payment_type) AS last_payment_type , sum (debit.amount) AS total_owed , max (debit.billing_ts) AS last_billing_ts ,"last" (debit.note) AS last_billing_note ,"last" (debit.billing_type) AS last_billing_type , (COALESCE (sum (debit.amount) , (0)::numeric ) - COALESCE (sum (credit.amount) , (0)::numeric ) ) AS balance_owed , p.relname AS xact_type FROM ( ( ( ( (money.billable_xact xact JOIN pg_class p ON ( (xact.tableoid = p.oid) ) ) LEFT JOIN"action".circulation circ ON ( (circ.id = xact.id) ) ) LEFT JOIN money.grocery groc ON ( (groc.id = xact.id) ) ) LEFT JOIN ( SELECT billing.xact , billing.voided , sum (billing.amount) AS amount , max (billing.billing_ts) AS billing_ts ,"last" (billing.note) AS note ,"last" (billing.billing_type) AS billing_type FROM money.billing WHERE (billing.voided IS FALSE) GROUP BY billing.xact , billing.voided ) debit ON ( ( (xact.id = debit.xact) AND (debit.voided IS FALSE) ) ) ) LEFT JOIN ( SELECT payment_view.xact , payment_view.voided , sum (payment_view.amount) AS amount , max (payment_view.payment_ts) AS payment_ts ,"last" (payment_view.note) AS note ,"last" (payment_view.payment_type) AS payment_type FROM money.payment_view WHERE (payment_view.voided IS FALSE) GROUP BY payment_view.xact , payment_view.voided ) credit ON ( ( (xact.id = credit.xact) AND (credit.voided IS FALSE) ) ) ) GROUP BY xact.id , xact.usr , COALESCE (circ.circ_lib , groc.billing_location ) , xact.xact_start , xact.xact_finish , p.relname ORDER BY max (debit.billing_ts) , max (credit.payment_ts);
F-Key | Name | Type | Description |
---|---|---|---|
id | bigint | ||
usr | integer | ||
xact_start | timestamp with time zone | ||
xact_finish | timestamp with time zone | ||
total_paid | numeric | ||
last_payment_ts | timestamp with time zone | ||
last_payment_note | text | ||
last_payment_type | name | ||
total_owed | numeric | ||
last_billing_ts | timestamp with time zone | ||
last_billing_note | text | ||
last_billing_type | text | ||
balance_owed | numeric | ||
xact_type | name |
SELECT xact.id , xact.usr , xact.xact_start , xact.xact_finish , credit.amount AS total_paid , credit.payment_ts AS last_payment_ts , credit.note AS last_payment_note , credit.payment_type AS last_payment_type , debit.amount AS total_owed , debit.billing_ts AS last_billing_ts , debit.note AS last_billing_note , debit.billing_type AS last_billing_type , (COALESCE (debit.amount , (0)::numeric ) - COALESCE (credit.amount , (0)::numeric ) ) AS balance_owed , p.relname AS xact_type FROM ( ( (money.billable_xact xact JOIN pg_class p ON ( (xact.tableoid = p.oid) ) ) LEFT JOIN ( SELECT billing.xact , sum (billing.amount) AS amount , max (billing.billing_ts) AS billing_ts ,"last" (billing.note) AS note ,"last" (billing.billing_type) AS billing_type FROM money.billing GROUP BY billing.xact ) debit ON ( (xact.id = debit.xact) ) ) LEFT JOIN ( SELECT payment_view.xact , sum (payment_view.amount) AS amount , max (payment_view.payment_ts) AS payment_ts ,"last" (payment_view.note) AS note ,"last" (payment_view.payment_type) AS payment_type FROM money.payment_view GROUP BY payment_view.xact ) credit ON ( (xact.id = credit.xact) ) ) ORDER BY debit.billing_ts , credit.payment_ts;
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigserial | PRIMARY KEY | |
xact | bigint | NOT NULL | |
billing_ts | timestamp with time zone | NOT NULL DEFAULT now() | |
voided | boolean | NOT NULL DEFAULT false | |
voider | integer | ||
void_time | timestamp with time zone | ||
amount | numeric(6,2) | NOT NULL | |
billing_type | text | NOT NULL | |
note | text |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigint | PRIMARY KEY DEFAULT nextval('money.payment_id_seq'::regclass) | |
xact | bigint | NOT NULL | |
payment_ts | timestamp with time zone | NOT NULL DEFAULT now() | |
voided | boolean | NOT NULL DEFAULT false | |
amount | numeric(6,2) | NOT NULL | |
note | text | ||
amount_collected | numeric(6,2) | NOT NULL | |
accepting_usr | integer | NOT NULL | |
actor.workstation.id | cash_drawer | integer |
Table money.bnm_desk_payment Inherits bnm_payment
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigint | PRIMARY KEY DEFAULT nextval('money.payment_id_seq'::regclass) | |
xact | bigint | NOT NULL | |
payment_ts | timestamp with time zone | NOT NULL DEFAULT now() | |
voided | boolean | NOT NULL DEFAULT false | |
amount | numeric(6,2) | NOT NULL | |
note | text | ||
amount_collected | numeric(6,2) | NOT NULL | |
accepting_usr | integer | NOT NULL |
Table money.bnm_payment Inherits payment
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigint | ||
xact | bigint | ||
payment_ts | timestamp with time zone | ||
voided | boolean | ||
amount | numeric(6,2) | ||
note | text | ||
amount_collected | numeric(6,2) | ||
accepting_usr | integer | ||
payment_type | name |
SELECT p.id , p.xact , p.payment_ts , p.voided , p.amount , p.note , p.amount_collected , p.accepting_usr , c.relname AS payment_type FROM (money.bnm_payment p JOIN pg_class c ON ( (p.tableoid = c.oid) ) );
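The money payment tables form an inheritance chain (payment → bnm_payment → bnm_desk_payment → the cash, check and credit-card tables documented further below), and the view definition above recovers each row's concrete type by joining its tableoid to pg_class. A minimal sketch of the same technique, assuming at least one cash payment exists:

SELECT p.id, p.amount, c.relname AS payment_type
  FROM money.payment p                        -- the parent table also exposes child rows
  JOIN pg_class c ON (c.oid = p.tableoid)     -- the physical table identifies the payment type
 WHERE c.relname = 'cash_payment';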
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigint | PRIMARY KEY DEFAULT nextval('money.payment_id_seq'::regclass) | |
xact | bigint | NOT NULL | |
payment_ts | timestamp with time zone | NOT NULL DEFAULT now() | |
voided | boolean | NOT NULL DEFAULT false | |
amount | numeric(6,2) | NOT NULL | |
note | text | ||
amount_collected | numeric(6,2) | NOT NULL | |
accepting_usr | integer | NOT NULL | |
cash_drawer | integer |
Table money.cash_payment Inherits bnm_desk_payment
Index | Columns |
---|---|
money_cash_id_idx | id |
money_cash_payment_accepting_usr_idx | accepting_usr |
money_cash_payment_cash_drawer_idx | cash_drawer |
money_cash_payment_ts_idx | payment_ts |
money_cash_payment_xact_idx | xact |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
org_unit | integer | ||
cashdrawer | integer | ||
payment_type | name | ||
payment_ts | timestamp with time zone | ||
amount | numeric(6,2) | ||
voided | boolean | ||
note | text |
SELECT ou.id AS org_unit , ws.id AS cashdrawer , t.payment_type , p.payment_ts , p.amount , p.voided , p.note FROM ( ( (actor.org_unit ou JOIN actor.workstation ws ON ( (ou.id = ws.owning_lib) ) ) LEFT JOIN money.bnm_desk_payment p ON ( (ws.id = p.cash_drawer) ) ) LEFT JOIN money.payment_view t ON ( (p.id = t.id) ) );
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigint | PRIMARY KEY DEFAULT nextval('money.payment_id_seq'::regclass) | |
xact | bigint | NOT NULL | |
payment_ts | timestamp with time zone | NOT NULL DEFAULT now() | |
voided | boolean | NOT NULL DEFAULT false | |
amount | numeric(6,2) | NOT NULL | |
note | text | ||
amount_collected | numeric(6,2) | NOT NULL | |
accepting_usr | integer | NOT NULL | |
cash_drawer | integer | ||
check_number | text | NOT NULL |
Table money.check_payment Inherits bnm_desk_payment
Index | Columns |
---|---|
money_check_id_idx | id |
money_check_payment_accepting_usr_idx | accepting_usr |
money_check_payment_cash_drawer_idx | cash_drawer |
money_check_payment_ts_idx | payment_ts |
money_check_payment_xact_idx | xact |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigserial | PRIMARY KEY | |
actor.usr.id | usr | integer | NOT NULL |
actor.usr.id | collector | integer | NOT NULL |
actor.org_unit.id | location | integer | NOT NULL |
enter_time | timestamp with time zone |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigint | PRIMARY KEY DEFAULT nextval('money.payment_id_seq'::regclass) | |
xact | bigint | NOT NULL | |
payment_ts | timestamp with time zone | NOT NULL DEFAULT now() | |
voided | boolean | NOT NULL DEFAULT false | |
amount | numeric(6,2) | NOT NULL | |
note | text | ||
amount_collected | numeric(6,2) | NOT NULL | |
accepting_usr | integer | NOT NULL | |
cash_drawer | integer | ||
cc_type | text | ||
cc_number | text | ||
expire_month | integer | ||
expire_year | integer | ||
approval_code | text |
Table money.credit_card_payment Inherits bnm_desk_payment
Index | Columns |
---|---|
money_credit_card_id_idx | id |
money_credit_card_payment_accepting_usr_idx | accepting_usr |
money_credit_card_payment_cash_drawer_idx | cash_drawer |
money_credit_card_payment_ts_idx | payment_ts |
money_credit_card_payment_xact_idx | xact |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigint | PRIMARY KEY DEFAULT nextval('money.payment_id_seq'::regclass) | |
xact | bigint | NOT NULL | |
payment_ts | timestamp with time zone | NOT NULL DEFAULT now() | |
voided | boolean | NOT NULL DEFAULT false | |
amount | numeric(6,2) | NOT NULL | |
note | text | ||
amount_collected | numeric(6,2) | NOT NULL | |
accepting_usr | integer | NOT NULL |
Table money.credit_payment Inherits bnm_payment
Index | Columns |
---|---|
money_credit_id_idx | id |
money_credit_payment_accepting_usr_idx | accepting_usr |
money_credit_payment_payment_ts_idx | payment_ts |
money_credit_payment_xact_idx | xact |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigint | ||
xact | bigint | ||
payment_ts | timestamp with time zone | ||
voided | boolean | ||
amount | numeric(6,2) | ||
note | text | ||
amount_collected | numeric(6,2) | ||
accepting_usr | integer | ||
cash_drawer | integer | ||
payment_type | name |
SELECT p.id , p.xact , p.payment_ts , p.voided , p.amount , p.note , p.amount_collected , p.accepting_usr , p.cash_drawer , c.relname AS payment_type FROM (money.bnm_desk_payment p JOIN pg_class c ON ( (p.tableoid = c.oid) ) );
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigint | PRIMARY KEY DEFAULT nextval('money.payment_id_seq'::regclass) | |
xact | bigint | NOT NULL | |
payment_ts | timestamp with time zone | NOT NULL DEFAULT now() | |
voided | boolean | NOT NULL DEFAULT false | |
amount | numeric(6,2) | NOT NULL | |
note | text | ||
amount_collected | numeric(6,2) | NOT NULL | |
accepting_usr | integer | NOT NULL |
Table money.forgive_payment Inherits bnm_payment
Index | Columns |
---|---|
money_forgive_id_idx | id |
money_forgive_payment_accepting_usr_idx | accepting_usr |
money_forgive_payment_payment_ts_idx | payment_ts |
money_forgive_payment_xact_idx | xact |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigint | PRIMARY KEY DEFAULT nextval('money.payment_id_seq'::regclass) | |
xact | bigint | NOT NULL | |
payment_ts | timestamp with time zone | NOT NULL DEFAULT now() | |
voided | boolean | NOT NULL DEFAULT false | |
amount | numeric(6,2) | NOT NULL | |
note | text | ||
amount_collected | numeric(6,2) | NOT NULL | |
accepting_usr | integer | NOT NULL |
Table money.goods_payment Inherits bnm_payment
Index | Columns |
---|---|
money_goods_id_idx | id |
money_goods_payment_accepting_usr_idx | accepting_usr |
money_goods_payment_payment_ts_idx | payment_ts |
money_goods_payment_xact_idx | xact |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigint | PRIMARY KEY DEFAULT nextval('money.billable_xact_id_seq'::regclass) | |
usr | integer | NOT NULL | |
xact_start | timestamp with time zone | NOT NULL DEFAULT now() | |
xact_finish | timestamp with time zone | ||
billing_location | integer | NOT NULL | |
note | text | ||
unrecovered | boolean |
Table money.grocery Inherits billable_xact
Index | Columns |
---|---|
circ_open_date_idx | xact_start (partial: WHERE xact_finish IS NULL) |
m_g_usr_idx | usr |
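The first index above is partial. A hedged sketch of the DDL that would produce it (the exact statement is not preserved in this dump):

CREATE INDEX circ_open_date_idx ON money.grocery (xact_start)
    WHERE xact_finish IS NULL;   -- only open (unfinished) transactions are indexed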
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigint | ||
xact | bigint | ||
payment_ts | timestamp with time zone | ||
voided | boolean | ||
amount | numeric(6,2) | ||
note | text | ||
amount_collected | numeric(6,2) | ||
accepting_usr | integer | ||
payment_type | name |
SELECT p.id , p.xact , p.payment_ts , p.voided , p.amount , p.note , p.amount_collected , p.accepting_usr , c.relname AS payment_type FROM (money.bnm_payment p JOIN pg_class c ON ( (p.tableoid = c.oid) ) ) WHERE ( ( (c.relname <> 'cash_payment'::name) AND (c.relname <> 'check_payment'::name) ) AND (c.relname <> 'credit_card_payment'::name) );
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigint | ||
usr | integer | ||
billing_location | integer | ||
xact_start | timestamp with time zone | ||
xact_finish | timestamp with time zone | ||
total_paid | numeric | ||
last_payment_ts | timestamp with time zone | ||
last_payment_note | text | ||
last_payment_type | name | ||
total_owed | numeric | ||
last_billing_ts | timestamp with time zone | ||
last_billing_note | text | ||
last_billing_type | text | ||
balance_owed | numeric | ||
xact_type | name |
SELECT xact.id , xact.usr , COALESCE (circ.circ_lib , groc.billing_location ) AS billing_location , xact.xact_start , xact.xact_finish , sum (credit.amount) AS total_paid , max (credit.payment_ts) AS last_payment_ts ,"last" (credit.note) AS last_payment_note ,"last" (credit.payment_type) AS last_payment_type , sum (debit.amount) AS total_owed , max (debit.billing_ts) AS last_billing_ts ,"last" (debit.note) AS last_billing_note ,"last" (debit.billing_type) AS last_billing_type , (COALESCE (sum (debit.amount) , (0)::numeric ) - COALESCE (sum (credit.amount) , (0)::numeric ) ) AS balance_owed , p.relname AS xact_type FROM ( ( ( ( (money.billable_xact xact JOIN pg_class p ON ( (xact.tableoid = p.oid) ) ) LEFT JOIN"action".circulation circ ON ( (circ.id = xact.id) ) ) LEFT JOIN money.grocery groc ON ( (groc.id = xact.id) ) ) LEFT JOIN ( SELECT billing.xact , billing.voided , sum (billing.amount) AS amount , max (billing.billing_ts) AS billing_ts ,"last" (billing.note) AS note ,"last" (billing.billing_type) AS billing_type FROM money.billing WHERE (billing.voided IS FALSE) GROUP BY billing.xact , billing.voided ) debit ON ( ( (xact.id = debit.xact) AND (debit.voided IS FALSE) ) ) ) LEFT JOIN ( SELECT payment_view.xact , payment_view.voided , sum (payment_view.amount) AS amount , max (payment_view.payment_ts) AS payment_ts ,"last" (payment_view.note) AS note ,"last" (payment_view.payment_type) AS payment_type FROM money.payment_view WHERE (payment_view.voided IS FALSE) GROUP BY payment_view.xact , payment_view.voided ) credit ON ( ( (xact.id = credit.xact) AND (credit.voided IS FALSE) ) ) ) WHERE (xact.xact_finish IS NULL) GROUP BY xact.id , xact.usr , COALESCE (circ.circ_lib , groc.billing_location ) , xact.xact_start , xact.xact_finish , p.relname ORDER BY max (debit.billing_ts) , max (credit.payment_ts);
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
xact | bigint | ||
last_billing_type | text | ||
last_billing_note | text | ||
last_billing_ts | timestamp with time zone | ||
total_owed | numeric |
SELECT billing.xact ,"last" (billing.billing_type) AS last_billing_type ,"last" (billing.note) AS last_billing_note , max (billing.billing_ts) AS last_billing_ts , sum (COALESCE (billing.amount , (0)::numeric ) ) AS total_owed FROM money.billing WHERE (billing.voided IS FALSE) GROUP BY billing.xact ORDER BY max (billing.billing_ts);
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
xact | bigint | ||
last_billing_type | text | ||
last_billing_note | text | ||
last_billing_ts | timestamp with time zone | ||
total_owed | numeric |
SELECT billing.xact , billing.billing_type AS last_billing_type ,"last" (billing.note) AS last_billing_note , max (billing.billing_ts) AS last_billing_ts , sum (COALESCE (billing.amount , (0)::numeric ) ) AS total_owed FROM money.billing WHERE (billing.voided IS FALSE) GROUP BY billing.xact , billing.billing_type ORDER BY max (billing.billing_ts);
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
xact | bigint | ||
last_payment_type | name | ||
last_payment_note | text | ||
last_payment_ts | timestamp with time zone | ||
total_paid | numeric |
SELECT payment_view.xact ,"last" (payment_view.payment_type) AS last_payment_type ,"last" (payment_view.note) AS last_payment_note , max (payment_view.payment_ts) AS last_payment_ts , sum (COALESCE (payment_view.amount , (0)::numeric ) ) AS total_paid FROM money.payment_view WHERE (payment_view.voided IS FALSE) GROUP BY payment_view.xact ORDER BY max (payment_view.payment_ts);
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
usr | integer | ||
total_paid | numeric | ||
total_owed | numeric | ||
balance_owed | numeric |
SELECT open_billable_xact_summary.usr , sum (open_billable_xact_summary.total_paid) AS total_paid , sum (open_billable_xact_summary.total_owed) AS total_owed , sum (open_billable_xact_summary.balance_owed) AS balance_owed FROM money.open_billable_xact_summary WHERE (open_billable_xact_summary.xact_type = 'circulation'::name) GROUP BY open_billable_xact_summary.usr;
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
usr | integer | ||
total_paid | numeric | ||
total_owed | numeric | ||
balance_owed | numeric |
SELECT open_billable_xact_summary.usr , sum (open_billable_xact_summary.total_paid) AS total_paid , sum (open_billable_xact_summary.total_owed) AS total_owed , sum (open_billable_xact_summary.balance_owed) AS balance_owed FROM money.open_billable_xact_summary GROUP BY open_billable_xact_summary.usr;
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigserial | PRIMARY KEY | |
xact | bigint | NOT NULL | |
payment_ts | timestamp with time zone | NOT NULL DEFAULT now() | |
voided | boolean | NOT NULL DEFAULT false | |
amount | numeric(6,2) | NOT NULL | |
note | text |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigint | ||
xact | bigint | ||
payment_ts | timestamp with time zone | ||
voided | boolean | ||
amount | numeric(6,2) | ||
note | text | ||
payment_type | name |
SELECT p.id , p.xact , p.payment_ts , p.voided , p.amount , p.note , c.relname AS payment_type FROM (money.payment p JOIN pg_class c ON ( (p.tableoid = c.oid) ) );
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
xact | bigint | ||
last_billing_type | text | ||
last_billing_note | text | ||
last_billing_ts | timestamp with time zone | ||
total_owed | numeric |
SELECT billing.xact ,"last" (billing.billing_type) AS last_billing_type ,"last" (billing.note) AS last_billing_note , max (billing.billing_ts) AS last_billing_ts , sum (COALESCE (billing.amount , (0)::numeric ) ) AS total_owed FROM money.billing WHERE (billing.voided IS FALSE) GROUP BY billing.xact ORDER BY max (billing.billing_ts);
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
xact | bigint | ||
last_billing_type | text | ||
last_billing_note | text | ||
last_billing_ts | timestamp with time zone | ||
total_owed | numeric |
SELECT billing.xact , billing.billing_type AS last_billing_type ,"last" (billing.note) AS last_billing_note , max (billing.billing_ts) AS last_billing_ts , sum (COALESCE (billing.amount , (0)::numeric ) ) AS total_owed FROM money.billing WHERE (billing.voided IS FALSE) GROUP BY billing.xact , billing.billing_type ORDER BY max (billing.billing_ts);
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
xact | bigint | ||
last_billing_type | text | ||
last_billing_note | text | ||
last_billing_ts | timestamp with time zone | ||
total_owed | numeric |
SELECT billing.xact ,"last" (billing.billing_type) AS last_billing_type ,"last" (billing.note) AS last_billing_note , max (billing.billing_ts) AS last_billing_ts , sum (CASE WHEN billing.voided THEN (0)::numeric ELSE COALESCE (billing.amount , (0)::numeric ) END ) AS total_owed FROM money.billing GROUP BY billing.xact ORDER BY max (billing.billing_ts);
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
xact | bigint | ||
last_payment_type | name | ||
last_payment_note | text | ||
last_payment_ts | timestamp with time zone | ||
total_paid | numeric |
SELECT payment_view.xact ,"last" (payment_view.payment_type) AS last_payment_type ,"last" (payment_view.note) AS last_payment_note , max (payment_view.payment_ts) AS last_payment_ts , sum (COALESCE (payment_view.amount , (0)::numeric ) ) AS total_paid FROM money.payment_view WHERE (payment_view.voided IS FALSE) GROUP BY payment_view.xact ORDER BY max (payment_view.payment_ts);
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
xact | bigint | ||
last_payment_type | name | ||
last_payment_note | text | ||
last_payment_ts | timestamp with time zone | ||
total_paid | numeric |
SELECT payment_view.xact ,"last" (payment_view.payment_type) AS last_payment_type ,"last" (payment_view.note) AS last_payment_note , max (payment_view.payment_ts) AS last_payment_ts , sum (CASE WHEN payment_view.voided THEN (0)::numeric ELSE COALESCE (payment_view.amount , (0)::numeric ) END ) AS total_paid FROM money.payment_view GROUP BY payment_view.xact ORDER BY max (payment_view.payment_ts);
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
usr | integer | ||
total_paid | numeric | ||
total_owed | numeric | ||
balance_owed | numeric |
SELECT billable_xact_summary.usr , sum (billable_xact_summary.total_paid) AS total_paid , sum (billable_xact_summary.total_owed) AS total_owed , sum (billable_xact_summary.balance_owed) AS balance_owed FROM money.billable_xact_summary WHERE (billable_xact_summary.xact_type = 'circulation'::name) GROUP BY billable_xact_summary.usr;
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
usr | integer | ||
total_paid | numeric | ||
total_owed | numeric | ||
balance_owed | numeric |
SELECT billable_xact_summary.usr , sum (billable_xact_summary.total_paid) AS total_paid , sum (billable_xact_summary.total_owed) AS total_owed , sum (billable_xact_summary.balance_owed) AS balance_owed FROM money.billable_xact_summary GROUP BY billable_xact_summary.usr;
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigint | PRIMARY KEY DEFAULT nextval('money.payment_id_seq'::regclass) | |
xact | bigint | NOT NULL | |
payment_ts | timestamp with time zone | NOT NULL DEFAULT now() | |
voided | boolean | NOT NULL DEFAULT false | |
amount | numeric(6,2) | NOT NULL | |
note | text | ||
amount_collected | numeric(6,2) | NOT NULL | |
accepting_usr | integer | NOT NULL |
Table money.work_payment Inherits bnm_payment
Index | Columns |
---|---|
money_work_id_idx | id |
money_work_payment_accepting_usr_idx | accepting_usr |
money_work_payment_payment_ts_idx | payment_ts |
money_work_payment_xact_idx | xact |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | serial | PRIMARY KEY | |
session | text | NOT NULL | |
requestor | integer | NOT NULL | |
create_time | integer | NOT NULL | |
workstation | text | NOT NULL | |
logfile | text | NOT NULL | |
time_delta | integer | NOT NULL | |
count | integer | NOT NULL |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
key | text | PRIMARY KEY | |
org | integer | NOT NULL | |
description | text | ||
creator | integer | NOT NULL | |
create_time | integer | NOT NULL | |
in_process | integer | NOT NULL | |
start_time | integer | ||
end_time | integer | ||
num_complete | integer | NOT NULL |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | serial | PRIMARY KEY | |
permission.grp_tree.id | grp | integer | UNIQUE#1 NOT NULL |
permission.perm_list.id | perm | integer | UNIQUE#1 NOT NULL |
depth | integer | NOT NULL | |
grantable | boolean | NOT NULL DEFAULT false |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | serial | PRIMARY KEY | |
name | text | UNIQUE NOT NULL | |
permission.grp_tree.id | parent | integer | |
usergroup | boolean | NOT NULL DEFAULT true | |
perm_interval | interval | NOT NULL DEFAULT '3 years'::interval | |
description | text | ||
application_perm | text |
Tables referencing this one via Foreign Key Constraints:
Index | Columns |
---|---|
grp_tree_parent_idx | parent |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | serial | PRIMARY KEY | |
code | text | UNIQUE NOT NULL | |
description | text |
Tables referencing this one via Foreign Key Constraints:
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | serial | PRIMARY KEY | |
actor.usr.id | usr | integer | UNIQUE#1 NOT NULL |
permission.grp_tree.id | grp | integer | UNIQUE#1 NOT NULL |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | serial | PRIMARY KEY | |
actor.usr.id | usr | integer | UNIQUE#1 NOT NULL |
permission.perm_list.id | perm | integer | UNIQUE#1 NOT NULL |
object_type | text | UNIQUE#1 NOT NULL | |
object_id | text | UNIQUE#1 NOT NULL | |
grantable | boolean | NOT NULL DEFAULT false |
F-Key | Name | Type | Description |
---|---|---|---|
id | serial | PRIMARY KEY | |
actor.usr.id | usr | integer | UNIQUE#1 NOT NULL |
permission.perm_list.id | perm | integer | UNIQUE#1 NOT NULL |
depth | integer | NOT NULL | |
grantable | boolean | NOT NULL DEFAULT false |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | serial | PRIMARY KEY | |
actor.usr.id | usr | integer | UNIQUE#1 NOT NULL |
actor.org_unit.id | work_ou | integer | UNIQUE#1 NOT NULL |
SELECT a.* FROM connectby('permission.grp_tree'::text,'parent'::text,'id'::text,'name'::text,$1::text,100,'.'::text) AS t(keyid text, parent_keyid text, level int, branch text,pos int) JOIN permission.grp_tree a ON a.id::text = t.keyid::text ORDER BY CASE WHEN a.parent IS NULL THEN 0 ELSE 1 END, a.name;
SELECT * FROM permission.grp_ancestors($1) UNION SELECT * FROM permission.grp_ancestors($2);
SELECT * FROM permission.grp_ancestors($1) INTERSECT SELECT * FROM permission.grp_ancestors($2);
SELECT a.* FROM connectby('permission.grp_tree'::text,'id'::text,'parent'::text,'name'::text,$1::text,100,'.'::text) AS t(keyid text, parent_keyid text, level int, branch text,pos int) JOIN permission.grp_tree a ON a.id::text = t.keyid::text ORDER BY CASE WHEN a.parent IS NULL THEN 0 ELSE 1 END, a.name;
SELECT * FROM permission.grp_ancestors($1) UNION SELECT * FROM permission.grp_descendants($1);
SELECT COUNT(id)::INT FROM ( SELECT id FROM permission.grp_combined_ancestors($1, $2) EXCEPT SELECT id FROM permission.grp_common_ancestors($1, $2) ) z;
DECLARE
    r_usr  actor.usr%ROWTYPE;
    r_perm permission.usr_perm_map%ROWTYPE;
BEGIN
    SELECT * INTO r_usr FROM actor.usr WHERE id = iuser;
    IF r_usr.active = FALSE THEN RETURN FALSE; END IF;
    IF r_usr.super_user = TRUE THEN RETURN TRUE; END IF;
    FOR r_perm IN
        SELECT *
          FROM permission.usr_perms(iuser) p
          JOIN permission.perm_list l ON (l.id = p.perm)
         WHERE (l.code = tperm AND p.grantable IS TRUE)
    LOOP
        PERFORM * FROM actor.org_unit_descendants(target_ou, r_perm.depth) WHERE id = r_usr.home_ou;
        IF FOUND THEN RETURN TRUE; ELSE RETURN FALSE; END IF;
    END LOOP;
    RETURN FALSE;
END;
DECLARE
    r_usr  actor.usr%ROWTYPE;
    r_perm permission.usr_perm_map%ROWTYPE;
BEGIN
    SELECT * INTO r_usr FROM actor.usr WHERE id = iuser;
    IF r_usr.active = FALSE THEN RETURN FALSE; END IF;
    IF r_usr.super_user = TRUE THEN RETURN TRUE; END IF;
    FOR r_perm IN
        SELECT *
          FROM permission.usr_perms(iuser) p
          JOIN permission.perm_list l ON (l.id = p.perm)
         WHERE l.code = tperm OR p.perm = -1
    LOOP
        PERFORM * FROM actor.org_unit_descendants(target_ou, r_perm.depth) WHERE id = r_usr.home_ou;
        IF FOUND THEN RETURN TRUE; ELSE RETURN FALSE; END IF;
    END LOOP;
    RETURN FALSE;
END;
SELECT permission.usr_has_object_perm( $1, $2, $3, $4, -1 );
DECLARE
    r_usr actor.usr%ROWTYPE;
    res   BOOL;
BEGIN
    SELECT * INTO r_usr FROM actor.usr WHERE id = iuser;
    IF r_usr.active = FALSE THEN RETURN FALSE; END IF;
    IF r_usr.super_user = TRUE THEN RETURN TRUE; END IF;
    SELECT TRUE INTO res
      FROM permission.usr_object_perm_map
     WHERE usr = r_usr.id AND object_type = obj_type AND object_id = obj_id;
    IF FOUND THEN RETURN TRUE; END IF;
    IF target_ou > -1 THEN
        RETURN permission.usr_has_perm( iuser, tperm, target_ou );
    END IF;
    RETURN FALSE;
END;
SELECT CASE WHEN permission.usr_has_home_perm( $1, $2, $3 ) THEN TRUE WHEN permission.usr_has_work_perm( $1, $2, $3 ) THEN TRUE ELSE FALSE END;
DECLARE
    r_woum permission.usr_work_ou_map%ROWTYPE;
    r_usr  actor.usr%ROWTYPE;
    r_perm permission.usr_perm_map%ROWTYPE;
BEGIN
    SELECT * INTO r_usr FROM actor.usr WHERE id = iuser;
    IF r_usr.active = FALSE THEN RETURN FALSE; END IF;
    IF r_usr.super_user = TRUE THEN RETURN TRUE; END IF;
    FOR r_perm IN
        SELECT *
          FROM permission.usr_perms(iuser) p
          JOIN permission.perm_list l ON (l.id = p.perm)
         WHERE l.code = tperm OR p.perm = -1
    LOOP
        FOR r_woum IN SELECT * FROM permission.usr_work_ou_map WHERE usr = iuser LOOP
            PERFORM * FROM actor.org_unit_descendants(target_ou, r_perm.depth) WHERE id = r_woum.work_ou;
            IF FOUND THEN RETURN TRUE; END IF;
        END LOOP;
    END LOOP;
    RETURN FALSE;
END;
SELECT DISTINCT ON (usr,perm) * FROM ( (SELECT * FROM permission.usr_perm_map WHERE usr = $1) UNION ALL (SELECT -p.id, $1 AS usr, p.perm, p.depth, p.grantable FROM permission.grp_perm_map p WHERE p.grp IN ( SELECT (permission.grp_ancestors( (SELECT profile FROM actor.usr WHERE id = $1) )).id ) ) UNION ALL (SELECT -p.id, $1 AS usr, p.perm, p.depth, p.grantable FROM permission.grp_perm_map p WHERE p.grp IN (SELECT (permission.grp_ancestors(m.grp)).id FROM permission.usr_grp_map m WHERE usr = $1)) ) AS x ORDER BY 2, 3, 1 DESC, 5 DESC ;
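The bodies above are dumped without their function names, but the calls they make to one another (permission.usr_perms, permission.grp_ancestors, permission.usr_has_perm, permission.usr_has_object_perm) show how the permission layer is meant to be queried. A hedged usage sketch; the ids and permission codes below are placeholders, not values from this database:

SELECT * FROM permission.grp_ancestors(10);            -- walk a group up the grp_tree
SELECT * FROM permission.usr_perms(1);                 -- every permission a user inherits
SELECT permission.usr_has_perm(1, 'CREATE_USER', 4);   -- user id, permission code, target org unit
SELECT permission.usr_has_object_perm(1, 'UPDATE_RECORD', 'biblio.record_entry', '42');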
F-Key | Name | Type | Description |
---|---|---|---|
barcode | text | ||
price | numeric(8,2) | ||
create_date | timestamp with time zone | ||
flip_bindery_to_available | boolean | ||
flip_bindery_to_checked_out | boolean |
F-Key | Name | Type | Description |
---|---|---|---|
converted_pines_bib_id | bigint | UNIQUE NOT NULL | |
target_pines_bib_id | bigint | NOT NULL |
F-Key | Name | Type | Description |
---|---|---|---|
value | text | UNIQUE NOT NULL | |
stat_cat_entry | integer |
F-Key | Name | Type | Description |
---|---|---|---|
value | text | UNIQUE NOT NULL | |
stat_cat_entry | integer |
F-Key | Name | Type | Description |
---|---|---|---|
home_location | text | UNIQUE | |
holdable | boolean | DEFAULT true | |
opac_visible | boolean | DEFAULT true | |
circulate | boolean | DEFAULT true | |
copy_location | integer | DEFAULT 1 | |
stat_cat_entry | integer |
F-Key | Name | Type | Description |
---|---|---|---|
converted_pines_bib_id | bigint | NOT NULL | |
library | text | ||
barcode | text | UNIQUE NOT NULL | |
current_location | text | ||
home_location | text | ||
call_number | text | ||
phasefx.catoosa_item_type_to_circ_modifier_map.item_type | item_type | text | |
acq_date | date | ||
price | numeric(8,2) | ||
circulate | text | ||
total_charges | text | ||
cat1 | text | ||
cat2 | text | ||
target_pines_bib_id | bigint | ||
volume_id | bigint | ||
copy_id | bigint | ||
pines_copy_status | integer | ||
copy_location | integer | ||
circ_modifier | text | ||
circulate_flag | boolean | DEFAULT true | |
legacy_item_type_stat_cat_entry | integer | ||
legacy_home_location_stat_cat_entry | integer | ||
legacy_cat1_stat_cat_entry | integer | ||
legacy_cat2_stat_cat_entry | integer |
Name | Constraint |
---|---|
catoosa_item_import_converted_pines_bib_id_check | CHECK ((converted_pines_bib_id >= 4900000)) |
F-Key | Name | Type | Description |
---|---|---|---|
item_type | text | UNIQUE | |
circ_modifier | text | ||
item_circulate_flag | boolean | DEFAULT true | |
stat_cat_entry | integer |
Tables referencing this one via Foreign Key Constraints:
F-Key | Name | Type | Description |
---|---|---|---|
location | text | UNIQUE | |
transcribed_pines_copy_status | text | DEFAULT 'Available'::text | |
config.copy_status.id | pines_copy_status | integer |
F-Key | Name | Type | Description |
---|---|---|---|
legacy_bib_id | bigint | UNIQUE NOT NULL | |
converted_pines_bib_id | bigint | NOT NULL DEFAULT -1 | |
target_pines_bib_id | bigint | NOT NULL DEFAULT -1 |
Name | Constraint |
---|---|
quitman_bib_id_map_converted_pines_bib_id_check | CHECK ((((converted_pines_bib_id >= 4800000) AND (converted_pines_bib_id < 4900000)) OR (converted_pines_bib_id = -1))) |
Tables referencing this one via Foreign Key Constraints:
F-Key | Name | Type | Description |
---|---|---|---|
converted_pines_bib_id | bigint | UNIQUE NOT NULL | |
target_pines_bib_id | bigint | NOT NULL |
F-Key | Name | Type | Description |
---|---|---|---|
code | text | UNIQUE NOT NULL | |
description | text | ||
holdable | boolean | ||
opac_visible | boolean | ||
circulate | boolean | ||
circ_modifier | text | ||
copy_location | integer |
Tables referencing this one via Foreign Key Constraints:
F-Key | Name | Type | Description |
---|---|---|---|
barcode | text | UNIQUE NOT NULL | |
phasefx.quitman_item_types.code | item_type | text | |
location | text | NOT NULL | |
phasefx.quitman_collection_map.code | collection | text | NOT NULL |
call_number | text | NOT NULL | |
phasefx.quitman_item_status_map.code | item_status | text | NOT NULL |
price | numeric | NOT NULL | |
create_date | timestamp without time zone | NOT NULL | |
item_number | bigint | UNIQUE NOT NULL | |
phasefx.quitman_bib_id_map.legacy_bib_id | bib_number | bigint | |
target_pines_bib_id | bigint | ||
volume_id | bigint | ||
copy_id | bigint | ||
pines_copy_status | integer | ||
copy_location | integer | ||
circ_modifier | text | ||
legacy_item_type_stat_cat_entry | integer |
Name | Constraint |
---|---|
quitman_item_import_location_check | CHECK (("location" = 'QCPL'::text)) |
quitman_item_import_price_check | CHECK (((price >= (0)::numeric) AND (price <= (1000000)::numeric))) |
F-Key | Name | Type | Description |
---|---|---|---|
code | text | UNIQUE NOT NULL | |
description | text | ||
transcribed_pines_copy_status | text | ||
config.copy_status.id | pines_copy_status | integer |
Tables referencing this one via Foreign Key Constraints:
F-Key | Name | Type | Description |
---|---|---|---|
code | text | UNIQUE NOT NULL | |
description | text | ||
stat_cat_entry | integer |
Tables referencing this one via Foreign Key Constraints:
Standard public schema
F-Key | Name | Type | Description |
---|---|---|---|
code | text | ||
marc_code | text | ||
name | text | ||
description | text |
F-Key | Name | Type | Description |
---|---|---|---|
circ_mod | text |
F-Key | Name | Type | Description |
---|---|---|---|
cat_key | integer | ||
home_location | integer | ||
barcode | text | ||
price | numeric(8,2) | ||
item_type | text | ||
owning_library | integer | ||
callnum | text | ||
status | integer | ||
pubnote | text | ||
privnote | text |
F-Key | Name | Type | Description |
---|---|---|---|
cat_key | integer | ||
call_key | integer | ||
copy | integer | ||
cat1 | text | ||
cat2 | text | ||
createdate | timestamp with time zone | ||
home_location | text | ||
barcode | text | ||
price | integer | ||
item_type | text | ||
owning_library | text | ||
shadow | boolean | ||
callnum | text |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
catkey | integer | ||
itemkey | integer | ||
callnum | text | ||
cat1 | text | ||
cat2 | text | ||
createdate | date | ||
home_location | text | ||
barcode | text | ||
price | numeric(8,2) | ||
item_type | text | ||
owning_lib | text |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
callnum | text | ||
catkey | integer | ||
callkey | integer | ||
itemkey | integer | ||
cat1 | text | ||
cat2 | text | ||
createdate | date | ||
home_location | text | ||
barcode | text | ||
price | integer | ||
item_type | text | ||
owning_lib | text | ||
shadow | boolean |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | integer | ||
target | bigint | ||
current_copy | bigint | ||
hold_type | text | ||
pickup_lib | integer | ||
selection_ou | integer | ||
selection_depth | integer | ||
request_time | date | ||
capture_time | timestamp with time zone | ||
request_lib | integer | ||
requestor | integer | ||
usr | integer |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
item_id | text | ||
item_key | integer |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | integer | ||
target | bigint | ||
current_copy | bigint | ||
hold_type | text | ||
pickup_lib | integer | ||
selection_ou | integer | ||
selection_depth | integer | ||
request_time | date | ||
capture_time | timestamp with time zone | ||
request_lib | integer | ||
requestor | integer | ||
usr | integer |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | integer | ||
dob | timestamp with time zone | ||
general_division | text |
SELECT u.id , u.dob , CASE WHEN (u.dob IS NULL) THEN 'Adult'::text WHEN (age (u.dob) > '18 years'::interval ) THEN 'Adult'::text ELSE 'Juvenile'::text END AS general_division FROM actor.usr u;
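A hedged illustration of the cutoff used by the view above: age() of the date of birth is compared against an 18-year interval, and a NULL date of birth falls through to 'Adult'.

SELECT age(TIMESTAMPTZ '2000-01-01') > '18 years'::interval AS is_adult;
-- false as of the 2009 dump date, so such a patron would be classed 'Juvenile'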
F-Key | Name | Type | Description |
---|---|---|---|
id | bigint | ||
usr | integer | ||
xact_start | timestamp with time zone | ||
xact_finish | timestamp with time zone | ||
target_copy | bigint | ||
circ_lib | integer | ||
circ_staff | integer | ||
checkin_staff | integer | ||
checkin_lib | integer | ||
renewal_remaining | integer | ||
due_date | timestamp with time zone | ||
stop_fines_time | timestamp with time zone | ||
checkin_time | timestamp with time zone | ||
duration | interval | ||
fine_interval | interval | ||
recuring_fine | numeric(6,2) | ||
max_fine | numeric(6,2) | ||
phone_renewal | boolean | ||
desk_renewal | boolean | ||
opac_renewal | boolean | ||
duration_rule | text | ||
recuring_fine_rule | text | ||
max_fine_rule | text | ||
stop_fines | text |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigint | ||
usr | integer | ||
xact_start | timestamp with time zone | ||
xact_finish | timestamp with time zone | ||
target_copy | bigint | ||
circ_lib | integer | ||
circ_staff | integer | ||
checkin_staff | integer | ||
checkin_lib | integer | ||
renewal_remaining | integer | ||
due_date | timestamp with time zone | ||
stop_fines_time | timestamp with time zone | ||
checkin_time | timestamp with time zone | ||
duration | interval | ||
fine_interval | interval | ||
recuring_fine | numeric(6,2) | ||
max_fine | numeric(6,2) | ||
phone_renewal | boolean | ||
desk_renewal | boolean | ||
opac_renewal | boolean | ||
duration_rule | text | ||
recuring_fine_rule | text | ||
max_fine_rule | text | ||
stop_fines | text |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigint |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
lib | text | ||
amount | text |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigint | ||
recuring_fine | numeric(6,2) | ||
max_fine | numeric |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigint | ||
numeric | numeric(8,2) |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
cat_key | integer | ||
home_location | integer | ||
barcode | text | ||
price | numeric(8,2) | ||
item_type | text | ||
owning_library | integer | ||
callnum | text | ||
status | integer | ||
pubnote | text | ||
privnote | text |
F-Key | Name | Type | Description |
---|---|---|---|
barcode | text | ||
id | integer | ||
type | text |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
barcode | text | ||
bill | integer |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
call_num | text | ||
cat_key | integer | ||
call_key | integer | ||
shadow | boolean |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
charge_date | text | ||
due_date | text | ||
renewal_date | text | ||
charge_key1 | integer | ||
charge_key2 | integer | ||
charge_key3 | integer | ||
charge_key4 | integer | ||
user_key | integer | ||
overdue | boolean | ||
library | text | ||
claim_return_date | text |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | integer | ||
name | text |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
user_key | integer | ||
text |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
available | text | ||
status | text | ||
notified | date | ||
num_of_notices | integer | ||
cat_key | integer | ||
call_key | integer | ||
item_key | integer | ||
hold_key | integer | ||
user_key | integer | ||
hold_date | date | ||
hold_range | text | ||
pickup_lib | text | ||
placing_lib | text | ||
owning_lib | text | ||
inactive_date | date | ||
inactive_reason | text | ||
hold_level | text |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
lib | integer | ||
item | bigint |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
lib | text | ||
max_fine | numeric(6,2) |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
barcode | text | ||
circ | boolean |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
profile | text | ||
lib | text | ||
barcode | text |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
barcode | text | ||
cnt | integer |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
barcode | text | ||
lib | text | ||
title | text | ||
author | text |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
lib | text | ||
amount | text |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
barcode | text | ||
lib | text |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
barcode | text | ||
cnt | integer |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
profile | text | ||
item | bigint |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
destination_lib | text | ||
owning_lib | text | ||
starting_lib | text | ||
transit_date | timestamp with time zone | ||
transit_reason | text | ||
cat_key | integer | ||
call_key | integer | ||
item_key | integer | ||
hold_key | integer |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
item_type | text | ||
recuring_fine | numeric(6,2) | ||
renewals | integer |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigint | ||
creator | integer | ||
editor | integer | ||
source | integer | ||
quality | integer | ||
create_date | timestamp with time zone | ||
edit_date | timestamp with time zone | ||
active | boolean | ||
deleted | boolean | ||
fingerprint | text | ||
tcn_source | text | ||
tcn_value | text | ||
marc | text | ||
last_xact_id | text |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigint | ||
circ_modifier | text |
F-Key | Name | Type | Description |
---|---|---|---|
ts_name | text | PRIMARY KEY | |
prs_name | text | NOT NULL | |
locale | text |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
ts_name | text | PRIMARY KEY | |
tok_alias | text | PRIMARY KEY | |
dict_name | text[] |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
dict_name | text | PRIMARY KEY | |
dict_init | regprocedure | ||
dict_initoption | text | ||
dict_lexize | regprocedure | NOT NULL | |
dict_comment | text |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
prs_name | text | PRIMARY KEY | |
prs_start | regprocedure | NOT NULL | |
prs_nexttoken | regprocedure | NOT NULL | |
prs_end | regprocedure | NOT NULL | |
prs_headline | regprocedure | NOT NULL | |
prs_lextype | regprocedure | NOT NULL | |
prs_comment | text |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
cat_1 | text | ||
creation_date | date | ||
cat_2 | text | ||
current_location | text | ||
item_id | text | ||
cat_key | integer | ||
call_key | integer | ||
item_key | integer | ||
price | numeric(8,2) | ||
item_type | text | ||
owning_library | text | ||
shadow | boolean | ||
item_comment | text | ||
last_import_date | date | ||
home_location | text | ||
call_num | text |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
cat_1 | text | ||
creation_date | date | ||
cat_2 | text | ||
current_location | text | ||
item_id | text | ||
cat_key | integer | ||
call_key | integer | ||
item_key | integer | ||
price | numeric(8,2) | ||
item_type | text | ||
owning_library | text | ||
shadow | boolean | ||
item_comment | text | ||
last_import_date | date | ||
home_location | text |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigint | ||
usr | integer | ||
xact_start | timestamp with time zone | ||
xact_finish | timestamp with time zone | ||
total_paid | numeric | ||
last_payment_ts | timestamp with time zone | ||
last_payment_note | text | ||
last_payment_type | text | ||
total_owed | numeric | ||
last_billing_ts | timestamp with time zone | ||
last_billing_note | text | ||
last_billing_type | text | ||
balance_owed | numeric | ||
xact_type | text |
F-Key | Name | Type | Description |
---|---|---|---|
cat_key | integer | ||
call_key | integer | ||
copy | integer | ||
cat1 | text | ||
cat2 | text | ||
createdate | timestamp with time zone | ||
home_location | text | ||
barcode | text | ||
price | integer | ||
item_type | text | ||
owning_library | text | ||
shadow | boolean | ||
callnum | text |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigint | ||
metarecord | bigint | ||
fingerprint | text | ||
quality | integer | ||
tcn_source | text | ||
tcn_value | text | ||
title | text | ||
author | text | ||
publisher | text | ||
pubdate | text | ||
isbn | text | ||
issn | text | ||
topic_subject | text | ||
geographic_subject | text | ||
genre | text | ||
name_subject | text | ||
corporate_subject | text |
SELECT r.id , s.metarecord , r.fingerprint , r.quality , r.tcn_source , r.tcn_value , ( SELECT"first" (full_rec.value) AS title FROM metabib.full_rec WHERE ( ( (full_rec.tag = '245'::bpchar) AND (full_rec.subfield = 'a'::text) ) AND (full_rec.record = r.id) ) ) AS title , ( SELECT"first" (full_rec.value) AS title FROM metabib.full_rec WHERE ( ( (full_rec.tag = '100'::bpchar) AND (full_rec.subfield = 'a'::text) ) AND (full_rec.record = r.id) ) ) AS author , ( SELECT"first" (full_rec.value) AS title FROM metabib.full_rec WHERE ( ( (full_rec.tag = '260'::bpchar) AND (full_rec.subfield = 'b'::text) ) AND (full_rec.record = r.id) ) ) AS publisher , ( SELECT"first" ("substring" (full_rec.value , E'\\d+'::text ) ) AS title FROM metabib.full_rec WHERE ( ( (full_rec.tag = '260'::bpchar) AND (full_rec.subfield = 'c'::text) ) AND (full_rec.record = r.id) ) ) AS pubdate , ( SELECT"first" ("substring" (full_rec.value , E'^\\w+'::text ) ) AS title FROM metabib.full_rec WHERE ( ( ( (full_rec.tag = '020'::bpchar) OR (full_rec.tag = '024'::bpchar) ) AND (full_rec.subfield = 'a'::text) ) AND (full_rec.record = r.id) ) ) AS isbn , ( SELECT"first" ("substring" (full_rec.value , E'^\\w+'::text ) ) AS title FROM metabib.full_rec WHERE ( ( (full_rec.tag = '022'::bpchar) AND (full_rec.subfield = 'a'::text) ) AND (full_rec.record = r.id) ) ) AS issn , ( ('["'::text || array_to_string (ARRAY ( SELECT"replace" ("replace" (full_rec.value ,'"'::text ,'"'::text ) , E'\\'::text , E'\\\\'::text ) AS "replace" FROM metabib.full_rec WHERE ( ( (full_rec.tag = '650'::bpchar) AND (full_rec.subfield = 'a'::text) ) AND (full_rec.record = r.id) ) ) ,'","'::text ) ) || '"]'::text ) AS topic_subject , ( ('["'::text || array_to_string (ARRAY ( SELECT"replace" ("replace" (full_rec.value ,'"'::text ,'"'::text ) , E'\\'::text , E'\\\\'::text ) AS "replace" FROM metabib.full_rec WHERE ( ( (full_rec.tag = '651'::bpchar) AND (full_rec.subfield = 'a'::text) ) AND (full_rec.record = r.id) ) ) ,'","'::text ) ) || '"]'::text ) AS geographic_subject , ( ('["'::text || array_to_string (ARRAY ( SELECT"replace" ("replace" (full_rec.value ,'"'::text ,'"'::text ) , E'\\'::text , E'\\\\'::text ) AS "replace" FROM metabib.full_rec WHERE ( ( (full_rec.tag = '655'::bpchar) AND (full_rec.subfield = 'a'::text) ) AND (full_rec.record = r.id) ) ) ,'","'::text ) ) || '"]'::text ) AS genre , ( ('["'::text || array_to_string (ARRAY ( SELECT"replace" ("replace" (full_rec.value ,'"'::text ,'"'::text ) , E'\\'::text , E'\\\\'::text ) AS "replace" FROM metabib.full_rec WHERE ( ( (full_rec.tag = '600'::bpchar) AND (full_rec.subfield = 'a'::text) ) AND (full_rec.record = r.id) ) ) ,'","'::text ) ) || '"]'::text ) AS name_subject , ( ('["'::text || array_to_string (ARRAY ( SELECT"replace" ("replace" (full_rec.value ,'"'::text ,'"'::text ) , E'\\'::text , E'\\\\'::text ) AS "replace" FROM metabib.full_rec WHERE ( ( (full_rec.tag = '610'::bpchar) AND (full_rec.subfield = 'a'::text) ) AND (full_rec.record = r.id) ) ) ,'","'::text ) ) || '"]'::text ) AS corporate_subject FROM (biblio.record_entry r JOIN metabib.metarecord_source_map s ON ( (s.source = r.id) ) ) GROUP BY r.id , s.metarecord , r.fingerprint , r.quality , r.tcn_source , r.tcn_value;
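The view above assembles its display fields by probing metabib.full_rec for specific MARC tag/subfield pairs. The same lookup can be issued directly; a minimal sketch for the title (245 $a) of one record:

SELECT fr.value AS title
  FROM metabib.full_rec fr
 WHERE fr.record   = 1            -- a biblio.record_entry id
   AND fr.tag      = '245'
   AND fr.subfield = 'a';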
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | integer | ||
target | integer | ||
current_copy | bigint | ||
hold_type | text | ||
pickup_lib | integer | ||
selection_ou | integer | ||
selection_depth | integer | ||
request_time | date | ||
capture_time | timestamp with time zone | ||
request_lib | integer | ||
requestor | integer | ||
usr | integer |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
vid | bigint | ||
pid | bigint | ||
title | text | ||
item_form | text | ||
item_type | text | ||
bib_level | text | ||
secondary_f | text |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
callnum | text | ||
catkey | integer | ||
callkey | integer | ||
itemkey | integer | ||
cat1 | text | ||
cat2 | text | ||
createdate | date | ||
home_location | text | ||
barcode | text | ||
price | integer | ||
item_type | text | ||
owning_lib | text | ||
shadow | boolean |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
postgres |
select prs_name from pg_ts_cfg where oid = show_curcfg()
aggregate_dummy
aggregate_dummy
aggregate_dummy
my $txt = shift; $txt =~ s/^\s+//o; $txt =~ s/[\[\]\{\}\(\)`'"#<>\*\?\-\+\$\\]+//og; $txt =~ s/\s+$//o; if ($txt =~ /(\d{3}(?:\.\d+)?)/o) { return $1; } else { return (split /\s+/, $txt)[0]; }
SELECT SUBSTRING(call_number_dewey($1) FROM 1 FOR $2);
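The Perl body above appears to back call_number_dewey (the name comes only from the two-argument SQL wrapper just above): it strips punctuation and returns the leading Dewey class number, or the first word when no class number is present. A hedged example:

SELECT call_number_dewey('813.54 SMI 1998');   -- expected: 813.54
SELECT call_number_dewey('FIC SMITH');         -- no class number, so: FIC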
concat
connectby_text
connectby_text
connectby_text_serial
connectby_text_serial
crosstab
crosstab
crosstab_hash
crosstab
crosstab
crosstab
dex_init
dex_lexize
use Unicode::Normalize; my $x = NFC(shift); $x =~ s/([\x{0080}-\x{fffd}])/sprintf('&#x%X;',ord($1))/sgoe; return $x;
boolean operation with text index
exectsq
SELECT ($1)[s] FROM generate_series(1, array_upper($1, 1)) AS s;
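The one-line body above expands an array into a set of rows by indexing it with generate_series (its function name is not preserved in this dump). Inlined, the same expression behaves like this:

SELECT (ARRAY['red','green','blue'])[s] AS element
  FROM generate_series(1, 3) AS s;   -- three rows: red, green, blue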
SELECT public.extract_marc_field($1,$2,$3,'');
SELECT regexp_replace(array_to_string( array_accum( output ),' ' ),$4,'','g') FROM xpath_table('id', 'marc', $1, $3, 'id='||$2)x(id INT, output TEXT);
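Taken together, the two statements above look like the extract_marc_field helpers: the three-argument form (named in its own body) passes an empty strip-pattern to the four-argument form, which runs xpath_table over the stored MARCXML. A hedged usage sketch; the argument order is inferred from the bodies and the XPath is only an illustrative placeholder:

SELECT public.extract_marc_field('biblio.record_entry', 1, '//*[@tag="245"]/*[@code="a"]');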
aggregate_dummy
SELECT CASE WHEN $1 IS NULL THEN $2 ELSE $1 END;
gbt_bit_compress
gbt_bit_consistent
gbt_bit_penalty
gbt_bit_picksplit
gbt_bit_same
gbt_bit_union
gbt_bpchar_compress
gbt_bpchar_consistent
gbt_bytea_compress
gbt_bytea_consistent
gbt_bytea_penalty
gbt_bytea_picksplit
gbt_bytea_same
gbt_bytea_union
gbt_cash_compress
gbt_cash_consistent
gbt_cash_penalty
gbt_cash_picksplit
gbt_cash_same
gbt_cash_union
gbt_cidr_compress
gbt_cidr_consistent
gbt_date_compress
gbt_date_consistent
gbt_date_penalty
gbt_date_picksplit
gbt_date_same
gbt_date_union
gbt_decompress
gbt_float4_compress
gbt_float4_consistent
gbt_float4_penalty
gbt_float4_picksplit
gbt_float4_same
gbt_float4_union
gbt_float8_compress
gbt_float8_consistent
gbt_float8_penalty
gbt_float8_picksplit
gbt_float8_same
gbt_float8_union
gbt_inet_compress
gbt_inet_consistent
gbt_inet_penalty
gbt_inet_picksplit
gbt_inet_same
gbt_inet_union
gbt_int2_compress
gbt_int2_consistent
gbt_int2_penalty
gbt_int2_picksplit
gbt_int2_same
gbt_int2_union
gbt_int4_compress
gbt_int4_consistent
gbt_int4_penalty
gbt_int4_picksplit
gbt_int4_same
gbt_int4_union
gbt_int8_compress
gbt_int8_consistent
gbt_int8_penalty
gbt_int8_picksplit
gbt_int8_same
gbt_int8_union
gbt_intv_compress
gbt_intv_consistent
gbt_intv_decompress
gbt_intv_penalty
gbt_intv_picksplit
gbt_intv_same
gbt_intv_union
gbt_macad_compress
gbt_macad_consistent
gbt_macad_penalty
gbt_macad_picksplit
gbt_macad_same
gbt_macad_union
gbt_numeric_compress
gbt_numeric_consistent
gbt_numeric_penalty
gbt_numeric_picksplit
gbt_numeric_same
gbt_numeric_union
gbt_oid_compress
gbt_oid_consistent
gbt_oid_penalty
gbt_oid_picksplit
gbt_oid_same
gbt_oid_union
gbt_text_compress
gbt_text_consistent
gbt_text_penalty
gbt_text_picksplit
gbt_text_same
gbt_text_union
gbt_time_compress
gbt_time_consistent
gbt_time_penalty
gbt_time_picksplit
gbt_time_same
gbt_time_union
gbt_timetz_compress
gbt_timetz_consistent
gbt_ts_compress
gbt_ts_consistent
gbt_ts_penalty
gbt_ts_picksplit
gbt_ts_same
gbt_ts_union
gbt_tstz_compress
gbt_tstz_consistent
gbt_var_decompress
gbtreekey_in
gbtreekey_out
gbtreekey_in
gbtreekey_out
gbtreekey_in
gbtreekey_out
gbtreekey_in
gbtreekey_out
gbtreekey_in
gbtreekey_out
get_covers
gtsvector_compress
gtsvector_consistent
gtsvector_decompress
gtsvector_in
gtsvector_out
gtsvector_penalty
gtsvector_picksplit
gtsvector_same
gtsvector_union
headline
headline
headline_current
headline_current
headline_byname
headline_byname
aggregate_dummy
SELECT $2;
tsvector_length
lexize
lexize_bycurrent
lexize_byname
return lc(shift);
SELECT public.naco_normalize($1,'');
use Unicode::Normalize; my $txt = lc(shift); my $sf = shift; $txt = NFD($txt); $txt =~ s/\pM+//go; # Remove diacritics $txt =~ s/\xE6/AE/go; # Convert ae digraph $txt =~ s/\x{153}/OE/go;# Convert oe digraph $txt =~ s/\xFE/TH/go; # Convert Icelandic thorn $txt =~ tr/\x{2070}\x{2071}\x{2072}\x{2073}\x{2074}\x{2075}\x{2076}\x{2077}\x{2078}\x{2079}\x{207A}\x{207B}/0123456789+-/;# Convert superscript numbers $txt =~ tr/\x{2080}\x{2081}\x{2082}\x{2083}\x{2084}\x{2085}\x{2086}\x{2087}\x{2088}\x{2089}\x{208A}\x{208B}/0123456789+-/;# Convert subscript numbers $txt =~ tr/\x{0251}\x{03B1}\x{03B2}\x{0262}\x{03B3}/AABGG/; # Convert Latin and Greek $txt =~ tr/\x{2113}\xF0\!\"\(\)\-\{\}\<\>\;\:\.\?\xA1\xBF\/\\\@\*\%\=\xB1\+\xAE\xA9\x{2117}\$\xA3\x{FFE1}\xB0\^\_\~\`/LD /; # Convert Misc $txt =~ tr/\'\[\]\|//d; # Remove Misc if ($sf && $sf =~ /^a/o) { my $commapos = index($txt,','); if ($commapos > -1) { if ($commapos != length($txt) - 1) { my @list = split /,/, $txt; my $first = shift @list; $txt = $first . ',' . join(' ', @list); } else { $txt =~ s/,/ /go; } } } else { $txt =~ s/,/ /go; } $txt =~ s/\s+/ /go; # Compress multiple spaces $txt =~ s/^\s+//o; # Remove leading space $txt =~ s/\s+$//o; # Remove trailing space return $txt;
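The Perl body above is the two-argument NACO-style normalizer wrapped by the one-argument SQL function before it: it lower-cases, strips diacritics via NFD, folds digraphs and superscript/subscript digits, converts or removes punctuation, and collapses whitespace, keeping the first comma only when normalizing an $a subfield. A small example, with the expected result inferred from the code rather than verified:

SELECT public.naco_normalize('Tolkien, J.R.R.');  -- expected: 'tolkien j r r'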
SELECT SUBSTRING( REGEXP_REPLACE( REGEXP_REPLACE( $1, '\\W*$', '' ), ' ', ' ' ), CASE WHEN $2::INT NOT BETWEEN 48 AND 57 THEN 1 ELSE $2::TEXT::INT + 1 END );
normal_rand
SELECT regexp_replace(regexp_replace(regexp_replace($1, E'\\n', ' ', 'g'), E'(?:^\\s+)|(\\s+$)', '', 'g'), E'\\s+', ' ', 'g');
SELECT $2;
SELECT $2;
DECLARE locale TEXT := REGEXP_REPLACE( REGEXP_REPLACE( raw_locale, E'[;, ].+$', '' ), E'_', '-', 'g' ); language TEXT := REGEXP_REPLACE( locale, E'-.+$', '' ); result config.i18n_core%ROWTYPE; fallback TEXT; keyfield TEXT := keyclass || '.' || keycol; BEGIN -- Try the full locale SELECT * INTO result FROM config.i18n_core WHERE fq_field = keyfield AND identity_value = keyvalue AND translation = locale; -- Try just the language IF NOT FOUND THEN SELECT * INTO result FROM config.i18n_core WHERE fq_field = keyfield AND identity_value = keyvalue AND translation = language; END IF; -- Fall back to the string we passed in in the first place IF NOT FOUND THEN EXECUTE 'SELECT ' || keycol || ' FROM ' || keytable || ' WHERE ' || identcol || ' = ' || quote_literal(keyvalue) INTO fallback; RETURN fallback; END IF; RETURN result.string; END;
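The PL/pgSQL body above resolves a translated string from config.i18n_core, trying the full locale first (e.g. 'fr-CA'), then the bare language ('fr'), and finally falling back to the untranslated value selected from the base table. Its name is not shown here; the sketch below uses oils_i18n_xlate purely as a hypothetical placeholder, with the argument order implied by the variables the body references (keytable, keyclass, keycol, identcol, keyvalue, raw_locale):

-- hypothetical function name and illustrative arguments
SELECT oils_i18n_xlate('config.copy_status', 'ccs', 'name', 'id', '1', 'fr-CA');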
BEGIN NEW.index_vector = to_tsvector(TG_ARGV[0], NEW.value); RETURN NEW; END;
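The trigger body above rebuilds a row's index_vector from its value column, using the text-search configuration named in the trigger's first argument. A sketch of how such a trigger is typically attached to one of the field_entry tables documented below (the trigger name, the trigger-function name, and the 'default' configuration are assumptions, not taken from this dump):

CREATE TRIGGER title_field_entry_fti_trigger
    BEFORE INSERT OR UPDATE ON metabib.title_field_entry
    FOR EACH ROW EXECUTE PROCEDURE field_entry_fti_update('default');
-- field_entry_fti_update stands in for whatever name the trigger function above is defined under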
parse
parse_current
parse_byname
prsd_end
prsd_getlexeme
prsd_headline
prsd_lextype
prsd_start
tsquerytree
rank_def
rank_def
rank
rank
rank_cd
rank_cd
rank_cd_def
rank_cd_def
use Unicode::Normalize; my $x = NFD(shift); $x =~ s/\pM+//go; return $x;
reset_tsearch
boolean operation with text index
rexectsq
set_curcfg
set_curcfg_byname
set_curdict
set_curdict_byname
set_curprs
set_curprs_byname
setweight
show_curcfg
snb_en_init
snb_lexize
snb_ru_init
spell_init
spell_lexize
ts_stat
ts_stat
strip
syn_init
syn_lexize
BEGIN RETURN $1::regclass; END;
SELECT CASE WHEN $1 IS NULL THEN $2 WHEN $2 IS NULL THEN $1 ELSE $1 || ' ' || $2 END;
to_tsquery
to_tsquery_current
to_tsquery_name
to_tsvector
to_tsvector_current
to_tsvector_name
token_type_current
token_type
token_type_byname
select m.ts_name, t.alias as tok_type, t.descr as description, p.token, m.dict_name, strip(to_tsvector(p.token)) as tsvector from parse( _get_parser_from_curcfg(), $1 ) as p, token_type() as t, pg_ts_cfgmap as m, pg_ts_cfg as c where t.tokid=p.tokid and t.alias = m.tok_alias and m.ts_name=c.ts_name and c.oid=show_curcfg()
tsearch2
tsquery_in
tsquery_out
tsvector_cmp
SELECT CASE WHEN $1 IS NULL THEN $2 WHEN $2 IS NULL THEN $1 ELSE $1 || ' ' || $2 END;
tsvector_eq
tsvector_ge
tsvector_gt
tsvector_in
tsvector_le
tsvector_lt
tsvector_ne
tsvector_out
return uc(shift);
xml_encode_special_chars
xml_valid
xpath_bool
SELECT xpath_list($1,$2,',')
xpath_list
SELECT xpath_nodeset($1,$2,'','')
SELECT xpath_nodeset($1,$2,'',$3)
xpath_nodeset
xpath_number
xpath_string
xpath_table
xslt_process
xslt_process
F-Key | Name | Type | Description |
---|---|---|---|
id | serial | PRIMARY KEY | |
active | boolean | NOT NULL DEFAULT true | |
config.metabib_field.id | field | integer | NOT NULL |
bump_type | text | NOT NULL | |
multiplier | numeric | NOT NULL DEFAULT 1.0 |
Name | Constraint |
---|---|
relevance_adjustment_bump_type_check | CHECK ((((bump_type = 'word_order'::text) OR (bump_type = 'first_word'::text)) OR (bump_type = 'full_match'::text))) |
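This table (search.relevance_adjustment, as referenced by the search functions further down) lets relevance be boosted when a hit matches the first word of the query, preserves the query's word order, or matches the indexed value exactly; the CHECK constraint limits bump_type to those three values. A hedged example of adding a boost, where the field id 6 is an arbitrary placeholder for a config.metabib_field row:

INSERT INTO search.relevance_adjustment (field, bump_type, multiplier)
VALUES (6, 'full_match', 5.0);  -- multiply the computed rank by 5 on an exact full-field match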
SELECT ($1)[s] FROM generate_series(1, array_upper($1, 1)) AS s;
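The one-line body above (search.explode_array, judging by the calls in the functions below) expands an array into a set of rows with generate_series; the staged search functions below use it to turn record-id and org-unit arrays into IN (...) subqueries. For example:

SELECT * FROM search.explode_array(ARRAY[101, 102, 103]);  -- three rows: 101, 102, 103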
use JSON::XS; my $json = shift; my $args = decode_json( $json ); my $id = 1; for my $k ( keys %$args ) { (my $alias = $k) =~ s/\|/_/gso; my ($class, $field) = split /\|/, $k; my $part = $args->{$k}; for my $p ( keys %$part ) { my $data = $part->{$p}; $data = [$data] if (!ref($data)); for my $datum ( @$data ) { return_next( { field_class => $class, field_name => $field, term => $datum, table_alias => $alias, term_type => $p, id => $id, } ); $id++; } } } return undef;
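The PL/Perl body above (search.parse_search_args, per the calls below) decodes a JSON search specification whose top-level keys are 'class|field' pairs and whose values map a term type (word, phrase, fts_query, fts_rank, ...) to one or more terms, emitting one row per term with a table alias derived from the key. An illustrative call, where the field name and terms are assumptions chosen only to show the input shape:

SELECT field_class, field_name, term_type, term
  FROM search.parse_search_args('{"title|maintitle": {"word": ["piano", "concerto"]}}');
-- expected: two rows with field_class 'title', field_name 'maintitle', term_type 'word'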
SELECT CASE WHEN $1 = 'author' THEN 'metabib.author_field_entry' WHEN $1 = 'title' THEN 'metabib.title_field_entry' WHEN $1 = 'subject' THEN 'metabib.subject_field_entry' WHEN $1 = 'keyword' THEN 'metabib.keyword_field_entry' WHEN $1 = 'series' THEN 'metabib.series_field_entry' END;
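The CASE expression above (search.pick_table, per the calls below) maps a search class to the metabib table holding its index vectors, returning NULL for any unrecognized class. For example:

SELECT search.pick_table('subject');  -- expected: 'metabib.subject_field_entry'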
DECLARE current_res search.search_result%ROWTYPE; query_part search.search_args%ROWTYPE; phrase_query_part search.search_args%ROWTYPE; rank_adjust_id INT; core_rel_limit INT; core_chk_limit INT; core_skip_chk INT; rank_adjust search.relevance_adjustment%ROWTYPE; query_table TEXT; tmp_text TEXT; tmp_int INT; current_rank TEXT; ranks TEXT[] := '{}'; query_table_alias TEXT; from_alias_array TEXT[] := '{}'; used_ranks TEXT[] := '{}'; mb_field INT; mb_field_list INT[]; search_org_list INT[]; select_clause TEXT := 'SELECT'; from_clause TEXT := ' FROM metabib.metarecord_source_map m JOIN metabib.rec_descriptor mrd ON (m.source = mrd.record) '; where_clause TEXT := ' WHERE 1=1 '; mrd_used BOOL := FALSE; sort_desc BOOL := FALSE; core_result RECORD; core_cursor REFCURSOR; core_rel_query TEXT; vis_limit_query TEXT; inner_where_clause TEXT; total_count INT := 0; check_count INT := 0; deleted_count INT := 0; visible_count INT := 0; excluded_count INT := 0; BEGIN core_rel_limit := COALESCE( param_rel_limit, 25000 ); core_chk_limit := COALESCE( param_chk_limit, 1000 ); core_skip_chk := COALESCE( param_skip_chk, 1 ); IF metarecord THEN select_clause := select_clause || ' m.metarecord as id, array_accum(distinct m.source) as records,'; ELSE select_clause := select_clause || ' m.source as id, array_accum(distinct m.source) as records,'; END IF; -- first we need to construct the base query FOR query_part IN SELECT * FROM search.parse_search_args(param_searches) WHERE term_type = 'fts_query' LOOP inner_where_clause := 'index_vector @@ ' || query_part.term; -- RAISE NOTICE 'TSearch Query: %', query_part.term; IF query_part.field_name IS NOT NULL THEN SELECT id INTO mb_field FROM config.metabib_field WHERE field_class = query_part.field_class AND name = query_part.field_name; IF FOUND THEN inner_where_clause := inner_where_clause || ' AND ' || 'field = ' || mb_field; END IF; END IF; -- moving on to the rank ... 
SELECT * INTO query_part FROM search.parse_search_args(param_searches) WHERE term_type = 'fts_rank' AND table_alias = query_part.table_alias; current_rank := query_part.term || ' * ' || query_part.table_alias || '_weight.weight'; -- RAISE NOTICE 'Current rank: %', current_rank; IF query_part.field_name IS NOT NULL THEN SELECT array_accum(distinct id) INTO mb_field_list FROM config.metabib_field WHERE field_class = query_part.field_class AND name = query_part.field_name; ELSE SELECT array_accum(distinct id) INTO mb_field_list FROM config.metabib_field WHERE field_class = query_part.field_class; END IF; FOR rank_adjust IN SELECT * FROM search.relevance_adjustment WHERE active AND field IN ( SELECT * FROM search.explode_array( mb_field_list ) ) LOOP IF NOT rank_adjust.bump_type = ANY (used_ranks) THEN IF rank_adjust.bump_type = 'first_word' THEN SELECT term INTO tmp_text FROM search.parse_search_args(param_searches) WHERE table_alias = query_part.table_alias AND term_type = 'word' ORDER BY id LIMIT 1; tmp_text := query_part.table_alias || '.value ILIKE ' || quote_literal( tmp_text || '%' ); ELSIF rank_adjust.bump_type = 'word_order' THEN SELECT array_to_string( array_accum( term ), '%' ) INTO tmp_text FROM search.parse_search_args(param_searches) WHERE table_alias = query_part.table_alias AND term_type = 'word'; tmp_text := query_part.table_alias || '.value ILIKE ' || quote_literal( '%' || tmp_text || '%' ); ELSIF rank_adjust.bump_type = 'full_match' THEN SELECT array_to_string( array_accum( term ), E'\\s+' ) INTO tmp_text FROM search.parse_search_args(param_searches) WHERE table_alias = query_part.table_alias AND term_type = 'word'; tmp_text := query_part.table_alias || '.value ~ ' || quote_literal( '^' || tmp_text || E'\\W*$' ); END IF; IF tmp_text IS NOT NULL THEN current_rank := current_rank || ' * ( CASE WHEN ' || tmp_text || ' THEN ' || rank_adjust.multiplier || '::REAL ELSE 1.0 END )'; END IF; -- RAISE NOTICE 'Current Weighted Rank: %', current_rank; used_ranks := array_append( used_ranks, rank_adjust.bump_type ); END IF; END LOOP; ranks := array_append( ranks, current_rank ); used_ranks := '{}'; FOR phrase_query_part IN SELECT * FROM search.parse_search_args(param_searches) WHERE term_type = 'phrase' AND table_alias = query_part.table_alias LOOP tmp_text := replace( phrase_query_part.term, '*', E'\\*' ); tmp_text := replace( tmp_text, '?', E'\\?' 
); tmp_text := replace( tmp_text, '+', E'\\+' ); tmp_text := replace( tmp_text, '|', E'\\|' ); tmp_text := replace( tmp_text, '(', E'\\(' ); tmp_text := replace( tmp_text, ')', E'\\)' ); tmp_text := replace( tmp_text, '[', E'\\[' ); tmp_text := replace( tmp_text, ']', E'\\]' ); inner_where_clause := inner_where_clause || ' AND ' || 'value ~* ' || quote_literal( E'(^|\\W+)' || regexp_replace(tmp_text, E'\\s+',E'\\\\s+','g') || E'(\\W+|\$)' ); END LOOP; query_table := search.pick_table(query_part.field_class); from_clause := from_clause || ' JOIN ( SELECT * FROM ' || query_table || ' WHERE ' || inner_where_clause || CASE WHEN core_rel_limit > 0 THEN ' LIMIT ' || core_rel_limit::TEXT ELSE '' END || ' ) AS ' || query_part.table_alias || ' ON ( m.source = ' || query_part.table_alias || '.source )' || ' JOIN config.metabib_field AS ' || query_part.table_alias || '_weight' || ' ON ( ' || query_part.table_alias || '.field = ' || query_part.table_alias || '_weight.id AND ' || query_part.table_alias || '_weight.search_field)'; -- RAISE NOTICE 'FROM clause: %', from_clause; from_alias_array := array_append(from_alias_array, query_part.table_alias); END LOOP; IF param_pref_lang IS NOT NULL AND param_pref_lang_multiplier IS NOT NULL THEN current_rank := ' CASE WHEN mrd.item_lang = ' || quote_literal( param_pref_lang ) || ' THEN ' || param_pref_lang_multiplier || '::REAL ELSE 1.0 END '; --ranks := array_append( ranks, current_rank ); END IF; current_rank := ' AVG( ( (' || array_to_string( ranks, ') + (' ) || ') ) * ' || current_rank || ' ) '; -- RAISE NOTICE 'Ranks: %', current_rank; select_clause := select_clause || current_rank || ' AS rel,'; -- RAISE NOTICE 'SELECT clause: %', select_clause; sort_desc = param_sort_desc; IF param_sort = 'pubdate' THEN tmp_text := '999999'; IF param_sort_desc THEN tmp_text := '0'; END IF; current_rank := $$ ( COALESCE( FIRST (( SELECT SUBSTRING(frp.value FROM E'\\d{4}') FROM metabib.full_rec frp WHERE frp.record = m.source AND frp.tag = '260' AND frp.subfield = 'c' LIMIT 1 )), $$ || quote_literal(tmp_text) || $$ )::INT ) $$; ELSIF param_sort = 'title' THEN tmp_text := 'zzzzzz'; IF param_sort_desc THEN tmp_text := ' '; END IF; current_rank := $$ ( COALESCE( FIRST (( SELECT LTRIM(SUBSTR( frt.value, COALESCE(SUBSTRING(frt.ind2 FROM E'\\d+'),'0')::INT + 1 )) FROM metabib.full_rec frt WHERE frt.record = m.source AND frt.tag = '245' AND frt.subfield = 'a' LIMIT 1 )),$$ || quote_literal(tmp_text) || $$)) $$; ELSIF param_sort = 'author' THEN tmp_text := 'zzzzzz'; IF param_sort_desc THEN tmp_text := ' '; END IF; current_rank := $$ ( COALESCE( FIRST (( SELECT LTRIM(fra.value) FROM metabib.full_rec fra WHERE fra.record = m.source AND fra.tag LIKE '1%' AND fra.subfield = 'a' ORDER BY fra.tag::text::int LIMIT 1 )),$$ || quote_literal(tmp_text) || $$)) $$; ELSIF param_sort = 'create_date' THEN current_rank := $$( FIRST (( SELECT create_date FROM biblio.record_entry rbr WHERE rbr.id = m.source)) )$$; ELSIF param_sort = 'edit_date' THEN current_rank := $$( FIRST (( SELECT edit_date FROM biblio.record_entry rbr WHERE rbr.id = m.source)) )$$; ELSE sort_desc := NOT COALESCE(param_sort_desc, FALSE); END IF; select_clause := select_clause || current_rank || ' AS rank'; -- now add the other qualifiers IF param_audience IS NOT NULL AND array_upper(param_audience, 1) > 0 THEN where_clause = where_clause || $$ AND mrd.audience IN ('$$ || array_to_string(param_audience, $$','$$) || $$') $$; END IF; IF param_language IS NOT NULL AND array_upper(param_language, 1) > 0 THEN where_clause = 
where_clause || $$ AND mrd.item_lang IN ('$$ || array_to_string(param_language, $$','$$) || $$') $$; END IF; IF param_lit_form IS NOT NULL AND array_upper(param_lit_form, 1) > 0 THEN where_clause = where_clause || $$ AND mrd.lit_form IN ('$$ || array_to_string(param_lit_form, $$','$$) || $$') $$; END IF; IF param_types IS NOT NULL AND array_upper(param_types, 1) > 0 THEN where_clause = where_clause || $$ AND mrd.item_type IN ('$$ || array_to_string(param_types, $$','$$) || $$') $$; END IF; IF param_forms IS NOT NULL AND array_upper(param_forms, 1) > 0 THEN where_clause = where_clause || $$ AND mrd.item_form IN ('$$ || array_to_string(param_forms, $$','$$) || $$') $$; END IF; IF param_vformats IS NOT NULL AND array_upper(param_vformats, 1) > 0 THEN where_clause = where_clause || $$ AND mrd.vr_format IN ('$$ || array_to_string(param_vformats, $$','$$) || $$') $$; END IF; core_rel_query := select_clause || from_clause || where_clause || ' GROUP BY 1 ORDER BY 4' || CASE WHEN sort_desc THEN ' DESC' ELSE ' ASC' END || ';'; -- RAISE NOTICE 'Base Query: %', core_rel_query; IF param_depth IS NOT NULL THEN SELECT array_accum(distinct id) INTO search_org_list FROM actor.org_unit_descendants( param_search_ou, param_depth ); ELSE SELECT array_accum(distinct id) INTO search_org_list FROM actor.org_unit_descendants( param_search_ou ); END IF; OPEN core_cursor FOR EXECUTE core_rel_query; LOOP FETCH core_cursor INTO core_result; EXIT WHEN NOT FOUND; IF total_count % 1000 = 0 THEN -- RAISE NOTICE ' % total, % checked so far ... ', total_count, check_count; END IF; IF core_chk_limit > 0 AND total_count - core_skip_chk + 1 >= core_chk_limit THEN total_count := total_count + 1; CONTINUE; END IF; total_count := total_count + 1; CONTINUE WHEN param_skip_chk IS NOT NULL and total_count < param_skip_chk; check_count := check_count + 1; PERFORM 1 FROM biblio.record_entry b WHERE NOT b.deleted AND b.id IN ( SELECT * FROM search.explode_array( core_result.records ) ); IF NOT FOUND THEN -- RAISE NOTICE ' % were all deleted ... ', core_result.records; deleted_count := deleted_count + 1; CONTINUE; END IF; PERFORM 1 FROM biblio.record_entry b JOIN config.bib_source s ON (b.source = s.id) WHERE s.transcendant AND b.id IN ( SELECT * FROM search.explode_array( core_result.records ) ); IF FOUND THEN -- RAISE NOTICE ' % were all transcendant ... ', core_result.records; visible_count := visible_count + 1; current_res.id = core_result.id; current_res.rel = core_result.rel; tmp_int := 1; IF metarecord THEN SELECT COUNT(DISTINCT s.source) INTO tmp_int FROM metabib.metarecord_source_map s WHERE s.metarecord = core_result.id; END IF; IF tmp_int = 1 THEN current_res.record = core_result.records[1]; ELSE current_res.record = NULL; END IF; RETURN NEXT current_res; CONTINUE; END IF; IF param_statuses IS NOT NULL AND array_upper(param_statuses, 1) > 0 THEN PERFORM 1 FROM asset.call_number cn JOIN asset.copy cp ON (cp.call_number = cn.id) WHERE NOT cn.deleted AND NOT cp.deleted AND cp.status IN ( SELECT * FROM search.explode_array( param_statuses ) ) AND cn.record IN ( SELECT * FROM search.explode_array( core_result.records ) ) AND cp.circ_lib IN ( SELECT * FROM search.explode_array( search_org_list ) ) LIMIT 1; IF NOT FOUND THEN -- RAISE NOTICE ' % were all status-excluded ... 
', core_result.records; excluded_count := excluded_count + 1; CONTINUE; END IF; END IF; IF staff IS NULL OR NOT staff THEN PERFORM 1 FROM asset.call_number cn JOIN asset.copy cp ON (cp.call_number = cn.id) JOIN actor.org_unit a ON (cp.circ_lib = a.id) JOIN asset.copy_location cl ON (cp.location = cl.id) JOIN config.copy_status cs ON (cp.status = cs.id) WHERE NOT cn.deleted AND NOT cp.deleted AND cs.holdable AND cl.opac_visible AND cp.opac_visible AND a.opac_visible AND cp.circ_lib IN ( SELECT * FROM search.explode_array( search_org_list ) ) AND cn.record IN ( SELECT * FROM search.explode_array( core_result.records ) ) LIMIT 1; IF NOT FOUND THEN -- RAISE NOTICE ' % were all visibility-excluded ... ', core_result.records; excluded_count := excluded_count + 1; CONTINUE; END IF; ELSE PERFORM 1 FROM asset.call_number cn JOIN asset.copy cp ON (cp.call_number = cn.id) JOIN actor.org_unit a ON (cp.circ_lib = a.id) JOIN asset.copy_location cl ON (cp.location = cl.id) JOIN config.copy_status cs ON (cp.status = cs.id) WHERE NOT cn.deleted AND NOT cp.deleted AND cp.circ_lib IN ( SELECT * FROM search.explode_array( search_org_list ) ) AND cn.record IN ( SELECT * FROM search.explode_array( core_result.records ) ) LIMIT 1; IF NOT FOUND THEN PERFORM 1 FROM asset.call_number cn WHERE cn.record IN ( SELECT * FROM search.explode_array( core_result.records ) ) LIMIT 1; IF FOUND THEN -- RAISE NOTICE ' % were all visibility-excluded ... ', core_result.records; excluded_count := excluded_count + 1; CONTINUE; END IF; END IF; END IF; visible_count := visible_count + 1; current_res.id = core_result.id; current_res.rel = core_result.rel; tmp_int := 1; IF metarecord THEN SELECT COUNT(DISTINCT s.source) INTO tmp_int FROM metabib.metarecord_source_map s WHERE s.metarecord = core_result.id; END IF; IF tmp_int = 1 THEN current_res.record = core_result.records[1]; ELSE current_res.record = NULL; END IF; RETURN NEXT current_res; IF visible_count % 1000 = 0 THEN -- RAISE NOTICE ' % visible so far ... ', visible_count; END IF; END LOOP; current_res.id = NULL; current_res.rel = NULL; current_res.record = NULL; current_res.total = total_count; current_res.checked = check_count; current_res.deleted = deleted_count; current_res.visible = visible_count; current_res.excluded = excluded_count; CLOSE core_cursor; RETURN NEXT current_res; END;
DECLARE current_res search.search_result%ROWTYPE; query_part search.search_args%ROWTYPE; phrase_query_part search.search_args%ROWTYPE; rank_adjust_id INT; core_rel_limit INT; core_chk_limit INT; core_skip_chk INT; rank_adjust search.relevance_adjustment%ROWTYPE; query_table TEXT; tmp_text TEXT; tmp_int INT; current_rank TEXT; ranks TEXT[] := '{}'; query_table_alias TEXT; from_alias_array TEXT[] := '{}'; used_ranks TEXT[] := '{}'; mb_field INT; mb_field_list INT[]; search_org_list INT[]; select_clause TEXT := 'SELECT'; from_clause TEXT := ' FROM metabib.metarecord_source_map m JOIN metabib.rec_descriptor mrd ON (m.source = mrd.record) '; where_clause TEXT := ' WHERE 1=1 '; mrd_used BOOL := FALSE; sort_desc BOOL := FALSE; core_result RECORD; core_cursor REFCURSOR; core_rel_query TEXT; vis_limit_query TEXT; inner_where_clause TEXT; total_count INT := 0; check_count INT := 0; deleted_count INT := 0; visible_count INT := 0; excluded_count INT := 0; BEGIN core_rel_limit := COALESCE( param_rel_limit, 25000 ); core_chk_limit := COALESCE( param_chk_limit, 1000 ); core_skip_chk := COALESCE( param_skip_chk, 1 ); IF metarecord THEN select_clause := select_clause || ' m.metarecord as id, array_accum(distinct m.source) as records,'; ELSE select_clause := select_clause || ' m.source as id, array_accum(distinct m.source) as records,'; END IF; -- first we need to construct the base query FOR query_part IN SELECT * FROM search.parse_search_args(param_searches) WHERE term_type = 'fts_query' LOOP inner_where_clause := 'index_vector @@ ' || query_part.term; IF query_part.field_name IS NOT NULL THEN SELECT id INTO mb_field FROM config.metabib_field WHERE field_class = query_part.field_class AND name = query_part.field_name; IF FOUND THEN inner_where_clause := inner_where_clause || ' AND ' || 'field = ' || mb_field; END IF; END IF; -- moving on to the rank ... 
SELECT * INTO query_part FROM search.parse_search_args(param_searches) WHERE term_type = 'fts_rank' AND table_alias = query_part.table_alias; current_rank := query_part.term || ' * ' || query_part.table_alias || '_weight.weight'; IF query_part.field_name IS NOT NULL THEN SELECT array_accum(distinct id) INTO mb_field_list FROM config.metabib_field WHERE field_class = query_part.field_class AND name = query_part.field_name; ELSE SELECT array_accum(distinct id) INTO mb_field_list FROM config.metabib_field WHERE field_class = query_part.field_class; END IF; FOR rank_adjust IN SELECT * FROM search.relevance_adjustment WHERE active AND field IN ( SELECT * FROM search.explode_array( mb_field_list ) ) LOOP IF NOT rank_adjust.bump_type = ANY (used_ranks) THEN IF rank_adjust.bump_type = 'first_word' THEN SELECT term INTO tmp_text FROM search.parse_search_args(param_searches) WHERE table_alias = query_part.table_alias AND term_type = 'word' ORDER BY id LIMIT 1; tmp_text := query_part.table_alias || '.value ILIKE ' || quote_literal( tmp_text || '%' ); ELSIF rank_adjust.bump_type = 'word_order' THEN SELECT array_to_string( array_accum( term ), '%' ) INTO tmp_text FROM search.parse_search_args(param_searches) WHERE table_alias = query_part.table_alias AND term_type = 'word'; tmp_text := query_part.table_alias || '.value ILIKE ' || quote_literal( '%' || tmp_text || '%' ); ELSIF rank_adjust.bump_type = 'full_match' THEN SELECT array_to_string( array_accum( term ), E'\\s+' ) INTO tmp_text FROM search.parse_search_args(param_searches) WHERE table_alias = query_part.table_alias AND term_type = 'word'; tmp_text := query_part.table_alias || '.value ~ ' || quote_literal( '^' || tmp_text || E'\\W*$' ); END IF; IF tmp_text IS NOT NULL THEN current_rank := current_rank || ' * ( CASE WHEN ' || tmp_text || ' THEN ' || rank_adjust.multiplier || '::REAL ELSE 1.0 END )'; END IF; used_ranks := array_append( used_ranks, rank_adjust.bump_type ); END IF; END LOOP; ranks := array_append( ranks, current_rank ); used_ranks := '{}'; FOR phrase_query_part IN SELECT * FROM search.parse_search_args(param_searches) WHERE term_type = 'phrase' AND table_alias = query_part.table_alias LOOP tmp_text := replace( phrase_query_part.term, '*', E'\\*' ); tmp_text := replace( tmp_text, '?', E'\\?' 
); tmp_text := replace( tmp_text, '+', E'\\+' ); tmp_text := replace( tmp_text, '|', E'\\|' ); tmp_text := replace( tmp_text, '(', E'\\(' ); tmp_text := replace( tmp_text, ')', E'\\)' ); tmp_text := replace( tmp_text, '[', E'\\[' ); tmp_text := replace( tmp_text, ']', E'\\]' ); inner_where_clause := inner_where_clause || ' AND ' || 'value ~* ' || quote_literal( E'(^|\\W+)' || regexp_replace(tmp_text, E'\\s+',E'\\\\s+','g') || E'(\\W+|\$)' ); END LOOP; query_table := search.pick_table(query_part.field_class); from_clause := from_clause || ' JOIN ( SELECT * FROM ' || query_table || ' WHERE ' || inner_where_clause || CASE WHEN core_rel_limit > 0 THEN ' LIMIT ' || core_rel_limit::TEXT ELSE '' END || ' ) AS ' || query_part.table_alias || ' ON ( m.source = ' || query_part.table_alias || '.source )' || ' JOIN config.metabib_field AS ' || query_part.table_alias || '_weight' || ' ON ( ' || query_part.table_alias || '.field = ' || query_part.table_alias || '_weight.id AND ' || query_part.table_alias || '_weight.search_field)'; from_alias_array := array_append(from_alias_array, query_part.table_alias); END LOOP; IF param_pref_lang IS NOT NULL AND param_pref_lang_multiplier IS NOT NULL THEN current_rank := ' CASE WHEN mrd.item_lang = ' || quote_literal( param_pref_lang ) || ' THEN ' || param_pref_lang_multiplier || '::REAL ELSE 1.0 END '; -- ranks := array_append( ranks, current_rank ); END IF; current_rank := ' AVG( ( (' || array_to_string( ranks, ') + (' ) || ') ) * ' || current_rank || ' ) '; select_clause := select_clause || current_rank || ' AS rel,'; sort_desc = param_sort_desc; IF param_sort = 'pubdate' THEN tmp_text := '999999'; IF param_sort_desc THEN tmp_text := '0'; END IF; current_rank := $$ COALESCE( FIRST(NULLIF(REGEXP_REPLACE(mrd.date1, E'\\D+', '9', 'g'),'')), $$ || quote_literal(tmp_text) || $$ )::INT $$; ELSIF param_sort = 'title' THEN tmp_text := 'zzzzzz'; IF param_sort_desc THEN tmp_text := ' '; END IF; current_rank := $$ ( COALESCE( FIRST (( SELECT LTRIM(SUBSTR( frt.value, COALESCE(SUBSTRING(frt.ind2 FROM E'\\d+'),'0')::INT + 1 )) FROM metabib.full_rec frt WHERE frt.record = m.source AND frt.tag = '245' AND frt.subfield = 'a' LIMIT 1 )),$$ || quote_literal(tmp_text) || $$)) $$; ELSIF param_sort = 'author' THEN tmp_text := 'zzzzzz'; IF param_sort_desc THEN tmp_text := ' '; END IF; current_rank := $$ ( COALESCE( FIRST (( SELECT LTRIM(fra.value) FROM metabib.full_rec fra WHERE fra.record = m.source AND fra.tag LIKE '1%' AND fra.subfield = 'a' ORDER BY fra.tag::text::int LIMIT 1 )),$$ || quote_literal(tmp_text) || $$)) $$; ELSIF param_sort = 'create_date' THEN current_rank := $$( FIRST (( SELECT create_date FROM biblio.record_entry rbr WHERE rbr.id = m.source)) )$$; ELSIF param_sort = 'edit_date' THEN current_rank := $$( FIRST (( SELECT edit_date FROM biblio.record_entry rbr WHERE rbr.id = m.source)) )$$; ELSE sort_desc := NOT COALESCE(param_sort_desc, FALSE); END IF; select_clause := select_clause || current_rank || ' AS rank'; -- now add the other qualifiers IF param_audience IS NOT NULL AND array_upper(param_audience, 1) > 0 THEN where_clause = where_clause || $$ AND mrd.audience IN ('$$ || array_to_string(param_audience, $$','$$) || $$') $$; END IF; IF param_language IS NOT NULL AND array_upper(param_language, 1) > 0 THEN where_clause = where_clause || $$ AND mrd.item_lang IN ('$$ || array_to_string(param_language, $$','$$) || $$') $$; END IF; IF param_lit_form IS NOT NULL AND array_upper(param_lit_form, 1) > 0 THEN where_clause = where_clause || $$ AND mrd.lit_form IN ('$$ || 
array_to_string(param_lit_form, $$','$$) || $$') $$; END IF; IF param_types IS NOT NULL AND array_upper(param_types, 1) > 0 THEN where_clause = where_clause || $$ AND mrd.item_type IN ('$$ || array_to_string(param_types, $$','$$) || $$') $$; END IF; IF param_forms IS NOT NULL AND array_upper(param_forms, 1) > 0 THEN where_clause = where_clause || $$ AND mrd.item_form IN ('$$ || array_to_string(param_forms, $$','$$) || $$') $$; END IF; IF param_vformats IS NOT NULL AND array_upper(param_vformats, 1) > 0 THEN where_clause = where_clause || $$ AND mrd.vr_format IN ('$$ || array_to_string(param_vformats, $$','$$) || $$') $$; END IF; IF param_bib_level IS NOT NULL AND array_upper(param_bib_level, 1) > 0 THEN where_clause = where_clause || $$ AND mrd.bib_level IN ('$$ || array_to_string(param_bib_level, $$','$$) || $$') $$; END IF; IF param_before IS NOT NULL AND param_before <> '' THEN where_clause = where_clause || $$ AND mrd.date1 <= $$ || quote_literal(param_before) || ' '; END IF; IF param_after IS NOT NULL AND param_after <> '' THEN where_clause = where_clause || $$ AND mrd.date1 >= $$ || quote_literal(param_after) || ' '; END IF; IF param_during IS NOT NULL AND param_during <> '' THEN where_clause = where_clause || $$ AND $$ || quote_literal(param_during) || $$ BETWEEN mrd.date1 AND mrd.date2 $$; END IF; IF param_between IS NOT NULL AND array_upper(param_between, 1) > 1 THEN where_clause = where_clause || $$ AND mrd.date1 BETWEEN '$$ || array_to_string(param_between, $$' AND '$$) || $$' $$; END IF; core_rel_query := select_clause || from_clause || where_clause || ' GROUP BY 1 ORDER BY 4' || CASE WHEN sort_desc THEN ' DESC' ELSE ' ASC' END || ';'; --RAISE NOTICE 'Base Query: %', core_rel_query; IF param_search_ou > 0 THEN IF param_depth IS NOT NULL THEN SELECT array_accum(distinct id) INTO search_org_list FROM actor.org_unit_descendants( param_search_ou, param_depth ); ELSE SELECT array_accum(distinct id) INTO search_org_list FROM actor.org_unit_descendants( param_search_ou ); END IF; ELSIF param_search_ou < 0 THEN SELECT array_accum(distinct org_unit) INTO search_org_list FROM actor.org_lasso_map WHERE lasso = -param_search_ou; ELSIF param_search_ou = 0 THEN -- reserved for user lassos (ou_buckets/type='lasso') with ID passed in depth ... hack? sure. END IF; OPEN core_cursor FOR EXECUTE core_rel_query; LOOP FETCH core_cursor INTO core_result; EXIT WHEN NOT FOUND; IF total_count % 1000 = 0 THEN -- RAISE NOTICE ' % total, % checked so far ... ', total_count, check_count; END IF; IF core_chk_limit > 0 AND total_count - core_skip_chk + 1 >= core_chk_limit THEN total_count := total_count + 1; CONTINUE; END IF; total_count := total_count + 1; CONTINUE WHEN param_skip_chk IS NOT NULL and total_count < param_skip_chk; check_count := check_count + 1; PERFORM 1 FROM biblio.record_entry b WHERE NOT b.deleted AND b.id IN ( SELECT * FROM search.explode_array( core_result.records ) ); IF NOT FOUND THEN -- RAISE NOTICE ' % were all deleted ... ', core_result.records; deleted_count := deleted_count + 1; CONTINUE; END IF; PERFORM 1 FROM biblio.record_entry b JOIN config.bib_source s ON (b.source = s.id) WHERE s.transcendant AND b.id IN ( SELECT * FROM search.explode_array( core_result.records ) ); IF FOUND THEN -- RAISE NOTICE ' % were all transcendant ... 
', core_result.records; visible_count := visible_count + 1; current_res.id = core_result.id; current_res.rel = core_result.rel; tmp_int := 1; IF metarecord THEN SELECT COUNT(DISTINCT s.source) INTO tmp_int FROM metabib.metarecord_source_map s WHERE s.metarecord = core_result.id; END IF; IF tmp_int = 1 THEN current_res.record = core_result.records[1]; ELSE current_res.record = NULL; END IF; RETURN NEXT current_res; CONTINUE; END IF; IF param_statuses IS NOT NULL AND array_upper(param_statuses, 1) > 0 THEN PERFORM 1 FROM asset.call_number cn JOIN asset.copy cp ON (cp.call_number = cn.id) WHERE NOT cn.deleted AND NOT cp.deleted AND cp.status IN ( SELECT * FROM search.explode_array( param_statuses ) ) AND cn.record IN ( SELECT * FROM search.explode_array( core_result.records ) ) AND cp.circ_lib IN ( SELECT * FROM search.explode_array( search_org_list ) ) LIMIT 1; IF NOT FOUND THEN -- RAISE NOTICE ' % were all status-excluded ... ', core_result.records; excluded_count := excluded_count + 1; CONTINUE; END IF; END IF; IF param_locations IS NOT NULL AND array_upper(param_locations, 1) > 0 THEN PERFORM 1 FROM asset.call_number cn JOIN asset.copy cp ON (cp.call_number = cn.id) WHERE NOT cn.deleted AND NOT cp.deleted AND cp.location IN ( SELECT * FROM search.explode_array( param_locations ) ) AND cn.record IN ( SELECT * FROM search.explode_array( core_result.records ) ) AND cp.circ_lib IN ( SELECT * FROM search.explode_array( search_org_list ) ) LIMIT 1; IF NOT FOUND THEN -- RAISE NOTICE ' % were all copy_location-excluded ... ', core_result.records; excluded_count := excluded_count + 1; CONTINUE; END IF; END IF; IF staff IS NULL OR NOT staff THEN PERFORM 1 FROM asset.call_number cn JOIN asset.copy cp ON (cp.call_number = cn.id) JOIN actor.org_unit a ON (cp.circ_lib = a.id) JOIN asset.copy_location cl ON (cp.location = cl.id) JOIN config.copy_status cs ON (cp.status = cs.id) WHERE NOT cn.deleted AND NOT cp.deleted AND cs.opac_visible AND cl.opac_visible AND cp.opac_visible AND a.opac_visible AND cp.circ_lib IN ( SELECT * FROM search.explode_array( search_org_list ) ) AND cn.record IN ( SELECT * FROM search.explode_array( core_result.records ) ) LIMIT 1; IF NOT FOUND THEN -- RAISE NOTICE ' % were all visibility-excluded ... ', core_result.records; excluded_count := excluded_count + 1; CONTINUE; END IF; ELSE PERFORM 1 FROM asset.call_number cn JOIN asset.copy cp ON (cp.call_number = cn.id) JOIN actor.org_unit a ON (cp.circ_lib = a.id) JOIN asset.copy_location cl ON (cp.location = cl.id) WHERE NOT cn.deleted AND NOT cp.deleted AND cp.circ_lib IN ( SELECT * FROM search.explode_array( search_org_list ) ) AND cn.record IN ( SELECT * FROM search.explode_array( core_result.records ) ) LIMIT 1; IF NOT FOUND THEN PERFORM 1 FROM asset.call_number cn WHERE cn.record IN ( SELECT * FROM search.explode_array( core_result.records ) ) LIMIT 1; IF FOUND THEN -- RAISE NOTICE ' % were all visibility-excluded ... ', core_result.records; excluded_count := excluded_count + 1; CONTINUE; END IF; END IF; END IF; visible_count := visible_count + 1; current_res.id = core_result.id; current_res.rel = core_result.rel; tmp_int := 1; IF metarecord THEN SELECT COUNT(DISTINCT s.source) INTO tmp_int FROM metabib.metarecord_source_map s WHERE s.metarecord = core_result.id; END IF; IF tmp_int = 1 THEN current_res.record = core_result.records[1]; ELSE current_res.record = NULL; END IF; RETURN NEXT current_res; IF visible_count % 1000 = 0 THEN -- RAISE NOTICE ' % visible so far ... 
', visible_count; END IF; END LOOP; current_res.id = NULL; current_res.rel = NULL; current_res.record = NULL; current_res.total = total_count; current_res.checked = check_count; current_res.deleted = deleted_count; current_res.visible = visible_count; current_res.excluded = excluded_count; CLOSE core_cursor; RETURN NEXT current_res; END;
DECLARE current_res search.search_result%ROWTYPE; query_part search.search_args%ROWTYPE; phrase_query_part search.search_args%ROWTYPE; rank_adjust_id INT; core_rel_limit INT; core_chk_limit INT; core_skip_chk INT; rank_adjust search.relevance_adjustment%ROWTYPE; query_table TEXT; tmp_text TEXT; tmp_int INT; current_rank TEXT; ranks TEXT[] := '{}'; query_table_alias TEXT; from_alias_array TEXT[] := '{}'; used_ranks TEXT[] := '{}'; mb_field INT; mb_field_list INT[]; search_org_list INT[]; select_clause TEXT := 'SELECT'; from_clause TEXT := ' FROM metabib.metarecord_source_map m JOIN metabib.rec_descriptor mrd ON (m.source = mrd.record) '; where_clause TEXT := ' WHERE 1=1 '; mrd_used BOOL := FALSE; sort_desc BOOL := FALSE; core_result RECORD; core_cursor REFCURSOR; core_rel_query TEXT; vis_limit_query TEXT; inner_where_clause TEXT; total_count INT := 0; check_count INT := 0; deleted_count INT := 0; visible_count INT := 0; excluded_count INT := 0; BEGIN core_rel_limit := COALESCE( param_rel_limit, 25000 ); core_chk_limit := COALESCE( param_chk_limit, 1000 ); core_skip_chk := COALESCE( param_skip_chk, 1 ); IF metarecord THEN select_clause := select_clause || ' m.metarecord as id, array_accum(distinct m.source) as records,'; ELSE select_clause := select_clause || ' m.source as id, array_accum(distinct m.source) as records,'; END IF; -- first we need to construct the base query FOR query_part IN SELECT * FROM search.parse_search_args(param_searches) WHERE term_type = 'fts_query' LOOP inner_where_clause := 'index_vector @@ ' || query_part.term; IF query_part.field_name IS NOT NULL THEN SELECT id INTO mb_field FROM config.metabib_field WHERE field_class = query_part.field_class AND name = query_part.field_name; IF FOUND THEN inner_where_clause := inner_where_clause || ' AND ' || 'field = ' || mb_field; END IF; END IF; -- moving on to the rank ... 
SELECT * INTO query_part FROM search.parse_search_args(param_searches) WHERE term_type = 'fts_rank' AND table_alias = query_part.table_alias; current_rank := query_part.term || ' * ' || query_part.table_alias || '_weight.weight'; IF query_part.field_name IS NOT NULL THEN SELECT array_accum(distinct id) INTO mb_field_list FROM config.metabib_field WHERE field_class = query_part.field_class AND name = query_part.field_name; ELSE SELECT array_accum(distinct id) INTO mb_field_list FROM config.metabib_field WHERE field_class = query_part.field_class; END IF; FOR rank_adjust IN SELECT * FROM search.relevance_adjustment WHERE active AND field IN ( SELECT * FROM search.explode_array( mb_field_list ) ) LOOP IF NOT rank_adjust.bump_type = ANY (used_ranks) THEN IF rank_adjust.bump_type = 'first_word' THEN SELECT term INTO tmp_text FROM search.parse_search_args(param_searches) WHERE table_alias = query_part.table_alias AND term_type = 'word' ORDER BY id LIMIT 1; tmp_text := query_part.table_alias || '.value ILIKE ' || quote_literal( tmp_text || '%' ); ELSIF rank_adjust.bump_type = 'word_order' THEN SELECT array_to_string( array_accum( term ), '%' ) INTO tmp_text FROM search.parse_search_args(param_searches) WHERE table_alias = query_part.table_alias AND term_type = 'word'; tmp_text := query_part.table_alias || '.value ILIKE ' || quote_literal( '%' || tmp_text || '%' ); ELSIF rank_adjust.bump_type = 'full_match' THEN SELECT array_to_string( array_accum( term ), E'\\s+' ) INTO tmp_text FROM search.parse_search_args(param_searches) WHERE table_alias = query_part.table_alias AND term_type = 'word'; tmp_text := query_part.table_alias || '.value ~ ' || quote_literal( '^' || tmp_text || E'\\W*$' ); END IF; IF tmp_text IS NOT NULL THEN current_rank := current_rank || ' * ( CASE WHEN ' || tmp_text || ' THEN ' || rank_adjust.multiplier || '::REAL ELSE 1.0 END )'; END IF; used_ranks := array_append( used_ranks, rank_adjust.bump_type ); END IF; END LOOP; ranks := array_append( ranks, current_rank ); used_ranks := '{}'; FOR phrase_query_part IN SELECT * FROM search.parse_search_args(param_searches) WHERE term_type = 'phrase' AND table_alias = query_part.table_alias LOOP tmp_text := replace( phrase_query_part.term, '*', E'\\*' ); tmp_text := replace( tmp_text, '?', E'\\?' 
); tmp_text := replace( tmp_text, '+', E'\\+' ); tmp_text := replace( tmp_text, '|', E'\\|' ); tmp_text := replace( tmp_text, '(', E'\\(' ); tmp_text := replace( tmp_text, ')', E'\\)' ); tmp_text := replace( tmp_text, '[', E'\\[' ); tmp_text := replace( tmp_text, ']', E'\\]' ); inner_where_clause := inner_where_clause || ' AND ' || 'value ~* ' || quote_literal( E'(^|\\W+)' || regexp_replace(tmp_text, E'\\s+',E'\\\\s+','g') || E'(\\W+|\$)' ); END LOOP; query_table := search.pick_table(query_part.field_class); from_clause := from_clause || ' JOIN ( SELECT * FROM ' || query_table || ' WHERE ' || inner_where_clause || CASE WHEN core_rel_limit > 0 THEN ' LIMIT ' || core_rel_limit::TEXT ELSE '' END || ' ) AS ' || query_part.table_alias || ' ON ( m.source = ' || query_part.table_alias || '.source )' || ' JOIN config.metabib_field AS ' || query_part.table_alias || '_weight' || ' ON ( ' || query_part.table_alias || '.field = ' || query_part.table_alias || '_weight.id AND ' || query_part.table_alias || '_weight.search_field)'; from_alias_array := array_append(from_alias_array, query_part.table_alias); END LOOP; IF param_pref_lang IS NOT NULL AND param_pref_lang_multiplier IS NOT NULL THEN current_rank := ' CASE WHEN mrd.item_lang = ' || quote_literal( param_pref_lang ) || ' THEN ' || param_pref_lang_multiplier || '::REAL ELSE 1.0 END '; --ranks := array_append( ranks, current_rank ); END IF; current_rank := ' AVG( ( (' || array_to_string( ranks, ') + (' ) || ') ) * ' || current_rank || ' ) '; select_clause := select_clause || current_rank || ' AS rel,'; sort_desc = param_sort_desc; IF param_sort = 'pubdate' THEN tmp_text := '999999'; IF param_sort_desc THEN tmp_text := '0'; END IF; current_rank := $$ ( COALESCE( FIRST (( SELECT SUBSTRING(frp.value FROM E'\\d{4}') FROM metabib.full_rec frp WHERE frp.record = m.source AND frp.tag = '260' AND frp.subfield = 'c' LIMIT 1 )), $$ || quote_literal(tmp_text) || $$ )::INT ) $$; ELSIF param_sort = 'title' THEN tmp_text := 'zzzzzz'; IF param_sort_desc THEN tmp_text := ' '; END IF; current_rank := $$ ( COALESCE( FIRST (( SELECT LTRIM(SUBSTR( frt.value, COALESCE(SUBSTRING(frt.ind2 FROM E'\\d+'),'0')::INT + 1 )) FROM metabib.full_rec frt WHERE frt.record = m.source AND frt.tag = '245' AND frt.subfield = 'a' LIMIT 1 )),$$ || quote_literal(tmp_text) || $$)) $$; ELSIF param_sort = 'author' THEN tmp_text := 'zzzzzz'; IF param_sort_desc THEN tmp_text := ' '; END IF; current_rank := $$ ( COALESCE( FIRST (( SELECT LTRIM(fra.value) FROM metabib.full_rec fra WHERE fra.record = m.source AND fra.tag LIKE '1%' AND fra.subfield = 'a' ORDER BY fra.tag::text::int LIMIT 1 )),$$ || quote_literal(tmp_text) || $$)) $$; ELSIF param_sort = 'create_date' THEN current_rank := $$( FIRST (( SELECT create_date FROM biblio.record_entry rbr WHERE rbr.id = m.source)) )$$; ELSIF param_sort = 'edit_date' THEN current_rank := $$( FIRST (( SELECT edit_date FROM biblio.record_entry rbr WHERE rbr.id = m.source)) )$$; ELSE sort_desc := NOT COALESCE(param_sort_desc, FALSE); END IF; select_clause := select_clause || current_rank || ' AS rank'; -- now add the other qualifiers IF param_audience IS NOT NULL AND array_upper(param_audience, 1) > 0 THEN where_clause = where_clause || $$ AND mrd.audience IN ('$$ || array_to_string(param_audience, $$','$$) || $$') $$; END IF; IF param_language IS NOT NULL AND array_upper(param_language, 1) > 0 THEN where_clause = where_clause || $$ AND mrd.item_lang IN ('$$ || array_to_string(param_language, $$','$$) || $$') $$; END IF; IF param_lit_form IS NOT NULL AND 
array_upper(param_lit_form, 1) > 0 THEN where_clause = where_clause || $$ AND mrd.lit_form IN ('$$ || array_to_string(param_lit_form, $$','$$) || $$') $$; END IF; IF param_types IS NOT NULL AND array_upper(param_types, 1) > 0 THEN where_clause = where_clause || $$ AND mrd.item_type IN ('$$ || array_to_string(param_types, $$','$$) || $$') $$; END IF; IF param_forms IS NOT NULL AND array_upper(param_forms, 1) > 0 THEN where_clause = where_clause || $$ AND mrd.item_form IN ('$$ || array_to_string(param_forms, $$','$$) || $$') $$; END IF; IF param_vformats IS NOT NULL AND array_upper(param_vformats, 1) > 0 THEN where_clause = where_clause || $$ AND mrd.vr_format IN ('$$ || array_to_string(param_vformats, $$','$$) || $$') $$; END IF; core_rel_query := select_clause || from_clause || where_clause || ' GROUP BY 1 ORDER BY 4' || CASE WHEN sort_desc THEN ' DESC' ELSE ' ASC' END || ';'; --RAISE NOTICE 'Base Query: %', core_rel_query; IF param_depth IS NOT NULL THEN SELECT array_accum(distinct id) INTO search_org_list FROM actor.org_unit_descendants( param_search_ou, param_depth ); ELSE SELECT array_accum(distinct id) INTO search_org_list FROM actor.org_unit_descendants( param_search_ou ); END IF; OPEN core_cursor FOR EXECUTE core_rel_query; LOOP FETCH core_cursor INTO core_result; EXIT WHEN NOT FOUND; IF total_count % 1000 = 0 THEN -- RAISE NOTICE ' % total, % checked so far ... ', total_count, check_count; END IF; IF core_chk_limit > 0 AND total_count - core_skip_chk + 1 >= core_chk_limit THEN total_count := total_count + 1; CONTINUE; END IF; total_count := total_count + 1; CONTINUE WHEN param_skip_chk IS NOT NULL and total_count < param_skip_chk; check_count := check_count + 1; PERFORM 1 FROM biblio.record_entry b WHERE NOT b.deleted AND b.id IN ( SELECT * FROM search.explode_array( core_result.records ) ); IF NOT FOUND THEN -- RAISE NOTICE ' % were all deleted ... ', core_result.records; deleted_count := deleted_count + 1; CONTINUE; END IF; PERFORM 1 FROM biblio.record_entry b JOIN config.bib_source s ON (b.source = s.id) WHERE s.transcendant AND b.id IN ( SELECT * FROM search.explode_array( core_result.records ) ); IF FOUND THEN -- RAISE NOTICE ' % were all transcendant ... ', core_result.records; visible_count := visible_count + 1; current_res.id = core_result.id; current_res.rel = core_result.rel; tmp_int := 1; IF metarecord THEN SELECT COUNT(DISTINCT s.source) INTO tmp_int FROM metabib.metarecord_source_map s WHERE s.metarecord = core_result.id; END IF; IF tmp_int = 1 THEN current_res.record = core_result.records[1]; ELSE current_res.record = NULL; END IF; RETURN NEXT current_res; CONTINUE; END IF; IF param_statuses IS NOT NULL AND array_upper(param_statuses, 1) > 0 THEN PERFORM 1 FROM asset.call_number cn JOIN asset.copy cp ON (cp.call_number = cn.id) WHERE NOT cn.deleted AND NOT cp.deleted AND cp.status IN ( SELECT * FROM search.explode_array( param_statuses ) ) AND cn.record IN ( SELECT * FROM search.explode_array( core_result.records ) ) AND cp.circ_lib IN ( SELECT * FROM search.explode_array( search_org_list ) ) LIMIT 1; IF NOT FOUND THEN -- RAISE NOTICE ' % were all status-excluded ... 
', core_result.records; excluded_count := excluded_count + 1; CONTINUE; END IF; END IF; IF param_locations IS NOT NULL AND array_upper(param_locations, 1) > 0 THEN PERFORM 1 FROM asset.call_number cn JOIN asset.copy cp ON (cp.call_number = cn.id) WHERE NOT cn.deleted AND NOT cp.deleted AND cp.location IN ( SELECT * FROM search.explode_array( param_locations ) ) AND cn.record IN ( SELECT * FROM search.explode_array( core_result.records ) ) AND cp.circ_lib IN ( SELECT * FROM search.explode_array( search_org_list ) ) LIMIT 1; IF NOT FOUND THEN -- RAISE NOTICE ' % were all copy_location-excluded ... ', core_result.records; excluded_count := excluded_count + 1; CONTINUE; END IF; END IF; IF staff IS NULL OR NOT staff THEN PERFORM 1 FROM asset.call_number cn JOIN asset.copy cp ON (cp.call_number = cn.id) JOIN actor.org_unit a ON (cp.circ_lib = a.id) JOIN asset.copy_location cl ON (cp.location = cl.id) JOIN config.copy_status cs ON (cp.status = cs.id) WHERE NOT cn.deleted AND NOT cp.deleted AND cs.holdable AND cl.opac_visible AND cp.opac_visible AND a.opac_visible AND cp.circ_lib IN ( SELECT * FROM search.explode_array( search_org_list ) ) AND cn.record IN ( SELECT * FROM search.explode_array( core_result.records ) ) LIMIT 1; IF NOT FOUND THEN -- RAISE NOTICE ' % were all visibility-excluded ... ', core_result.records; excluded_count := excluded_count + 1; CONTINUE; END IF; ELSE PERFORM 1 FROM asset.call_number cn JOIN asset.copy cp ON (cp.call_number = cn.id) JOIN actor.org_unit a ON (cp.circ_lib = a.id) JOIN asset.copy_location cl ON (cp.location = cl.id) JOIN config.copy_status cs ON (cp.status = cs.id) WHERE NOT cn.deleted AND NOT cp.deleted AND cp.circ_lib IN ( SELECT * FROM search.explode_array( search_org_list ) ) AND cn.record IN ( SELECT * FROM search.explode_array( core_result.records ) ) LIMIT 1; IF NOT FOUND THEN PERFORM 1 FROM asset.call_number cn WHERE cn.record IN ( SELECT * FROM search.explode_array( core_result.records ) ) LIMIT 1; IF FOUND THEN -- RAISE NOTICE ' % were all visibility-excluded ... ', core_result.records; excluded_count := excluded_count + 1; CONTINUE; END IF; END IF; END IF; visible_count := visible_count + 1; current_res.id = core_result.id; current_res.rel = core_result.rel; tmp_int := 1; IF metarecord THEN SELECT COUNT(DISTINCT s.source) INTO tmp_int FROM metabib.metarecord_source_map s WHERE s.metarecord = core_result.id; END IF; IF tmp_int = 1 THEN current_res.record = core_result.records[1]; ELSE current_res.record = NULL; END IF; RETURN NEXT current_res; IF visible_count % 1000 = 0 THEN -- RAISE NOTICE ' % visible so far ... ', visible_count; END IF; END LOOP; current_res.id = NULL; current_res.rel = NULL; current_res.record = NULL; current_res.total = total_count; current_res.checked = check_count; current_res.deleted = deleted_count; current_res.visible = visible_count; current_res.excluded = excluded_count; CLOSE core_cursor; RETURN NEXT current_res; END;
F-Key | Name | Type | Description |
---|---|---|---|
id | bigint | ||
creator | bigint | ||
create_date | timestamp with time zone | ||
editor | bigint | ||
edit_date | timestamp with time zone | ||
record | bigint | ||
owning_lib | integer | ||
label | text | ||
deleted | boolean | ||
create_date_day | date | ||
edit_date_day | date | ||
create_date_hour | timestamp with time zone | ||
edit_date_hour | timestamp with time zone | ||
item_lang | text | ||
item_type | text | ||
item_form | text |
SELECT cn.id , cn.creator , cn.create_date , cn.editor , cn.edit_date , cn.record , cn.owning_lib , cn.label , cn.deleted , (cn.create_date)::date AS create_date_day , (cn.edit_date)::date AS edit_date_day , date_trunc ('hour'::text , cn.create_date ) AS create_date_hour , date_trunc ('hour'::text , cn.edit_date ) AS edit_date_hour , rd.item_lang , rd.item_type , rd.item_form FROM (asset.call_number cn JOIN metabib.rec_descriptor rd ON ( (rd.record = cn.record) ) );
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigint | ||
usr | integer | ||
xact_start | timestamp with time zone | ||
xact_finish | timestamp with time zone | ||
target_copy | bigint | ||
circ_lib | integer | ||
circ_staff | integer | ||
checkin_staff | integer | ||
checkin_lib | integer | ||
renewal_remaining | integer | ||
due_date | timestamp with time zone | ||
stop_fines_time | timestamp with time zone | ||
checkin_time | timestamp with time zone | ||
duration | interval | ||
fine_interval | interval | ||
recuring_fine | numeric(6,2) | ||
max_fine | numeric(6,2) | ||
phone_renewal | boolean | ||
desk_renewal | boolean | ||
opac_renewal | boolean | ||
duration_rule | text | ||
recuring_fine_rule | text | ||
max_fine_rule | text | ||
stop_fines | text | ||
start_date_day | date | ||
finish_date_day | date | ||
start_date_hour | timestamp with time zone | ||
finish_date_hour | timestamp with time zone | ||
call_number_label | text | ||
owning_lib | integer | ||
item_lang | text | ||
item_type | text | ||
item_form | text |
SELECT c.id , c.usr , c.xact_start , c.xact_finish , c.target_copy , c.circ_lib , c.circ_staff , c.checkin_staff , c.checkin_lib , c.renewal_remaining , c.due_date , c.stop_fines_time , c.checkin_time , c.duration , c.fine_interval , c.recuring_fine , c.max_fine , c.phone_renewal , c.desk_renewal , c.opac_renewal , c.duration_rule , c.recuring_fine_rule , c.max_fine_rule , c.stop_fines , (c.xact_start)::date AS start_date_day , (c.xact_finish)::date AS finish_date_day , date_trunc ('hour'::text , c.xact_start ) AS start_date_hour , date_trunc ('hour'::text , c.xact_finish ) AS finish_date_hour , cp.call_number_label , cp.owning_lib , cp.item_lang , cp.item_type , cp.item_form FROM ("action".circulation c JOIN stats.fleshed_copy cp ON ( (cp.id = c.target_copy) ) );
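The SELECT above flattens action.circulation with copy-level descriptors from stats.fleshed_copy and adds day- and hour-truncated transaction timestamps for reporting. A sketch of the kind of aggregate query such a view supports (the view's own name is not shown here, so fleshed_circulation below is a hypothetical placeholder):

-- hypothetical view name; substitute the real name from the schema
SELECT start_date_day, circ_lib, count(*) AS circs
  FROM fleshed_circulation
 GROUP BY 1, 2
 ORDER BY 1, 2;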
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigint | ||
circ_lib | integer | ||
creator | bigint | ||
call_number | bigint | ||
editor | bigint | ||
create_date | timestamp with time zone | ||
edit_date | timestamp with time zone | ||
copy_number | integer | ||
status | integer | ||
location | integer | ||
loan_duration | integer | ||
fine_level | integer | ||
age_protect | integer | ||
circulate | boolean | ||
deposit | boolean | ||
ref | boolean | ||
holdable | boolean | ||
deposit_amount | numeric(6,2) | ||
price | numeric(8,2) | ||
barcode | text | ||
circ_modifier | text | ||
circ_as_type | text | ||
dummy_title | text | ||
dummy_author | text | ||
alert_message | text | ||
opac_visible | boolean | ||
deleted | boolean | ||
create_date_day | date | ||
edit_date_day | date | ||
create_date_hour | timestamp with time zone | ||
edit_date_hour | timestamp with time zone | ||
call_number_label | text | ||
owning_lib | integer | ||
item_lang | text | ||
item_type | text | ||
item_form | text |
SELECT cp.id , cp.circ_lib , cp.creator , cp.call_number , cp.editor , cp.create_date , cp.edit_date , cp.copy_number , cp.status , cp."location" , cp.loan_duration , cp.fine_level , cp.age_protect , cp.circulate , cp.deposit , cp.ref , cp.holdable , cp.deposit_amount , cp.price , cp.barcode , cp.circ_modifier , cp.circ_as_type , cp.dummy_title , cp.dummy_author , cp.alert_message , cp.opac_visible , cp.deleted , (cp.create_date)::date AS create_date_day , (cp.edit_date)::date AS edit_date_day , date_trunc ('hour'::text , cp.create_date ) AS create_date_hour , date_trunc ('hour'::text , cp.edit_date ) AS edit_date_hour , cn.label AS call_number_label , cn.owning_lib , rd.item_lang , rd.item_type , rd.item_form FROM ( (asset."copy" cp JOIN asset.call_number cn ON ( (cp.call_number = cn.id) ) ) JOIN metabib.rec_descriptor rd ON ( (rd.record = cn.record) ) );
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
berick | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigint | NOT NULL | |
source | bigint | NOT NULL | |
field | integer | NOT NULL | |
value | text | NOT NULL | |
index_vector | tsvector | NOT NULL |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigint | NOT NULL | |
record | bigint | NOT NULL | |
tag | character(3) | NOT NULL | |
ind1 | "char" | ||
ind2 | "char" | ||
subfield | "char" | ||
value | text | NOT NULL | |
index_vector | tsvector | NOT NULL |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigint | NOT NULL | |
source | bigint | NOT NULL | |
field | integer | NOT NULL | |
value | text | NOT NULL | |
index_vector | tsvector | NOT NULL |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigint | NOT NULL | |
record | bigint | ||
item_type | text | ||
item_form | text | ||
bib_level | text | ||
control_type | text | ||
char_encoding | text | ||
enc_level | text | ||
audience | text | ||
lit_form | text | ||
type_mat | text | ||
cat_form | text | ||
pub_status | text | ||
item_lang | text | ||
vr_format | text |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigint | NOT NULL | |
creator | integer | NOT NULL DEFAULT 1 | |
editor | integer | NOT NULL DEFAULT 1 | |
source | integer | ||
quality | integer | ||
create_date | timestamp with time zone | NOT NULL DEFAULT now() | |
edit_date | timestamp with time zone | NOT NULL DEFAULT now() | |
active | boolean | NOT NULL DEFAULT true | |
deleted | boolean | NOT NULL DEFAULT false | |
fingerprint | text | ||
tcn_source | text | DEFAULT 'AUTOGEN'::text | |
tcn_value | text | ||
marc | text | NOT NULL | |
last_xact_id | text | NOT NULL |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigint | NOT NULL | |
source | bigint | NOT NULL | |
field | integer | NOT NULL | |
value | text | NOT NULL | |
index_vector | tsvector | NOT NULL |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigint | NOT NULL | |
source | bigint | NOT NULL | |
field | integer | NOT NULL | |
value | text | NOT NULL | |
index_vector | tsvector | NOT NULL |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigint | NOT NULL | |
source | bigint | NOT NULL | |
field | integer | NOT NULL | |
value | text | NOT NULL | |
index_vector | tsvector | NOT NULL |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigint | NOT NULL DEFAULT nextval('troup.foo'::regclass) | |
source | bigint | NOT NULL | |
field | integer | NOT NULL | |
value | text | NOT NULL | |
index_vector | tsvector |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
createdate | date | ||
barcode | text |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigint | NOT NULL DEFAULT nextval('troup.foo'::regclass) | |
record | bigint | NOT NULL | |
tag | character(3) | NOT NULL | |
ind1 | text | ||
ind2 | text | ||
subfield | text | ||
value | text | NOT NULL | |
index_vector | tsvector |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
catkey | integer | ||
itemkey | integer | ||
callnum | text | ||
cat1 | text | ||
cat2 | text | ||
createdate | date | ||
home_location | text | ||
barcode | text | ||
price | numeric(8,2) | ||
item_type | text | ||
owning_lib | text |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigint | NOT NULL DEFAULT nextval('troup.foo'::regclass) | |
source | bigint | NOT NULL | |
field | integer | NOT NULL | |
value | text | NOT NULL | |
index_vector | tsvector |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
pines | integer | ||
th | integer |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigint | NOT NULL DEFAULT nextval('troup.foo'::regclass) | |
record | bigint | ||
item_type | text | ||
item_form | text | ||
bib_level | text | ||
control_type | text | ||
char_encoding | text | ||
enc_level | text | ||
audience | text | ||
lit_form | text | ||
type_mat | text | ||
cat_form | text | ||
pub_status | text | ||
item_lang | text | ||
vr_format | text |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigint | NOT NULL | |
creator | integer | NOT NULL | |
editor | integer | NOT NULL | |
source | integer | ||
quality | integer | ||
create_date | timestamp with time zone | NOT NULL | |
edit_date | timestamp with time zone | NOT NULL | |
active | boolean | NOT NULL | |
deleted | boolean | NOT NULL | |
fingerprint | text | ||
tcn_source | text | NOT NULL | |
tcn_value | text | NOT NULL | |
marc | text | NOT NULL | |
last_xact_id | text | NOT NULL |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigint | NOT NULL DEFAULT nextval('troup.foo'::regclass) | |
source | bigint | NOT NULL | |
field | integer | NOT NULL | |
value | text | NOT NULL | |
index_vector | tsvector |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigint | NOT NULL DEFAULT nextval('troup.foo'::regclass) | |
source | bigint | NOT NULL | |
field | integer | NOT NULL | |
value | text | NOT NULL | |
index_vector | tsvector |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigint | NOT NULL DEFAULT nextval('troup.foo'::regclass) | |
source | bigint | NOT NULL | |
field | integer | NOT NULL | |
value | text | NOT NULL | |
index_vector | tsvector |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigint |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
field | integer | ||
source | integer | ||
value | text |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
ind1 | text | ||
ind2 | text | ||
record | integer | ||
subfield | text | ||
tag | text | ||
value | text |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
field | integer | ||
source | integer | ||
value | text |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
audience | text | ||
bib_level | text | ||
cat_form | text | ||
char_encoding | text | ||
control_type | text | ||
enc_level | text | ||
item_form | text | ||
item_lang | text | ||
item_type | text | ||
lit_form | text | ||
pub_status | text | ||
record | integer | ||
type_mat | text | ||
vr_format | text |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
active | boolean | ||
create_date | timestamp with time zone | ||
creator | integer | ||
deleted | boolean | ||
edit_date | timestamp with time zone | ||
editor | integer | ||
fingerprint | text | ||
id | integer | ||
last_xact_id | text | ||
marc | text | ||
quality | integer | ||
source | integer | ||
tcn_source | text | ||
tcn_value | text |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
field | integer | ||
source | integer | ||
value | text |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
field | integer | ||
source | integer | ||
value | text |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
field | integer | ||
source | integer | ||
value | text |
User | |||||||
---|---|---|---|---|---|---|---|
PUBLIC | |||||||
postgres |
F-Key | Name | Type | Description |
---|---|---|---|
id | serial | PRIMARY KEY | |
code | text | UNIQUE NOT NULL | |
description | text | ||
xpath | text | NOT NULL | |
remove | text | NOT NULL DEFAULT ''::text | |
ident | boolean | NOT NULL DEFAULT false |
Tables referencing this one via Foreign Key Constraints:
F-Key | Name | Type | Description |
---|---|---|---|
id | bigserial | PRIMARY KEY | |
vandelay.queued_authority_record_attr.id | matched_attr | integer | |
vandelay.queued_authority_record.id | queued_record | bigint | |
authority.record_entry.id | eg_record | bigint |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigint | PRIMARY KEY DEFAULT nextval('vandelay.queue_id_seq'::regclass) | |
owner | integer | UNIQUE#1 NOT NULL | |
name | text | UNIQUE#1 NOT NULL | |
complete | boolean | NOT NULL DEFAULT false | |
queue_type | text | UNIQUE#1 NOT NULL DEFAULT 'authority'::text |
Table vandelay.authority_queue Inherits queue.
Name | Constraint |
---|---|
authority_queue_queue_type_check | CHECK ((queue_type = 'authority'::text)) |
queue_queue_type_check | CHECK (((queue_type = 'bib'::text) OR (queue_type = 'authority'::text))) |
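Because vandelay.authority_queue carries both its own check and the one inherited from queue, a row whose queue_type is anything other than 'authority' is rejected at the database level. A minimal sketch (the owner id and queue name are placeholders):

-- Fails authority_queue_queue_type_check; the same statement with queue_type = 'authority'
-- (or with the column omitted, taking the default) succeeds.
INSERT INTO vandelay.authority_queue (owner, name, queue_type)
VALUES (1, 'example queue', 'bib');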
Tables referencing this one via Foreign Key Constraints:
F-Key | Name | Type | Description |
---|---|---|---|
id | serial | PRIMARY KEY | |
code | text | UNIQUE NOT NULL | |
description | text | ||
xpath | text | NOT NULL | |
remove | text | NOT NULL DEFAULT ''::text | |
ident | boolean | NOT NULL DEFAULT false |
Tables referencing this one via Foreign Key Constraints:
F-Key | Name | Type | Description |
---|---|---|---|
id | bigserial | PRIMARY KEY | |
field_type | text | NOT NULL | |
vandelay.queued_bib_record_attr.id | matched_attr | integer | |
vandelay.queued_bib_record.id | queued_record | bigint | |
biblio.record_entry.id | eg_record | bigint |
Name | Constraint |
---|---|
bib_match_field_type_check | CHECK ((((field_type = 'isbn'::text) OR (field_type = 'tcn_value'::text)) OR (field_type = 'id'::text))) |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigint | PRIMARY KEY DEFAULT nextval('vandelay.queue_id_seq'::regclass) | |
owner | integer | UNIQUE#1 NOT NULL | |
name | text | UNIQUE#1 NOT NULL | |
complete | boolean | NOT NULL DEFAULT false | |
queue_type | text | UNIQUE#1 NOT NULL DEFAULT 'bib'::text | |
vandelay.import_item_attr_definition.id | item_attr_def | bigint |
Table vandelay.bib_queue Inherits queue.
Name | Constraint |
---|---|
bib_queue_queue_type_check | CHECK ((queue_type = 'bib'::text)) |
queue_queue_type_check | CHECK (((queue_type = 'bib'::text) OR (queue_type = 'authority'::text))) |
Tables referencing this one via Foreign Key Constraints:
F-Key | Name | Type | Description |
---|---|---|---|
id | bigserial | PRIMARY KEY | |
actor.org_unit.id | owner | integer | UNIQUE#1 NOT NULL |
field | text | UNIQUE#1 NOT NULL |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigserial | PRIMARY KEY | |
vandelay.queued_bib_record.id | record | bigint | NOT NULL |
vandelay.import_item_attr_definition.id | definition | bigint | NOT NULL |
owning_lib | integer | ||
circ_lib | integer | ||
call_number | text | ||
copy_number | integer | ||
status | integer | ||
location | integer | ||
circulate | boolean | ||
deposit | boolean | ||
deposit_amount | numeric(8,2) | ||
ref | boolean | ||
holdable | boolean | ||
price | numeric(8,2) | ||
barcode | text | ||
circ_modifier | text | ||
circ_as_type | text | ||
alert_message | text | ||
pub_note | text | ||
priv_note | text | ||
opac_visible | boolean |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigserial | PRIMARY KEY | |
actor.org_unit.id | owner | integer | UNIQUE#1 NOT NULL |
name | text | UNIQUE#1 NOT NULL | |
tag | text | NOT NULL | |
keep | boolean | NOT NULL DEFAULT false | |
owning_lib | text | ||
circ_lib | text | ||
call_number | text | ||
copy_number | text | ||
status | text | ||
location | text | ||
circulate | text | ||
deposit | text | ||
deposit_amount | text | ||
ref | text | ||
holdable | text | ||
price | text | ||
barcode | text | ||
circ_modifier | text | ||
circ_as_type | text | ||
alert_message | text | ||
opac_visible | text | ||
pub_note_title | text | ||
pub_note | text | ||
priv_note_title | text | ||
priv_note | text |
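The text columns above hold the MARC subfield codes (single characters) or longer XPath fragments that the ingest code uses to pull each copy attribute out of the embedded holdings field named by tag. A hedged example row; the 852 tag and subfield letters are illustrative assumptions, not values taken from this database:

-- Hypothetical mapping for 852-style embedded holdings; adjust the codes to the incoming records.
INSERT INTO vandelay.import_item_attr_definition
    (owner, name, tag, owning_lib, circ_lib, call_number, barcode, price)
VALUES
    (1, 'Example 852 holdings mapping', '852', 'b', 'b', 'j', 'p', '9');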
Tables referencing this one via Foreign Key Constraints:
F-Key | Name | Type | Description |
---|---|---|---|
id | bigserial | PRIMARY KEY | |
actor.usr.id | owner | integer | UNIQUE#1 NOT NULL |
name | text | UNIQUE#1 NOT NULL | |
complete | boolean | NOT NULL DEFAULT false | |
queue_type | text | UNIQUE#1 NOT NULL DEFAULT 'bib'::text |
Name | Constraint |
---|---|
queue_queue_type_check | CHECK (((queue_type = 'bib'::text) OR (queue_type = 'authority'::text))) |
F-Key | Name | Type | Description |
---|---|---|---|
id | bigint | PRIMARY KEY DEFAULT nextval('vandelay.queued_record_id_seq'::regclass) | |
create_time | timestamp with time zone | NOT NULL DEFAULT now() | |
import_time | timestamp with time zone | ||
purpose | text | NOT NULL DEFAULT 'import'::text | |
marc | text | NOT NULL | |
vandelay.authority_queue.id | queue | integer | NOT NULL |
authority.record_entry.id | imported_as | integer |
Table vandelay.queued_authority_record Inherits queued_record.
Name | Constraint |
---|---|
queued_record_purpose_check | CHECK (((purpose = 'import'::text) OR (purpose = 'overlay'::text))) |
Tables referencing this one via Foreign Key Constraints:
F-Key | Name | Type | Description |
---|---|---|---|
id | bigserial | PRIMARY KEY | |
vandelay.queued_authority_record.id | record | bigint | NOT NULL |
vandelay.authority_attr_definition.id | field | integer | NOT NULL |
attr_value | text | NOT NULL |
Tables referencing this one via Foreign Key Constraints:
F-Key | Name | Type | Description |
---|---|---|---|
id | bigint | PRIMARY KEY DEFAULT nextval('vandelay.queued_record_id_seq'::regclass) | |
create_time | timestamp with time zone | NOT NULL DEFAULT now() | |
import_time | timestamp with time zone | ||
purpose | text | NOT NULL DEFAULT 'import'::text | |
marc | text | NOT NULL | |
vandelay.bib_queue.id | queue | integer | NOT NULL |
config.bib_source.id | bib_source | integer | |
biblio.record_entry.id | imported_as | integer |
Table vandelay.queued_bib_record Inherits queued_record.
Name | Constraint |
---|---|
queued_record_purpose_check | CHECK (((purpose = 'import'::text) OR (purpose = 'overlay'::text))) |
Tables referencing this one via Foreign Key Constraints:
F-Key | Name | Type | Description |
---|---|---|---|
id | bigserial | PRIMARY KEY | |
vandelay.queued_bib_record.id | record | bigint | NOT NULL |
vandelay.bib_attr_definition.id | field | integer | NOT NULL |
attr_value | text | NOT NULL |
Tables referencing this one via Foreign Key Constraints:
F-Key | Name | Type | Description |
---|---|---|---|
id | bigserial | PRIMARY KEY | |
create_time | timestamp with time zone | NOT NULL DEFAULT now() | |
import_time | timestamp with time zone | ||
purpose | text | NOT NULL DEFAULT 'import'::text | |
marc | text | NOT NULL |
Name | Constraint |
---|---|
queued_record_purpose_check | CHECK (((purpose = 'import'::text) OR (purpose = 'overlay'::text))) |
BEGIN
    DELETE FROM vandelay.queued_authority_record_attr WHERE record = OLD.id;
    IF TG_OP = 'UPDATE' THEN
        RETURN NEW;
    END IF;
    RETURN OLD;
END;
BEGIN
    DELETE FROM vandelay.queued_bib_record_attr WHERE record = OLD.id;
    DELETE FROM vandelay.import_item WHERE record = OLD.id;
    IF TG_OP = 'UPDATE' THEN
        RETURN NEW;
    END IF;
    RETURN OLD;
END;
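Both bodies above reference TG_OP, OLD and NEW, so they are trigger functions that clear out dependent attribute rows (and, for bib records, staged import items) when a queued record is rewritten or removed. The dump does not show the trigger definitions themselves; as a sketch, such a body would typically be wrapped in a function and attached like this (the function and trigger names are assumptions, not names from this database):

-- Hypothetical attachment for the second body above.
CREATE TRIGGER cleanup_bib_queued_record_trigger
    BEFORE UPDATE OR DELETE ON vandelay.queued_bib_record
    FOR EACH ROW EXECUTE PROCEDURE vandelay.cleanup_bib_queued_record();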
DECLARE
    value TEXT;
    atype TEXT;
    adef RECORD;
BEGIN
    FOR adef IN SELECT * FROM vandelay.authority_attr_definition LOOP
        SELECT extract_marc_field('vandelay.queued_authority_record', id, adef.xpath, adef.remove)
          INTO value
          FROM vandelay.queued_authority_record
         WHERE id = NEW.id;
        IF (value IS NOT NULL AND value <> '') THEN
            INSERT INTO vandelay.queued_authority_record_attr (record, field, attr_value)
            VALUES (NEW.id, adef.id, value);
        END IF;
    END LOOP;
    RETURN NULL;
END;
DECLARE
    queue_rec RECORD;
    item_rule RECORD;
    item_data vandelay.import_item%ROWTYPE;
BEGIN
    SELECT * INTO queue_rec FROM vandelay.bib_queue WHERE id = NEW.queue;
    FOR item_rule IN
        SELECT r.*
          FROM actor.org_unit_ancestors( queue_rec.owner ) o
          JOIN vandelay.import_item_attr_definition r ON ( r.owner = o.id )
    LOOP
        FOR item_data IN SELECT * FROM vandelay.ingest_items( NEW.id::BIGINT, item_rule.id::BIGINT ) LOOP
            INSERT INTO vandelay.import_item (
                record, definition, owning_lib, circ_lib, call_number, copy_number, status,
                location, circulate, deposit, deposit_amount, ref, holdable, price, barcode,
                circ_modifier, circ_as_type, alert_message, pub_note, priv_note, opac_visible
            ) VALUES (
                NEW.id, item_data.definition, item_data.owning_lib, item_data.circ_lib,
                item_data.call_number, item_data.copy_number, item_data.status, item_data.location,
                item_data.circulate, item_data.deposit, item_data.deposit_amount, item_data.ref,
                item_data.holdable, item_data.price, item_data.barcode, item_data.circ_modifier,
                item_data.circ_as_type, item_data.alert_message, item_data.pub_note,
                item_data.priv_note, item_data.opac_visible
            );
        END LOOP;
    END LOOP;
    RETURN NULL;
END;
DECLARE
    value TEXT;
    atype TEXT;
    adef RECORD;
BEGIN
    FOR adef IN SELECT * FROM vandelay.bib_attr_definition LOOP
        SELECT extract_marc_field('vandelay.queued_bib_record', id, adef.xpath, adef.remove)
          INTO value
          FROM vandelay.queued_bib_record
         WHERE id = NEW.id;
        IF (value IS NOT NULL AND value <> '') THEN
            INSERT INTO vandelay.queued_bib_record_attr (record, field, attr_value)
            VALUES (NEW.id, adef.id, value);
        END IF;
    END LOOP;
    RETURN NULL;
END;
DECLARE owning_lib TEXT; circ_lib TEXT; call_number TEXT; copy_number TEXT; status TEXT; location TEXT; circulate TEXT; deposit TEXT; deposit_amount TEXT; ref TEXT; holdable TEXT; price TEXT; barcode TEXT; circ_modifier TEXT; circ_as_type TEXT; alert_message TEXT; opac_visible TEXT; pub_note TEXT; priv_note TEXT; attr_def RECORD; tmp_attr_set RECORD; attr_set vandelay.import_item%ROWTYPE; xpath TEXT; BEGIN SELECT * INTO attr_def FROM vandelay.import_item_attr_definition WHERE id = attr_def_id; IF FOUND THEN attr_set.definition := attr_def.id; -- Build the combined XPath owning_lib := CASE WHEN attr_def.owning_lib IS NULL THEN 'null()' WHEN LENGTH( attr_def.owning_lib ) = 1 THEN '//*[@tag="' || attr_def.tag || '"]/*[@code="' || attr_def.owning_lib || '"]' ELSE '//*[@tag="' || attr_def.tag || '"]/*' || attr_def.owning_lib END; circ_lib := CASE WHEN attr_def.circ_lib IS NULL THEN 'null()' WHEN LENGTH( attr_def.circ_lib ) = 1 THEN '//*[@tag="' || attr_def.tag || '"]/*[@code="' || attr_def.circ_lib || '"]' ELSE '//*[@tag="' || attr_def.tag || '"]/*' || attr_def.circ_lib END; call_number := CASE WHEN attr_def.call_number IS NULL THEN 'null()' WHEN LENGTH( attr_def.call_number ) = 1 THEN '//*[@tag="' || attr_def.tag || '"]/*[@code="' || attr_def.call_number || '"]' ELSE '//*[@tag="' || attr_def.tag || '"]/*' || attr_def.call_number END; copy_number := CASE WHEN attr_def.copy_number IS NULL THEN 'null()' WHEN LENGTH( attr_def.copy_number ) = 1 THEN '//*[@tag="' || attr_def.tag || '"]/*[@code="' || attr_def.copy_number || '"]' ELSE '//*[@tag="' || attr_def.tag || '"]/*' || attr_def.copy_number END; status := CASE WHEN attr_def.status IS NULL THEN 'null()' WHEN LENGTH( attr_def.status ) = 1 THEN '//*[@tag="' || attr_def.tag || '"]/*[@code="' || attr_def.status || '"]' ELSE '//*[@tag="' || attr_def.tag || '"]/*' || attr_def.status END; location := CASE WHEN attr_def.location IS NULL THEN 'null()' WHEN LENGTH( attr_def.location ) = 1 THEN '//*[@tag="' || attr_def.tag || '"]/*[@code="' || attr_def.location || '"]' ELSE '//*[@tag="' || attr_def.tag || '"]/*' || attr_def.location END; circulate := CASE WHEN attr_def.circulate IS NULL THEN 'null()' WHEN LENGTH( attr_def.circulate ) = 1 THEN '//*[@tag="' || attr_def.tag || '"]/*[@code="' || attr_def.circulate || '"]' ELSE '//*[@tag="' || attr_def.tag || '"]/*' || attr_def.circulate END; deposit := CASE WHEN attr_def.deposit IS NULL THEN 'null()' WHEN LENGTH( attr_def.deposit ) = 1 THEN '//*[@tag="' || attr_def.tag || '"]/*[@code="' || attr_def.deposit || '"]' ELSE '//*[@tag="' || attr_def.tag || '"]/*' || attr_def.deposit END; deposit_amount := CASE WHEN attr_def.deposit_amount IS NULL THEN 'null()' WHEN LENGTH( attr_def.deposit_amount ) = 1 THEN '//*[@tag="' || attr_def.tag || '"]/*[@code="' || attr_def.deposit_amount || '"]' ELSE '//*[@tag="' || attr_def.tag || '"]/*' || attr_def.deposit_amount END; ref := CASE WHEN attr_def.ref IS NULL THEN 'null()' WHEN LENGTH( attr_def.ref ) = 1 THEN '//*[@tag="' || attr_def.tag || '"]/*[@code="' || attr_def.ref || '"]' ELSE '//*[@tag="' || attr_def.tag || '"]/*' || attr_def.ref END; holdable := CASE WHEN attr_def.holdable IS NULL THEN 'null()' WHEN LENGTH( attr_def.holdable ) = 1 THEN '//*[@tag="' || attr_def.tag || '"]/*[@code="' || attr_def.holdable || '"]' ELSE '//*[@tag="' || attr_def.tag || '"]/*' || attr_def.holdable END; price := CASE WHEN attr_def.price IS NULL THEN 'null()' WHEN LENGTH( attr_def.price ) = 1 THEN '//*[@tag="' || attr_def.tag || '"]/*[@code="' || attr_def.price || '"]' ELSE '//*[@tag="' || 
attr_def.tag || '"]/*' || attr_def.price END; barcode := CASE WHEN attr_def.barcode IS NULL THEN 'null()' WHEN LENGTH( attr_def.barcode ) = 1 THEN '//*[@tag="' || attr_def.tag || '"]/*[@code="' || attr_def.barcode || '"]' ELSE '//*[@tag="' || attr_def.tag || '"]/*' || attr_def.barcode END; circ_modifier := CASE WHEN attr_def.circ_modifier IS NULL THEN 'null()' WHEN LENGTH( attr_def.circ_modifier ) = 1 THEN '//*[@tag="' || attr_def.tag || '"]/*[@code="' || attr_def.circ_modifier || '"]' ELSE '//*[@tag="' || attr_def.tag || '"]/*' || attr_def.circ_modifier END; circ_as_type := CASE WHEN attr_def.circ_as_type IS NULL THEN 'null()' WHEN LENGTH( attr_def.circ_as_type ) = 1 THEN '//*[@tag="' || attr_def.tag || '"]/*[@code="' || attr_def.circ_as_type || '"]' ELSE '//*[@tag="' || attr_def.tag || '"]/*' || attr_def.circ_as_type END; alert_message := CASE WHEN attr_def.alert_message IS NULL THEN 'null()' WHEN LENGTH( attr_def.alert_message ) = 1 THEN '//*[@tag="' || attr_def.tag || '"]/*[@code="' || attr_def.alert_message || '"]' ELSE '//*[@tag="' || attr_def.tag || '"]/*' || attr_def.alert_message END; opac_visible := CASE WHEN attr_def.opac_visible IS NULL THEN 'null()' WHEN LENGTH( attr_def.opac_visible ) = 1 THEN '//*[@tag="' || attr_def.tag || '"]/*[@code="' || attr_def.opac_visible || '"]' ELSE '//*[@tag="' || attr_def.tag || '"]/*' || attr_def.opac_visible END; pub_note := CASE WHEN attr_def.pub_note IS NULL THEN 'null()' WHEN LENGTH( attr_def.pub_note ) = 1 THEN '//*[@tag="' || attr_def.tag || '"]/*[@code="' || attr_def.pub_note || '"]' ELSE '//*[@tag="' || attr_def.tag || '"]/*' || attr_def.pub_note END; priv_note := CASE WHEN attr_def.priv_note IS NULL THEN 'null()' WHEN LENGTH( attr_def.priv_note ) = 1 THEN '//*[@tag="' || attr_def.tag || '"]/*[@code="' || attr_def.priv_note || '"]' ELSE '//*[@tag="' || attr_def.tag || '"]/*' || attr_def.priv_note END; xpath := owning_lib || '|' || circ_lib || '|' || call_number || '|' || copy_number || '|' || status || '|' || location || '|' || circulate || '|' || deposit || '|' || deposit_amount || '|' || ref || '|' || holdable || '|' || price || '|' || barcode || '|' || circ_modifier || '|' || circ_as_type || '|' || alert_message || '|' || pub_note || '|' || priv_note || '|' || opac_visible; -- RAISE NOTICE 'XPath: %', xpath; FOR tmp_attr_set IN SELECT * FROM xpath_table( 'id', 'marc', 'vandelay.queued_bib_record', xpath, 'id = ' || import_id ) AS t( id BIGINT, ol TEXT, clib TEXT, cn TEXT, cnum TEXT, cs TEXT, cl TEXT, circ TEXT, dep TEXT, dep_amount TEXT, r TEXT, hold TEXT, pr TEXT, bc TEXT, circ_mod TEXT, circ_as TEXT, amessage TEXT, note TEXT, pnote TEXT, opac_vis TEXT ) LOOP tmp_attr_set.pr = REGEXP_REPLACE(tmp_attr_set.pr, E'[^0-9\\.]', '', 'g'); tmp_attr_set.dep_amount = REGEXP_REPLACE(tmp_attr_set.dep_amount, E'[^0-9\\.]', '', 'g'); tmp_attr_set.pr := NULLIF( tmp_attr_set.pr, '' ); tmp_attr_set.dep_amount := NULLIF( tmp_attr_set.dep_amount, '' ); SELECT id INTO attr_set.owning_lib FROM actor.org_unit WHERE shortname = UPPER(tmp_attr_set.ol); -- INT SELECT id INTO attr_set.circ_lib FROM actor.org_unit WHERE shortname = UPPER(tmp_attr_set.clib); -- INT SELECT id INTO attr_set.status FROM config.copy_status WHERE LOWER(name) = LOWER(tmp_attr_set.cs); -- INT SELECT id INTO attr_set.location FROM asset.copy_location WHERE LOWER(name) = LOWER(tmp_attr_set.cl) AND owning_lib = COALESCE(attr_set.owning_lib, attr_set.circ_lib); -- INT attr_set.circulate := LOWER( SUBSTRING( tmp_attr_set.circ, 1, 1)) IN ('t','y','1') OR LOWER(tmp_attr_set.circ) = 
'circulating'; -- BOOL attr_set.deposit := LOWER( SUBSTRING( tmp_attr_set.dep, 1, 1 ) ) IN ('t','y','1') OR LOWER(tmp_attr_set.dep) = 'deposit'; -- BOOL attr_set.holdable := LOWER( SUBSTRING( tmp_attr_set.hold, 1, 1 ) ) IN ('t','y','1') OR LOWER(tmp_attr_set.hold) = 'holdable'; -- BOOL attr_set.opac_visible := LOWER( SUBSTRING( tmp_attr_set.opac_vis, 1, 1 ) ) IN ('t','y','1') OR LOWER(tmp_attr_set.opac_vis) = 'visible'; -- BOOL attr_set.ref := LOWER( SUBSTRING( tmp_attr_set.r, 1, 1 ) ) IN ('t','y','1') OR LOWER(tmp_attr_set.r) = 'reference'; -- BOOL attr_set.copy_number := tmp_attr_set.cnum::INT; -- INT, attr_set.deposit_amount := tmp_attr_set.dep_amount::NUMERIC(6,2); -- NUMERIC(6,2), attr_set.price := tmp_attr_set.pr::NUMERIC(8,2); -- NUMERIC(8,2), attr_set.call_number := tmp_attr_set.cn; -- TEXT attr_set.barcode := tmp_attr_set.bc; -- TEXT, attr_set.circ_modifier := tmp_attr_set.circ_mod; -- TEXT, attr_set.circ_as_type := tmp_attr_set.circ_as; -- TEXT, attr_set.alert_message := tmp_attr_set.amessage; -- TEXT, attr_set.pub_note := tmp_attr_set.note; -- TEXT, attr_set.priv_note := tmp_attr_set.pnote; -- TEXT, attr_set.alert_message := tmp_attr_set.amessage; -- TEXT, RETURN NEXT attr_set; END LOOP; END IF; END;
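The long body above builds one XPath expression per copy attribute from the import_item_attr_definition row, joins them with '|', and evaluates the combined expression against the queued record's MARC in a single xpath_table() call. To make the pattern concrete, for an assumed definition with tag = '852', owning_lib = 'b' and call_number = 'j' (illustrative values only), the per-attribute expressions would come out as:

-- owning_lib  => '//*[@tag="852"]/*[@code="b"]'
-- call_number => '//*[@tag="852"]/*[@code="j"]'
-- A NULL attribute maps to 'null()', and a multi-character value is appended
-- after '/*' as a raw XPath fragment rather than a subfield code.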
DECLARE attr RECORD; eg_rec RECORD; BEGIN FOR attr IN SELECT a.* FROM vandelay.queued_bib_record_attr a JOIN vandelay.bib_attr_definition d ON (d.id = a.field) WHERE record = NEW.id AND d.ident IS TRUE LOOP -- All numbers? check for an id match IF (attr.attr_value ~ $r$^\d+$$r$) THEN FOR eg_rec IN SELECT * FROM biblio.record_entry WHERE id = attr.attr_value::BIGINT AND deleted IS FALSE LOOP INSERT INTO vandelay.bib_match (field_type, matched_attr, queued_record, eg_record) VALUES ('id', attr.id, NEW.id, eg_rec.id); END LOOP; END IF; -- Looks like an ISBN? check for an isbn match IF (attr.attr_value ~* $r$^[0-9x]+$$r$ AND character_length(attr.attr_value) IN (10,13)) THEN FOR eg_rec IN EXECUTE $$SELECT * FROM metabib.full_rec fr WHERE fr.value LIKE LOWER('$$ || attr.attr_value || $$%') AND fr.tag = '020' AND fr.subfield = 'a'$$ LOOP PERFORM id FROM biblio.record_entry WHERE id = eg_rec.record AND deleted IS FALSE; IF FOUND THEN INSERT INTO vandelay.bib_match (field_type, matched_attr, queued_record, eg_record) VALUES ('isbn', attr.id, NEW.id, eg_rec.record); END IF; END LOOP; -- subcheck for isbn-as-tcn FOR eg_rec IN SELECT * FROM biblio.record_entry WHERE tcn_value = 'i' || attr.attr_value AND deleted IS FALSE LOOP INSERT INTO vandelay.bib_match (field_type, matched_attr, queued_record, eg_record) VALUES ('tcn_value', attr.id, NEW.id, eg_rec.id); END LOOP; END IF; -- check for an OCLC tcn_value match IF (attr.attr_value ~ $r$^o\d+$$r$) THEN FOR eg_rec IN SELECT * FROM biblio.record_entry WHERE tcn_value = regexp_replace(attr.attr_value,'^o','ocm') AND deleted IS FALSE LOOP INSERT INTO vandelay.bib_match (field_type, matched_attr, queued_record, eg_record) VALUES ('tcn_value', attr.id, NEW.id, eg_rec.id); END LOOP; END IF; -- check for a direct tcn_value match FOR eg_rec IN SELECT * FROM biblio.record_entry WHERE tcn_value = attr.attr_value AND deleted IS FALSE LOOP INSERT INTO vandelay.bib_match (field_type, matched_attr, queued_record, eg_record) VALUES ('tcn_value', attr.id, NEW.id, eg_rec.id); END LOOP; END LOOP; RETURN NULL; END;
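The matching body above records every hit in vandelay.bib_match, tagged with the rule that produced it ('id', 'isbn' or 'tcn_value'). A small sketch of reviewing the matches for one queued record (the id 1 is a placeholder):

SELECT m.field_type, m.eg_record, a.attr_value
  FROM vandelay.bib_match m
  JOIN vandelay.queued_bib_record_attr a ON (a.id = m.matched_attr)
 WHERE m.queued_record = 1;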
# Remove every field matching the given spec from a MARCXML string and return the cleaned XML.
use MARC::Record;
use MARC::File::XML;

my $xml = shift;
my $field_spec = shift;

my $r = MARC::Record->new_from_xml( $xml );
$r->delete_field( $_ ) for ( $r->field( $field_spec ) );

# Serialize, then strip the XML declaration and collapse whitespace between tags.
$xml = $r->as_xml_record;
$xml =~ s/^<\?.+?\?>$//mo;
$xml =~ s/\n//sgo;
$xml =~ s/>\s+</></sgo;

return $xml;
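This final body parses MARCXML from its first argument, deletes every field matching the spec in its second argument, and returns the cleaned XML, so it is evidently a PL/Perl helper for the MARC-handling routines above. Its SQL-level name is not repeated beside the body in this dump; assuming it is exposed as a two-argument text function, a call would look like the following (the function name and the tag are placeholders):

SELECT strip_marc_field(q.marc, '901')
  FROM vandelay.queued_bib_record q
 WHERE q.id = 1;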
Generated by PostgreSQL Autodoc