E-Kernel Required Reading
Table of Contents

   E-Kernel Required Reading
      Abstract
      References
      Introduction
         EK subsystem components
            EK science plan component
            EK sequence component
            EK experimenter's notebook component
      Sequence EK Concepts
         Relational database functionality
         The sequence EK data model
            Tables
            Column attributes
         The EK query language
            Query syntax
            The SELECT clause
            The FROM clause
            The WHERE clause
            The ORDER BY Clause
            Case sensitivity
            White space
            Numeric values
            String values
            Time values
            Null values
            Reserved Words
            Query grammar
            Examples of syntactically valid queries
            Examples of syntactically invalid queries
            Examples of semantically invalid queries
      Sequence EK Files
         Segments
         The comment area
      Sequence EK tools
         INSPEKT
         COMMNT
         TOXFR and TOBIN
         SPACIT
      Reading sequence EKs
         Loading and unloading sequence EKs
         Query-and-fetch interface
            Issuing queries
            Fetching data from matching rows
            Query support utilities
         Record-oriented reader interface
            Opening files for record-oriented reading
            Column entry readers
         Informational functions
            Summarizing EK files
            Summarizing loaded tables
      Writing sequence EKs
         Introduction
         Opening a sequence EK for writing
            Beginning a new sequence EK
            Opening an existing sequence EK for writing
         Choosing a writing method
         Specifying segment attributes
            Table and column names
            Column declarations
            Consistency of schemas
         Using the record-oriented sequence EK writers
            Beginning a segment
            Adding records to a segment
         Using the fast writers
            Initiating a fast write
            Adding columns to the segment
            Completing a fast write
            Restrictions
         Updating an existing sequence EK
         Closing a sequence EK
      Appendix A --- Summary of E-kernel Functions
         Summary of mnemonics
         Summary of Calling Sequences
      Revisions
         March 23, 2016 NJB (JPL)
         February 24, 2010 EDW (JPL)
         April 1, 2009
         Feb. 06, 2002
         Jan. 15, 2002
Abstract
References
Introduction
The SPICE E-kernel (EK) subsystem is intended to support convenient recording, electronic transfer, archival, examination, and manipulation of event data by human users and software. Because the form, content, and quantity of event data may vary widely from one mission or application to the next, the EK subsystem emphasizes flexibility in accommodating event data and imposes few restrictions on the types of data that can be included within the subsystem.

The EK subsystem includes two separate software mechanisms for storing and handling event data.

One of these is a simple, stand-alone relational database system. This system includes event data files, SPICE software that manipulates those files, and documentation. The data files used by this system are called ``sequence E-kernels,'' ``sequence EK files'' or ``sequence EKs''; often the qualifier ``sequence'' is omitted. SPICE EK software enables sequence EKs to be examined, interactively or through an application programming interface (API), by means of a simple, SQL-like query language. The sequence EK file format and associated software are discussed in detail below.

The second mechanism is an e-mail and web-based software system that allows users to archive and share text notes and e-mail messages, the latter of which may optionally include MIME attachments. This is known as the ``experimenter's notebook'' or ``ENB'' system. The ENB system is documented in reference [A].

While the EK subsystem allows users to package an almost limitless variety of event data, the subsystem is designed to support in particular the three categories of data described in the sections that follow:
EK subsystem components
EK science plan component
Depending on mission requirements, either the ENB or Sequence EK system may be suitable for storing Science Plan data.

EK sequence component
Data stored in the sequence component of the EK subsystem might represent sequences of time-tagged ``events.'' Sequences of commands sent to a spacecraft are an example of such event data. Terse notes indicating occurrences of geometric events such as equator crossings or times of closest approach of a spacecraft relative to a target are another example of suitable data to include in this component. When event data consist of or include descriptions of state changes of systems of interest, a sequence EK containing these data could be used to find the states of the corresponding systems at a given time. The data comprising an event may correspond to a row in a table, and attributes of the event could be represented by entries in different columns within the row. A trivial, fictitious example of this sort of logical organization is shown in the table below:
              (column 1)    (column 2)      (column 3)
               TIME          MNEMONIC        EVENT
           +------------------------------------------------------+
   (row 1) | 3987:64:2   | CMD,PWRON    | Turn camera power on     |
           +------------------------------------------------------+
   (row 2) | 3989:01:0   | CMD,FILCLR   | Select CLEAR filter      |
           +------------------------------------------------------+
   (row 3) | 4000:01:5   | CMD,SHUTR    | Shutter photo            |
           +------------------------------------------------------+
   (row 4) | 4000:01:5   | COMMENT      | OPNAV photo #1 complete  |
           +------------------------------------------------------+
               .             .              .
               .             .              .
               .             .              .

With regard to such a table, we might wish to construct queries such as:
"Find the filter selection commands that occurred between spacecraft clock times 5000:23:0 and 5001:00:0" "Find the events containing the word ``camera'' and display them ordered by mnemonic" "Find the last event description starting with the string ``Turn'' prior to the UTC time 1-JAN-1997 12:14:02" "Find the times of all the ``Shutter photo'' events"We might want to display the rows satisfying these queries on our terminals, dump them to a file, or use them to drive a program. All of these functions are supported by the sequence EK subsystem. Note that the queries shown above are English paraphrases of the equivalent expressions in the EK query language. The functional capabilities described above are provided by files and software capable of accessing those files. The EK API contains ``writer'' software that enables users to create sequence EK files that contain data organized in a tabular fashion. The data then can be accessed using ``reader'' functions from the EK API, or interactively using the EK browsing program INSPEKT. Sequence EK files are binary files and therefore cannot be read directly using text editing programs. However, the program INSPEKT can dump any selected portion of any sequence EK as a text file, using user-specified formats, so in a sense sequence EK files are more flexible than flat text files as a repository for event information. By using a database-style internal data representation rather than a format-oriented one, they avoid the constraints on their contents that would be imposed by adoption of fixed file formats. Sequence EK files may be ported between computer systems having different internal data formats; the SPICE Toolkit utilities TOBIN and TOXFR support this function. Sequence EK files may also have labels and free-form text inserted into them to assist in clear and complete identification of the files; the SPICE Toolkit utility COMMNT may be used for this purpose. A detailed discussion of the functional characteristics of the sequence component is given below in the chapter titled ``Sequence EK Concepts.'' EK experimenter's notebook component
EK experimenter's notebook component

More generally, the experimenter's notebook component may include any EK data that don't fit into the other two components. For example, if the available, human-readable command sequence data are extremely small in volume, it may be more practical to include them in the experimenter's notebook than to insert them into a binary sequence EK.

Sequence EK Concepts

Relational database functionality
The sequence EK subsystem provides an application programming interface (API) for creating, modifying, reading, summarizing, and annotating sequence EK files. In particular, the API supports reading using a query-and-fetch mechanism: an application passes a request for data called a ``query'' to the EK subsystem, then retrieves the data using a suite of API routines. Queries are expressed in a simple language that closely resembles the standard relational database query language SQL. The sequence EK query capability is also provided by the SPICE interactive browsing utility INSPEKT. However, INSPEKT does not support any sequence EK writing functionality.

The functionality of sequence EK software is almost completely independent of its intended application as a system for handling event data. One could think of the software system not as an ``event kernel'' but simply as a ``database kernel,'' and in fact the term ``database kernel'' and the acronym ``DBK'' have been used in some CSPICE documentation. However, since the ``EK'' prefix has already been widely used in naming functions belonging to the EK API, we'll stick with the name ``EK'' in our discussion.

The sequence EK data model
Tables
The sequence EK data model diverges slightly from the relational model in that columns are allowed to have arrays as entries. We call such columns ``array-valued'' or ``vector-valued.'' When a column entry is an array, we call the components of the array ``column entry elements,'' or ``elements'' if the context is clear.

Column attributes
The EK query language

Query syntax
The selected data will be retrievable using the EK fetch routines ekgc_c, ekgd_c, and ekgi_c. The query consists of four clauses, the third and fourth of which are optional. The general form of a query is
   SELECT <column list>
   FROM <table list>
   [WHERE <constraint list>]
   [ORDER BY <ORDER BY column list>]

where brackets indicate optional items. The elements of the query shown above are called, respectively, the ``SELECT clause,'' the ``FROM clause,'' the ``WHERE clause,'' and the ``ORDER BY clause.''

The result of a query may be thought of as a new table, whose columns are those specified in the SELECT clause and whose rows are those satisfying the constraints of the WHERE clause, ordered according to the ORDER BY clause.

The SELECT clause
The form of a SELECT clause is
   SELECT <column name> [ , <column name>...]

In queries having multiple tables in the FROM clause (see below), column names are ambiguous if they occur in more than one table in the FROM clause. Such column names must be qualified with table identifiers. These identifiers may be the names of the tables to which the columns belong, or table ``aliases,'' names (usually short ones) associated with tables in the FROM clause. Table aliases have duration limited to the execution of the query to which they belong. The form of a qualified column name is

   <table name>.<column name>

or

   <table alias>.<column name>

Columns named in the SELECT clause must be present in some loaded EK for the query to be semantically valid.

The FROM clause
In queries involving a single table, the form of the FROM clause is

   FROM <table name>

In queries involving multiple tables, the form of the FROM clause becomes

   FROM <table name> [<table alias>] [ , <table name> [<table alias>] ... ]

The aliases associated with the table names must be distinct and must not be the actual names of loaded EK tables.

Queries involving multiple tables are called ``joins.'' The meaning of a FROM clause containing multiple tables is that the output is to be a subset of the rows of the Cartesian product of the listed tables. Normally, WHERE clause constraints are supplied to reduce the selected rows to a set of interest.

The most common example of a join is a query with two tables listed in the FROM clause, and a WHERE clause constraint enforcing equality of members of a column in the first table with members of a column in the second table. Such a query is called an ``equi-join.'' A join in which columns of different tables are related by an inequality is called a ``non-equi-join.'' Any type of join other than an equi-join may be very slow to evaluate, due to the large number of elements that may be contained in the Cartesian product of the listed tables.

The WHERE clause
The form of a WHERE clause is

   WHERE <constraint expression>

where each <constraint expression> consists of one or more simple relational expressions of the form

   <column name> <operator> <RHS symbol>

Here

   <RHS symbol>

is a column name, a literal value, or the special symbol

   NULL

and

   <operator>

is any of

   EQ, GE, GT, LE, LIKE, LT, NE, NOT LIKE, <, <=, =, >, >=, !=, <>

For comparison with null values, the special expressions
   <column name> IS NULL
   <column name> IS NOT NULL

are allowed.

The LIKE operator allows comparison of a string value against a template. The template syntax is that allowed by the CSPICE routine MATCHI. Templates may include literal characters, the wild string marker '*', and the wild character marker '%'. Since matching is performed by MATCHI, which ignores case, case is not significant in templates. Templates are bracketed by quote characters, just as are literal strings.

The query language also supports the BETWEEN and NOT BETWEEN constructs
   <column> BETWEEN <symbol 1> AND <symbol 2>
   <column> NOT BETWEEN <symbol 1> AND <symbol 2>

The tokens

   <symbol 1>
   <symbol 2>

may be literal values or column names. The BETWEEN operator considers values that match the bounds to satisfy the condition: the BETWEEN operator tests for inclusion in the closed interval defined by the bounds. The order of the bounds doesn't matter: the bounds are considered to define the interval from the smaller bound to the larger.

In the WHERE clause, simple relational expressions may be combined using the logical operators AND, OR, and NOT, as in the Fortran programming language. Parentheses may be used to enforce a desired order of evaluation of logical expressions.

The expression syntax is NOT symmetric: literal values must not appear on the left hand side of the operators that apply to them.

Data types of the columns or constants used on the right-hand sides of operators must match the data types of the corresponding columns on the left-hand sides, except that comparison of integer and double precision quantities is permitted.

The columns named in a WHERE clause must belong to the tables listed in the FROM clause. If the query is a join, qualifying table names or aliases are required wherever their omission would result in ambiguity. Columns referenced in a WHERE clause must be scalar-valued.

The ORDER BY Clause
For each ORDER BY column, the keywords ASC or DESC may be supplied to indicate whether the items in that column are to be listed in ascending or descending order. Ascending order is the default. The direction of ordering, ascending or descending, is referred to as the ``order sense.'' The ORDER BY clause, if present, must appear last in the query. The form of the ORDER BY clause is
   ORDER BY <column name> [<order sense>] [ ,<column name> [<order sense>]...]

Rows satisfying the query constraints will be returned so that the entries of the first column specified in the ORDER BY clause will appear in the order specified by the order sense keyword, which is assumed to be ASC if absent. When entries in the first through Nth ORDER BY columns are equal, the entries in the (N+1)st ORDER BY column determine the order of the rows, and so on.

As in the WHERE clause, ORDER BY column names must be qualified by table names or table aliases where they would otherwise be ambiguous. In order for a column to be eligible to be referenced in an ORDER BY clause, the column must be scalar-valued.

Case sensitivity
"And"and
"and"are not considered to be equal. On the other hand, the expression
ANIMAL LIKE "*A*"would be considered true when ANIMAL takes the value
"cat"Case is not significant in time values. White space
Within string constants, leading or embedded white space is significant. Elsewhere, any string of one or more consecutive blanks is interpreted as a single blank. White space is required to separate alphanumeric tokens, such as
   SELECT

and

   LT

White space may be omitted between special characters and alphanumeric tokens, such as

   )

and

   WHERE

Numeric values
The equality operator EQ indicates a test for exact equality. Care must be taken in testing double precision column entries for equality with a specified value; round-off errors may cause such tests to fail unexpectedly.

String values
String values in queries, such as

   SSI_EVENT

in the query

   * where event_type eq "SSI_EVENT"

are always bracketed by quotation marks. Either single or double quotes may be used, as long as the string is started and terminated with the same character. Within character string values, quote characters must be doubled in order to be recognized.

Time values
When SCLK strings are used, they must be prefixed by a substring indicating the name of the clock, followed by the token SCLK. For example:
   MGS SCLK 2400001.125

Time values specified in queries are always converted to barycentric dynamical time (TDB) before comparisons with column entries are performed. Therefore, programs using the EK subsystem should load a leapseconds kernel and any appropriate SCLK kernels before attempting to issue queries involving time values to the EK subsystem. See [222] and [225] for further information on time conversions.

As with double precision values, time values cannot generally be reliably tested for exact equality with column entries. It's usually better to test whether a time column entry is in a desired range than to test whether it's equal to a specific value.

Null values
The special symbol

   NULL

may be used on the right-hand side of a relational expression in which the column named on the left-hand side allows null values. The operators that may be used with NULL are EQ, NE, =, !=, and <>; in addition, the forms

   <column name> IS NULL
   <column name> IS NOT NULL

are accepted. The case of the letters in the symbol ``NULL'' is not significant. The symbol is written without quotes.

Reserved Words
   ALL       AND       ASC       AVG       BETWEEN   BY
   COUNT     DESC      DISTINCT  EQ        FROM      GE
   GROUP     GT        HAVING    IS        LE        LIKE
   LT        MAX       MIN       NE        NOT       NULL
   OR        ORDER     SELECT    SUM       WHERE

Some of the above are not currently used but are reserved for upward compatibility. Reserved words must be separated from other words in queries by white space.

Query grammar
   <QUERY>                 =>  <SELECT clause> <FROM clause>
                               <WHERE clause> <ORDER BY clause>

   <SELECT clause>         =>  SELECT <select list>

   <select list>           =>  <column entry> | <select list>, <column entry>

   <column entry>          =>  <table name>.<column name> | <column name>

   <FROM clause>           =>  FROM <table name list>

   <table name list>       =>  <table entry> | <table name list>, <table entry>

   <table entry>           =>  <table name> | <table name> <table alias>

   <WHERE clause>          =>  WHERE <relational expression> | <NIL>

   <relational expression> =>  <simple expression>
                            |  <NULL value expression>
                            |  NOT <relational expression>
                            |  ( <relational expression> )
                            |  <relational expression> AND <relational expression>
                            |  <relational expression> OR  <relational expression>

   <simple expression>     =>  <LHS> <operator> <RHS>
                            |  <LHS> BETWEEN <RHS> AND <RHS>
                            |  <LHS> NOT BETWEEN <RHS> AND <RHS>

   <NULL value expression> =>  <LHS> <NULL operator> NULL

   <LHS>                   =>  <name>

   <RHS>                   =>  <name> | <value>

   <name>                  =>  <identifier> . <identifier> | <identifier>

   <operator>              =>  EQ | GE | GT | LE | LT | NE | LIKE | NOT LIKE |
                               =  | >= | >  | <= | <  | != | <>

   <NULL operator>         =>  IS | IS NOT | EQ | NE | = | != | <>

   <value>                 =>  <character value> | <d.p. value> | <integer value>

   <ORDER BY clause>       =>  ORDER BY <order-by list> | <NIL>

   <order-by list>         =>  <order-by column entry>
                            |  <order-by list>, <order-by column entry>

   <order-by column entry> =>  <column entry> <order> | <column entry>

   <order>                 =>  ASC | DESC

Examples of syntactically valid queries
The column names referenced in the queries are used as examples and are not meant to suggest that columns having those names will be present in any particular EKs.
   SELECT COL1 FROM TAB1

   select col1 from tab1 where col1 gt 5

   SELECT COL2 FROM TAB1 WHERE COL2 > 5.7 ORDER BY COL2

   SELECT COL2 FROM TAB1 WHERE COL1 != 5

   SELECT COL2 FROM TAB1 WHERE COL1 GE COL2

   SELECT COL1, COL2, COL3 FROM TAB1 ORDER BY COL1

   SELECT COL3 FROM TAB1 WHERE COL5 EQ "ABC"

   SELECT COL3 FROM TAB1 WHERE COL5 = "ABC"

   SELECT COL3 FROM TAB1 WHERE COL5 LIKE 'A*'

   SELECT COL3 FROM TAB1 WHERE COL5 LIKE 'A%%'

   SELECT COL4 FROM TAB1 WHERE COL4 = '1995 JAN 1 12:38:09.7'

   SELECT COL4 FROM TAB1 WHERE COL4 = "1995 JAN 1 12:38:09.7"

   SELECT COL4 FROM TAB1 WHERE COL4 NE 'GLL SCLK 02724646:67:7:2'

   SELECT COL1 FROM TAB1 WHERE COL1 != NULL

   SELECT COL1 FROM TAB1 WHERE COL1 IS NULL

   SELECT COL1 FROM TAB1 WHERE COL1 IS NOT NULL

   SELECT COL1, COL2, COL3 FROM TAB1
   WHERE (COL1 BETWEEN 4 AND 6) AND (COL3 NOT LIKE "A%%")
   ORDER BY COL1, COL3

   SELECT COL4 FROM TAB1
   WHERE COL4 BETWEEN "1995 JAN 1 12:38" AND "October 23, 1995"

   SELECT COL1, COL2 FROM TAB1
   WHERE NOT ( (  ( COL1 <  COL2 ) AND ( COL1 >  5  )  ) OR
               (  ( COL1 >= COL2 ) AND ( COL2 <= 10 )  )    )

   SELECT T1.COL1, T1.COL2, T2.COL2, T2.COL3
   FROM TABLE1 T1, TABLE2 T2
   WHERE T1.COL1 = T2.COL1
   AND   T1.COL2 > 5
   ORDER BY T1.COL1, T2.COL2

Examples of syntactically invalid queries
   SELECT TIME WHERE TIME LT 1991 JAN 1
      {FROM clause is absent}

   select time from table1 where time lt 1991 jan 1
      {time string is not quoted}

   select time from table1 where time .lt. '1991 jan 1'
      {operator should be lt}

   select cmd from table1 where "cmd,6tmchg" != cmd
      {value is on left side of operator}

   select event_type from table1 where event_type eq ""
      {quoted string is empty --- use " " to indicate a blank string}

   select event_type from table1 where event_type = "COMMENT" order TIME
      {ORDER BY phrase is lacking BY keyword}

   select COL1 from table where COL1 eq MOC_EVENT
      {literal string on right-hand side of operator is not quoted}

Examples of semantically invalid queries
Suppose that two tables having the following schemas are loaded:

   TABLE1:
   ==========

      Column name         Data type        Size       Indexed?
      -----------         ---------        ----       --------
      EVENT_TYPE          CHARACTER*32     1          YES
      EVENT_PARAMETERS    CHARACTER*(*)    1          NO
      COMMENT             CHARACTER*80     VARIABLE   NO

   TABLE2:
   ==========

      Column name         Data type        Size       Indexed?
      -----------         ---------        ----       --------
      EVENT_TYPE          CHARACTER*32     1          YES
      EVENT_PARAMETERS    CHARACTER*80     1          NO
      COMMENT             CHARACTER*80     VARIABLE   NO
      COMMAND             CHARACTER*80     1          YES

Then the following queries are semantically invalid:
   SELECT EVENT_PARAMETERS FROM TABLE1 WHERE EVENT_DURATION = 7.0
      {No column called EVENT_DURATION is present in a loaded EK}

   SELECT COMMENT FROM TABLE2 WHERE COMMENT EQ "N/A"
      {The COMMENT column does not have size 1 and therefore cannot
       be referenced in a query}

Sequence EK Files
Segments
Each segment contains data belonging to one EK table. A sequence EK file may contain multiple segments for one or more distinct tables. Segments for a table may be distributed across multiple EK files.

Spreading data for a table across multiple segments has no effect on query interpretation. However, performance degradation may result if a sequence EK file contains a very large number of segments.

The comment area
The contents of the comment area must be printable text. The comment area is line-oriented; text inserted into the comment area can be retrieved with the original line breaks preserved. It is recommended that text to be inserted into the comment area have no lines exceeding 80 characters in length.

See the section ``Sequence EK tools'' for information on the SPICE Toolkit utilities that access the comment area.

Sequence EK tools
INSPEKT
INSPEKT has an extensive, hyper-text style on-line help facility, and also has a detailed user's guide available as a paper document [284].

COMMNT
TOXFR and TOBIN
SPACIT
Reading sequence EKs

Loading and unloading sequence EKs
Before a sequence EK can be queried, it must be loaded via the routine furnsh_c:

   furnsh_c ( <fname> );                           {Load SPICE kernel}

A limited number of EK files may be loaded at any one time. The current maximum limit is 20 files.

The inverse routine corresponding to furnsh_c is unload_c. unload_c removes a loaded kernel from the CSPICE system: the file is closed, and data structures referring to the file are updated to reflect the absence of the file. See [218] for further information on furnsh_c and unload_c.

Before queries may be processed, any supplementary kernels required for time conversion should be loaded. To enable use of UTC times in queries, a leapseconds kernel is required. To enable use of SCLK values in queries, an SCLK kernel for the appropriate spacecraft clock must be loaded.

All of the EK files loaded at any one time must have consistent table attributes: any two tables having the same name must have the same attributes, even if the tables belong to different files.

Unlike the SPK subsystem, the EK subsystem supports no prioritization scheme for loaded kernels: no kernel supersedes another. Rather, all rows of all loaded EKs are considered during query processing.
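A minimal sketch of a typical loading sequence for a program that will issue queries involving UTC or SCLK times is shown below. All file names are hypothetical; substitute the leapseconds kernel, SCLK kernel, and sequence EK appropriate to your application.

   #include "SpiceUsr.h"

   int main( void )
   {
      /*
      Load a leapseconds kernel and an SCLK kernel so that UTC and
      SCLK strings may appear in queries, then load the sequence EK
      itself.  All file names here are hypothetical.
      */
      furnsh_c ( "naif0012.tls" );
      furnsh_c ( "sclk.tsc"     );
      furnsh_c ( "events.bes"   );

      /*
      ... issue queries and fetch matching data here ...
      */

      /*
      Unload the sequence EK when it is no longer needed.
      */
      unload_c ( "events.bes" );

      return ( 0 );
   }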
Query-and-fetch interface

Issuing queries
The rows matching a query are found by calling ekfind_c:

   ekfind_c ( <query>, <lenout>, &nmrows, &error, errmsg );
                                                   {Find rows that satisfy query}

With the arguments:
Fetching data from matching rows
The EK fetch functions return one column entry element at a time, so it is not necessary to know in advance the size of the column entry. To fetch data from a character column, use
   ekgc_c ( <selidx>, <row>, <elment>, <lenout>,
            cdata, &null, &found );                {Get character data}

With the arguments:
To fetch data from a double precision column, use ekgd_c:

   ekgd_c ( <selidx>, <row>, <elment>, &ddata, &null, &found );
                                                   {Get d.p. data}

The arguments have the same meanings as the corresponding arguments of ekgc_c, except that `ddata' represents a double precision number.

To fetch integer column entry elements, use ekgi_c:
   ekgi_c ( <selidx>, <row>, <elment>, &idata, &null, &found );
                                                   {Get integer data}

The arguments have the same meanings as the corresponding arguments of ekgc_c, except that `idata' represents an integer.

Query support utilities
The number of elements in the column entry at a specified SELECT column and row of the query result may be obtained by calling eknelt_c:

   nelts = eknelt_c ( <selidx>, <row> );           {Get number of elements}

With the arguments:
Some more complex EK applications may require the ability to fetch results from an arbitrary query. In order to do this, an application must be able to determine at run time the names and data types of the SELECT columns. If an application needs to unambiguously identify the columns, the names of the tables to which the columns belong are needed as well. Applications need not analyze a query to determine the fully qualified names and attributes of the SELECT columns---the EK subsystem provides the function ekpsel_c to do this job. Note: in the discussion below, there are references to substrings in the SELECT clause as ``expressions.'' Currently, the only supported expressions in the SELECT clause are column names. However, ekpsel_c has been designed to support possible query language enhancements, such as specification of general expressions in the SELECT clause. Calls to ekpsel_c are made as shown:
   ekpsel_c ( <query>,  <msglen>, <tablen>, <collen>,
              &n,       xbegs,    xends,    xtypes,
              xclass,   tabs,     cols,     &error,
              errmsg );                            {Parse SELECT clause}

With the arguments:
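The sketch below shows a complete query-and-fetch loop. The kernel, table, and column names are hypothetical; the first SELECT column is assumed to be a scalar character column and the second a scalar TIME column, so ekgc_c and ekgd_c are used to fetch them. Row, SELECT-column, and element indices are written here as zero-based, consistent with the zero-based record numbering described later in this document. Error handling is reduced to a check of the flag returned by ekfind_c.

   #include <stdio.h>
   #include "SpiceUsr.h"

   #define ERRLEN   1841
   #define STRLEN   81

   int main( void )
   {
      SpiceBoolean   error;
      SpiceBoolean   found;
      SpiceBoolean   null;
      SpiceChar      cdata  [STRLEN];
      SpiceChar      errmsg [ERRLEN];
      SpiceDouble    ddata;
      SpiceInt       nmrows;
      SpiceInt       row;

      /*
      Load the kernels needed for the query (hypothetical names):
      a leapseconds kernel for time conversion and the sequence EK.
      */
      furnsh_c ( "naif0012.tls" );
      furnsh_c ( "events.bes"   );

      /*
      Issue a query against a hypothetical table SEQUENCE_EVENTS.
      */
      ekfind_c ( "SELECT EVENT_TYPE, EVENT_TIME FROM SEQUENCE_EVENTS "
                 "WHERE EVENT_TIME BETWEEN '1995 JAN 1' AND '1995 JAN 2' "
                 "ORDER BY EVENT_TIME",
                 ERRLEN, &nmrows, &error, errmsg );

      if ( error )
      {
         printf ( "Query failed: %s\n", errmsg );
         return ( 1 );
      }

      for ( row = 0;  row < nmrows;  row++ )
      {
         /*
         Fetch element 0 of the entries in SELECT columns 0 and 1.
         */
         ekgc_c ( 0, row, 0, STRLEN, cdata,  &null, &found );
         ekgd_c ( 1, row, 0,         &ddata, &null, &found );

         /*
         `ddata' is the event time as TDB seconds past J2000.
         */
         printf ( "%s   %f\n", cdata, ddata );
      }

      return ( 0 );
   }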
Record-oriented reader interface
Opening files for record-oriented reading
A sequence EK is opened for record-oriented read access by calling ekopr_c:

   ekopr_c ( <fname>, &handle );                   {EK, open for read}

If the EK to be read is to be queried, then the EK should be loaded using furnsh_c:
   furnsh_c ( <fname> );                           {Load SPICE kernel}

The file's handle may be obtained using a call to the CSPICE function kinfo_c.

Column entry readers
To read a character column entry, call ekrcec_c:

   ekrcec_c ( <handle>, <segno>, <recno>, <column>,
              <lenout>, &nvals, cvals, &isnull );  {Read character column entry}

With the arguments:
To read a double precision column entry, call ekrced_c:

   ekrced_c ( <handle>, <segno>, <recno>, <column>,
              &nvals, dvals, &isnull );            {Read d.p. column entry}

The arguments have the same meanings as the corresponding arguments of ekrcec_c, except that `dvals' represents a double precision array.

To read an integer column entry, call ekrcei_c:
   ekrcei_c ( <handle>, <segno>, <recno>, <column>,
              &nvals, ivals, &isnull );            {Read integer column entry}

The arguments have the same meanings as the corresponding arguments of ekrcec_c, except that `ivals' represents an integer array.
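A minimal record-oriented reading sketch follows. The file name, column name, and index values are hypothetical; segment and record indices are zero-based, consistent with the record numbering described later for ekinsr_c. The buffer size is an assumption and must be at least as large as the entry being read.

   #include <stdio.h>
   #include "SpiceUsr.h"

   #define MAXVAL   100

   int main( void )
   {
      SpiceBoolean   isnull;
      SpiceDouble    dvals [MAXVAL];
      SpiceInt       handle;
      SpiceInt       i;
      SpiceInt       nvals;

      /*
      Open a hypothetical sequence EK for record-oriented read access.
      */
      ekopr_c ( "events.bes", &handle );

      /*
      Read the entry of the hypothetical d.p. column DURATION in
      record 0 of segment 0.
      */
      ekrced_c ( handle, 0, 0, "DURATION", &nvals, dvals, &isnull );

      if ( !isnull )
      {
         for ( i = 0;  i < nvals;  i++ )
         {
            printf ( "DURATION element %d:  %f\n", (int)i, dvals[i] );
         }
      }

      ekcls_c ( handle );

      return ( 0 );
   }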
Informational functions

Summarizing EK files

The number of segments in an EK is found by calling eknseg_c:
   n = eknseg_c ( <handle> );                      {Return number of segments}

The summary of the segment at ordinal position `segno' is returned by ekssum_c:
   ekssum_c ( <handle>, <segno>, &segsum );        {Summarize segment}

With the arguments:
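The loop below prints a summary of every segment in a file. The file name is hypothetical, and the SpiceEKSegSum member names used here (tabnam, nrows, ncols, cnames) are those declared in SpiceEK.h; consult that header in your Toolkit version for the exact layout.

   #include <stdio.h>
   #include "SpiceUsr.h"

   int main( void )
   {
      SpiceEKSegSum  segsum;
      SpiceInt       handle;
      SpiceInt       i;
      SpiceInt       nseg;
      SpiceInt       segno;

      /*
      Open a hypothetical sequence EK for read access.
      */
      ekopr_c ( "events.bes", &handle );

      nseg = eknseg_c ( handle );

      for ( segno = 0;  segno < nseg;  segno++ )
      {
         ekssum_c ( handle, segno, &segsum );

         printf ( "Segment %d:  table %s,  %d rows,  %d columns\n",
                  (int)segno,        segsum.tabnam,
                  (int)segsum.nrows, (int)segsum.ncols );

         for ( i = 0;  i < segsum.ncols;  i++ )
         {
            printf ( "   column:  %s\n", segsum.cnames[i] );
         }
      }

      ekcls_c ( handle );

      return ( 0 );
   }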
Summarizing loaded tables
The number of loaded tables may be found by calling ekntab_c:
   ekntab_c ( &n );                                {Return number of loaded tables}

The name of the nth loaded table may be found by calling ektnam_c:

   ektnam_c ( <n>, <lenout>, table );              {Return table name}

The number of columns in a specified, loaded table may be found by calling ekccnt_c:

   ekccnt_c ( <table>, &ccount );                  {Return column count}

The name and attributes of the column having a specified ordinal position within a specified, loaded table may be found by calling ekcii_c:
   ekcii_c ( <table>, <cindex>, <lenout>, column, &attdsc );
                                                   {Return attributes of column
                                                    specified by index}

With the arguments:
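Taken together, these routines let an application discover the schemas of all loaded tables at run time, as sketched below. The file name is hypothetical; table and column indices are assumed to be zero-based, and the SpiceEKAttDsc member `size' referenced in the comment is the name declared in SpiceEK.h.

   #include <stdio.h>
   #include "SpiceUsr.h"

   #define NAMLEN   65

   int main( void )
   {
      SpiceChar       column [NAMLEN];
      SpiceChar       table  [NAMLEN];
      SpiceEKAttDsc   attdsc;
      SpiceInt        ccount;
      SpiceInt        cindex;
      SpiceInt        n;
      SpiceInt        tab;

      /*
      Load a hypothetical sequence EK so that its tables are visible
      to the informational routines.
      */
      furnsh_c ( "events.bes" );

      ekntab_c ( &n );

      for ( tab = 0;  tab < n;  tab++ )
      {
         ektnam_c ( tab, NAMLEN, table );
         ekccnt_c ( table, &ccount );

         printf ( "Table %s has %d columns:\n", table, (int)ccount );

         for ( cindex = 0;  cindex < ccount;  cindex++ )
         {
            ekcii_c ( table, cindex, NAMLEN, column, &attdsc );

            /*
            attdsc.size is the declared entry size of the column.
            */
            printf ( "   %s   size = %d\n", column, (int)attdsc.size );
         }
      }

      return ( 0 );
   }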
Writing sequence EKs

Introduction
The basic sequence of operations by which a new sequence EK is created is:

   1. Open a new file using ekopn_c.

   2. Create one or more segments, using either the record-oriented
      writers or the fast writers.

   3. Close the file using ekcls_c.
An existing, closed sequence EK may be opened for write access, at which point all operations valid for a new sequence EK may be performed on the file. The comment area of a sequence EK may be written to when the file is open for write access.

Opening a sequence EK for writing

Beginning a new sequence EK
A new sequence EK is created by calling ekopn_c:

   ekopn_c ( <fname>, <ifname>, <ncomch>, &handle );    {Open new EK}

With the arguments:
Opening an existing sequence EK for writing
An existing sequence EK is opened for write access by calling ekopw_c:

   ekopw_c ( <fname>, &handle );                   {Open EK for writing}

The arguments of ekopw_c have the same meanings as the corresponding arguments of ekopn_c.
Record-oriented writing allows records to be added to a segment one at a time; this approach simplifies creating records from a streaming data source. Records may be added to a segment in arbitrary order. Also, it is possible to build multiple segments simultaneously using the record-oriented writers. The significant limitation of the record-oriented approach is that it is slow, particularly if the segment being written contains indexed columns.

When execution speed is critical, it may be advisable to use the ``fast writers.'' These routines can create a segment as much as 100 times faster than their record-oriented counterparts. However, the fast writers require all of the segment's data to be staged before the segment is written.

Below, we discuss aspects of segment creation common to both writing approaches. See the sections ``Using the record-oriented sequence EK writers'' and ``Using the fast writers'' below for specifics on how to implement either approach.

Specifying segment attributes
Table and column names
Table and column names are composed of characters from the set

   { A-Z, a-z, 0-9, $, _ }

Case is not significant. Table names must not exceed SPICE_EK_TNAMSZ (see header file SpiceEK.h) characters in length. Column names must not exceed SPICE_EK_CNAMSZ (see header file SpiceEK.h) characters in length.

Column declarations
Column declarations are strings that contain ``keyword=value'' assignments that define the attributes of the columns to which they apply. The column attributes defined by a column declaration are:
   DATATYPE
   SIZE
   <is the column indexed?>
   <does the column allow null values?>

When a segment is started using ekbseg_c or ekifld_c, an array of column declarations must be supplied as an input. The form of a column declaration string is a list of ``keyword=value'' assignments, delimited by commas, as shown:

   "DATATYPE = <type>, "
   "SIZE     = <size>, "
   "INDEXED  = <boolean>, "
   "NULLS_OK = <boolean>"

For example, an indexed, scalar, integer column that does not allow null values would have the declaration

   "DATATYPE = INTEGER, "
   "SIZE     = 1, "
   "INDEXED  = TRUE, "
   "NULLS_OK = FALSE"

Commas are required to separate the assignments within declarations; white space is optional; case is not significant. The order in which the attribute keywords are listed in the declaration is not significant.

Data type specifications are required for each column.

Each column entry is effectively an array, each element of which has the declared data type. The SIZE keyword indicates how many elements are in each entry of the column. Note that only scalar-valued columns (those for which SIZE = 1) may be referenced in query constraints. A size assignment has the syntax

   SIZE = <integer>

or

   SIZE = VARIABLE

The size value defaults to 1 if omitted.

The DATATYPE keyword defines the data type of column entries. The DATATYPE assignment syntax has any of the forms

   DATATYPE = CHARACTER*(<length>)
   DATATYPE = CHARACTER*(*)
   DATATYPE = DOUBLE PRECISION
   DATATYPE = INTEGER
   DATATYPE = TIME

As the datatype declaration syntax suggests, character strings may have fixed or variable length. For example, a fixed-length string of 80 characters is indicated by the declaration

   DATATYPE = CHARACTER*(80)

while a variable-length string is indicated by an asterisk:

   DATATYPE = CHARACTER*(*)

Variable-length strings have a practical length limit of 1024 characters: the sequence EK writers allow one to write a scalar string of any length, but the sequence EK query functions will truncate a string whose length exceeds this limit. Variable-length strings are allowed only in scalar character columns.

Optionally, scalar-valued columns may be indexed. Indexing can greatly speed up the processing of some queries, because indexing allows data to be found by a binary, rather than linear, search. Each index increases the size of the sequence EK file by an amount greater than or equal to the space occupied by two integers times the number of rows in the affected table, so for potentially large sequence EK files, the issue of whether or not to index a column deserves some consideration. To create an index for a column, use the assignment

   INDEXED = TRUE

By default, columns are not indexed.

Optionally, any column can allow null values; this is indicated by the assignment

   NULLS_OK = TRUE

in the column declaration. By default, null values are not allowed in column entries.

Consistency of schemas
The sequence EK writer functions don't diagnose segment schema inconsistencies (to do so would be cumbersome at best, since inconsistencies could occur in separate files). However, loading segments having identical table names but inconsistent column declarations into the sequence EK query system will result in an error diagnosis.

Using the record-oriented sequence EK writers

Beginning a segment
A new segment is started by calling ekbseg_c:

   ekbseg_c ( <handle>, <tabnam>, <ncols>, <cnmlen>,
              <cnames>, <declen>, <decls>, &segno );     {Begin segment}

The inputs to ekbseg_c are described below:
Adding records to a segment
A new segment is prepared for record-oriented writing using a call to ekbseg_c (see ``Beginning a segment'' above). Next, records are added to the segment. Records may be appended or may be inserted into the segment. To append a new, empty record to a segment, use ekappr_c:
   ekappr_c ( <handle>, <segno>, &recno );              {Append record}

With the arguments:

To insert a new, empty record into a segment at a specified ordinal position, use ekinsr_c:

   ekinsr_c ( <handle>, <segno>, <recno> );             {Insert record}

The arguments are the same as those of ekappr_c, except that here `recno' is an input. `recno' is the desired ordinal position of the new record: `recno' must be in the range

   0 : nrec

where `nrec' is the number of records already in the segment.

Each new record starts out empty. The column entries in the record are filled in one-by-one using calls to the ``add column entry'' functions ekacec_c, ekaced_c, and ekacei_c. The column entries of a record may be written in any order.

Character column entries are written by ekacec_c:
   ekacec_c ( <handle>, <segno>, <recno>, <column>,
              <nvals>, <vallen>, <cvals>, <isnull> );    {Add character
                                                          column entry}

With the arguments:

Double precision column entries are written by ekaced_c:

   ekaced_c ( <handle>, <segno>, <recno>, <column>,
              <nvals>, <dvals>, <isnull> );              {Add d.p.
                                                          column entry}

The arguments have the same meanings as the corresponding arguments of ekacec_c, except that `dvals' represents a double precision array.

Values of type TIME are also added using ekaced_c. When a column contains TIME values (as indicated by its declared data type), the values are stored as ephemeris seconds past J2000 TDB. When starting with UTC or SCLK time values, the CSPICE conversion routines str2et_c or scs2e_c may be used to obtain equivalent double precision TDB values. See the TIME.REQ or SCLK.REQ Required Reading for details.

Integer column entries are written by ekacei_c:
   ekacei_c ( <handle>, <segno>, <recno>, <column>,
              <nvals>, <ivals>, <isnull> );              {Add integer
                                                          column entry}

The arguments have the same meanings as the corresponding arguments of ekacec_c, except that `ivals' represents an integer array.

A record must have all of its column entries written in order to be valid: column entries do not have default values. No action is required to ``finish'' a segment created by the record-oriented writers, although ekcls_c must be called to close the file when all segments have been written.
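The sketch below creates a small sequence EK using the record-oriented writers. All file, table, and column names are hypothetical; the declarations follow the syntax shown under ``Column declarations'' above, and str2et_c is used to obtain the TDB value stored in the TIME column, so a leapseconds kernel must be loaded first.

   #include "SpiceUsr.h"

   #define DECLEN   201
   #define NAMLEN   33
   #define NCOLS    3

   int main( void )
   {
      /*
      Hypothetical table schema:  an ID, a time tag, and a note.
      */
      SpiceChar      cnames [NCOLS][NAMLEN] = { "EVENT_ID",
                                                "EVENT_TIME",
                                                "EVENT_NOTE" };

      SpiceChar      decls  [NCOLS][DECLEN] =
      {
         "DATATYPE = INTEGER,        INDEXED = TRUE",
         "DATATYPE = TIME,           INDEXED = TRUE",
         "DATATYPE = CHARACTER*(80), NULLS_OK = TRUE"
      };

      SpiceChar      note   [1][81] = { "Turn camera power on" };

      SpiceDouble    et;
      SpiceInt       eventid = 1001;
      SpiceInt       handle;
      SpiceInt       recno;
      SpiceInt       segno;

      /*
      Leapseconds kernel (hypothetical name) needed by str2et_c.
      */
      furnsh_c ( "naif0012.tls" );

      /*
      Create the file and begin a segment for the table EVENTS.
      */
      ekopn_c  ( "events.bes", "Example EK", 0, &handle );
      ekbseg_c ( handle, "EVENTS", NCOLS, NAMLEN, cnames,
                 DECLEN, decls, &segno );

      /*
      Append one record and fill in its column entries.
      */
      ekappr_c ( handle, segno, &recno );

      ekacei_c ( handle, segno, recno, "EVENT_ID",   1, &eventid,
                 SPICEFALSE );

      str2et_c ( "1995 JAN 1 12:38:09.7", &et );
      ekaced_c ( handle, segno, recno, "EVENT_TIME", 1, &et,
                 SPICEFALSE );

      ekacec_c ( handle, segno, recno, "EVENT_NOTE", 1, 81, note,
                 SPICEFALSE );

      ekcls_c  ( handle );

      return ( 0 );
   }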
Using the fast writers

The fast write approach involves creating one new segment at a time. Segments are constructed one column at a time: each column is added to a segment in one shot.

In order to add a segment to a sequence EK, the sequence EK must be open for write access. New sequence EK files are opened by calling ekopn_c; existing sequence EKs are opened for writing by calling ekopw_c.

The sequence of operations required to create a segment using the fast write functions is:

   1. Initiate the fast write using ekifld_c.

   2. Add each column of the segment using ekaclc_c, ekacld_c, or
      ekacli_c, as appropriate for the column's data type.

   3. Complete the fast write using ekffld_c.
Initiating a fast write
A fast write is initiated by calling ekifld_c:

   ekifld_c ( <handle>, <tabnam>, <ncols>,  <nrows>,
              <cnmlen>, <cnames>, <declen>, <decls>,
              &segno,   rcptrs );                        {Initiate fast write}

The inputs to ekifld_c are described below.
Adding columns to the segment
To add a character column to a segment, call ekaclc_c:
   ekaclc_c ( <handle>, <segno>,  <column>, <vallen>,
              <cvals>,  <entszs>, <nlflgs>, <rcptrs>,
              wkindx );                                  {Add character column}

The inputs to ekaclc_c are described below.
To add a double precision column to a segment, call ekacld_c:

   ekacld_c ( <handle>, <segno>,  <column>, <dvals>,
              <entszs>, <nlflgs>, <rcptrs>, wkindx );    {Add d.p. column}

The arguments have the same meanings as the corresponding arguments of ekaclc_c, except that `dvals' represents a double precision array.

Values of type TIME are also added using ekacld_c. When a column contains TIME values (as indicated by its declared data type), the values are stored as ephemeris seconds past J2000 TDB. When starting with UTC or SCLK time values, the CSPICE conversion routines str2et_c or scs2e_c may be used to obtain equivalent double precision TDB values. See the TIME.REQ or SCLK.REQ Required Reading for details.

To add an integer column to a segment, call ekacli_c:
   ekacli_c ( <handle>, <segno>,  <column>, <ivals>,
              <entszs>, <nlflgs>, <rcptrs>, wkindx );    {Add integer column}

The arguments have the same meanings as the corresponding arguments of ekaclc_c, except that `ivals' represents an integer array.

Completing a fast write
A fast write is completed by calling ekffld_c:

   ekffld_c ( <handle>, <segno>, <rcptrs> );             {Finish fast write}

The meanings of the arguments of ekffld_c are identical to those of the same names belonging to ekifld_c. Calling ekffld_c is an essential step; the segment will not be structurally valid until this call has been made.

Once the fast write operation has been completed, the segment may be modified using the record-oriented writers.
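A condensed sketch of a fast write follows. The file, table, and column names are hypothetical; the segment has two scalar columns and three rows, so each entry size is 1 and every null flag is SPICEFALSE. A work space array of length NROWS is passed for the wkindx argument in both column-addition calls.

   #include "SpiceUsr.h"

   #define DECLEN   201
   #define NAMLEN   33
   #define NCOLS    2
   #define NROWS    3
   #define STRLEN   81

   int main( void )
   {
      /*
      Hypothetical schema:  an integer ID column and a string column.
      */
      SpiceChar      cnames [NCOLS][NAMLEN] = { "EVENT_ID", "EVENT_NOTE" };

      SpiceChar      decls  [NCOLS][DECLEN] =
      {
         "DATATYPE = INTEGER,        INDEXED = TRUE",
         "DATATYPE = CHARACTER*(80), INDEXED = FALSE"
      };

      SpiceChar      notes  [NROWS][STRLEN] = { "Turn camera power on",
                                                "Select CLEAR filter",
                                                "Shutter photo"        };

      SpiceBoolean   nlflgs [NROWS] = { SPICEFALSE, SPICEFALSE, SPICEFALSE };
      SpiceInt       entszs [NROWS] = { 1, 1, 1 };
      SpiceInt       ids    [NROWS] = { 1001, 1002, 1003 };
      SpiceInt       handle;
      SpiceInt       rcptrs [NROWS];
      SpiceInt       segno;
      SpiceInt       wkindx [NROWS];

      ekopn_c  ( "events.bes", "Example EK", 0, &handle );

      /*
      Initiate the fast write:  the table name and row count are
      fixed at this point.
      */
      ekifld_c ( handle, "EVENTS", NCOLS, NROWS, NAMLEN, cnames,
                 DECLEN, decls, &segno, rcptrs );

      /*
      Add each column in a single call.
      */
      ekacli_c ( handle, segno, "EVENT_ID",   ids,
                 entszs, nlflgs, rcptrs, wkindx );

      ekaclc_c ( handle, segno, "EVENT_NOTE", STRLEN, notes,
                 entszs, nlflgs, rcptrs, wkindx );

      /*
      Complete the segment; it is not valid until this call is made.
      */
      ekffld_c ( handle, segno, rcptrs );

      ekcls_c  ( handle );

      return ( 0 );
   }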
Restrictions

Record-oriented append, insert, and delete operations are not supported for a segment in the process of being constructed by the fast writers. Updating or reading column entries in the middle of a fast write is also not supported. Fast write operations may not be interleaved with query-and-fetch operations: an application may not start a fast write, issue a query, then continue the fast write, or vice versa.

Only one segment can be created at a time using the fast writers. One cannot extend an existing segment using the fast write functions. However, a segment created using the fast writers, once completed using a call to ekffld_c, may be modified using the record-oriented write, update, or delete functions.

Updating an existing sequence EK
Adding records is done using the record-oriented writers, which are described above. Column entries may be updated using the functions ekucec_c, ekuced_c, and ekucei_c, which operate on, respectively, character, double precision (or time), and integer column entries. The argument lists of these functions are identical to the record-oriented column entry addition functions of the corresponding data types. When updating a variable-size column entry, it is permissible to replace the original entry with one having a different size. Variable-length strings also can be replaced with strings of different lengths. For columns that allow null values, null entries can be updated with non-null values and vice versa. Records are deleted using a call to ekdelr_c:
   ekdelr_c ( <handle>, <segno>, <recno> );              {Delete record}

With the arguments:
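A brief sketch of record updating and deletion is shown below. The file name, segment and record indices, and column name are hypothetical, and the records being modified are assumed to already exist; the file must first be opened for write access with ekopw_c.

   #include "SpiceUsr.h"

   int main( void )
   {
      SpiceInt   handle;
      SpiceInt   newid  = 2001;

      /*
      Open a hypothetical sequence EK for write access.
      */
      ekopw_c  ( "events.bes", &handle );

      /*
      Replace the EVENT_ID entry of record 2 in segment 0.
      */
      ekucei_c ( handle, 0, 2, "EVENT_ID", 1, &newid, SPICEFALSE );

      /*
      Delete record 0 of segment 0.
      */
      ekdelr_c ( handle, 0, 0 );

      ekcls_c  ( handle );

      return ( 0 );
   }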
Closing a sequence EK
A sequence EK that has been opened for write access is closed by calling ekcls_c. The record-oriented read routines may be used to read data from a sequence EK before it has been closed. However, a sequence EK open for write access may not be loaded by furnsh_c and hence is not accessible by the sequence EK query and fetch routines.

Appendix A --- Summary of E-kernel Functions

Summary of mnemonics
Many of the lower-level CSPICE functions have SPICELIB counterparts implemented in Fortran as entry points of another function. The following is a complete list of mnemonics and translations, in alphabetical order.
Header files:

   SpiceEK.h

CSPICE wrappers:

   ekacec_c  ( EK, add character data to column )
   ekaced_c  ( EK, add d.p. data to column )
   ekacei_c  ( EK, add integer data to column )
   ekaclc_c  ( EK, add character column to segment )
   ekacld_c  ( EK, add double precision column to segment )
   ekacli_c  ( EK, add integer column to segment )
   ekappr_c  ( EK, append record onto segment )
   ekbseg_c  ( EK, start new segment )
   ekccnt_c  ( EK, column count )
   ekcii_c   ( EK, column info by index )
   ekcls_c   ( EK, close file )
   ekdelr_c  ( EK, delete record from segment )
   ekffld_c  ( EK, finish fast write )
   ekfind_c  ( EK, find data )
   ekgc_c    ( EK, get event data, character )
   ekgd_c    ( EK, get event data, double precision )
   ekgi_c    ( EK, get event data, integer )
   ekifld_c  ( EK, initialize segment for fast write )
   ekinsr_c  ( EK, insert record into segment )
   eklef_c   ( EK, load event file )
   eknelt_c  ( EK, get number of elements in column entry )
   eknseg_c  ( EK, number of segments in file )
   ekntab_c  ( EK, return number of loaded tables )
   ekopn_c   ( EK, open new file )
   ekopr_c   ( EK, open file for reading )
   ekops_c   ( EK, open scratch file )
   ekopw_c   ( EK, open file for writing )
   ekpsel_c  ( EK, parse SELECT clause )
   ekrcec_c  ( EK, read column entry element, character )
   ekrced_c  ( EK, read column entry element, d.p. )
   ekrcei_c  ( EK, read column entry element, integer )
   ekssum_c  ( EK, return segment summary )
   ektnam_c  ( EK, return name of loaded table )
   ekucec_c  ( EK, update character column entry )
   ekuced_c  ( EK, update d.p. column entry )
   ekucei_c  ( EK, update integer column entry )
   ekuef_c   ( EK, unload event file )

Low-level routines converted by f2c:

   eksrch_   ( EK, search for events )

Summary of Calling Sequences
Load files for query access, unload files:
   furnsh_c ( fname )
   unload_c ( fname )

Open files for record-oriented reading or writing, close files:

   dafllc_  ( &handle )
   ekcls_c  ( handle )
   ekopn_c  ( fname, ifname, ncomch, &handle )
   ekopr_c  ( fname, &handle )
   ekops_c  ( &handle )
   ekopw_c  ( fname, &handle )

Obtain summaries of sequence EK segments:

   eknseg_c ( handle )
   ekssum_c ( handle, segno, &segsum )

Obtain summaries of loaded tables:

   ekccnt_c ( table, &ccount )
   ekcii_c  ( table, cindex, lenout, column, &attdsc )
   ekntab_c ( &n )
   ektnam_c ( n, lenout, table )

Query and fetch:

   ekfind_c ( query, lenout, &nmrows, &error, errmsg )
   ekgc_c   ( selidx, row, elment, lenout, cdata, &null, &found )
   ekgd_c   ( selidx, row, elment, ddata, &null, &found )
   ekgi_c   ( selidx, row, elment, idata, &null, &found )
   eknelt_c ( selidx, row )
   ekpsel_c ( query, msglen, tablen, collen, &n, xbegs, xends,
              xtypes, xclass, tabs, cols, &error, errmsg )

Record-oriented read:

   ekrcec_c ( handle, segno, recno, column, lenout, &nvals, cvals,
              &isnull )
   ekrced_c ( handle, segno, recno, column, &nvals, dvals, &isnull )
   ekrcei_c ( handle, segno, recno, column, &nvals, ivals, &isnull )

Fast write:

   ekifld_c ( handle, tabnam, ncols, nrows, cnmlen, cnames, declen,
              decls, &segno, rcptrs )
   ekaclc_c ( handle, segno, column, vallen, cvals, entszs, nlflgs,
              rcptrs, wkindx )
   ekacld_c ( handle, segno, column, dvals, entszs, nlflgs, rcptrs,
              wkindx )
   ekacli_c ( handle, segno, column, ivals, entszs, nlflgs, rcptrs,
              wkindx )
   ekffld_c ( handle, segno, rcptrs )

Begin segment for record-oriented write:

   ekbseg_c ( handle, tabnam, ncols, cnmlen, cnames, declen, decls,
              &segno )

Insert, append, or delete records:

   ekappr_c ( handle, segno, &recno )
   ekdelr_c ( handle, segno, recno )
   ekinsr_c ( handle, segno, recno )

Record-oriented write and update:

   ekacec_c ( handle, segno, recno, column, nvals, vallen, cvals, isnull )
   ekaced_c ( handle, segno, recno, column, nvals, dvals, isnull )
   ekacei_c ( handle, segno, recno, column, nvals, ivals, isnull )
   ekucec_c ( handle, segno, recno, column, nvals, vallen, cvals, isnull )
   ekuced_c ( handle, segno, recno, column, nvals, dvals, isnull )
   ekucei_c ( handle, segno, recno, column, nvals, ivals, isnull )

Revisions

March 23, 2016 NJB (JPL)
February 24, 2010 EDW (JPL)
April 1, 2009
Feb. 06, 2002
Jan. 15, 2002