Introduction
GNU PSPP is a tool for statistical analysis of sampled data. It reads the data, analyzes it according to the commands provided, and writes the results to a listing file, to the standard output, or to a window of the graphical display.
The language accepted by PSPP is similar to those accepted by SPSS statistical products. The details of PSPP's language are given later in this manual.
PSPP produces tables and charts as output, which it can write in several formats; currently ASCII, PostScript, PDF, HTML, DocBook, and TeX are supported.
PSPP is a work in progress. The authors hope eventually to support all the features of the products that PSPP replaces. The authors welcome questions, comments, donations, and code submissions.
License
PSPP is not in the public domain. It is copyrighted and there are restrictions on its distribution, but these restrictions are designed to permit everything that a good cooperating citizen would want to do. What is not allowed is to try to prevent others from further sharing any version of this program that they might get from you.
Specifically, we want to make sure that you have the right to give away copies of PSPP, that you receive source code or else can get it if you want it, that you can change these programs or use pieces of them in new free programs, and that you know you can do these things.
To make sure that everyone has such rights, we have to forbid you to deprive anyone else of these rights. For example, if you distribute copies of PSPP, you must give the recipients all the rights that you have. You must make sure that they, too, receive or can get the source code. And you must tell them their rights.
Also, for our own protection, we must make certain that everyone finds out that there is no warranty for PSPP. If these programs are modified by someone else and passed on, we want their recipients to know that what they have is not what we distributed, so that any problems introduced by others will not reflect on our reputation.
Finally, any free program is threatened constantly by software patents. We wish to avoid the danger that redistributors of a free program will individually obtain patent licenses, in effect making the program proprietary. To prevent this, we have made it clear that any patent must be licensed for everyone's free use or not licensed at all.
PSPP is licensed under the GNU General Public License, version 3 or later. This manual is licensed under the GNU Free Documentation License, version 1.3 or later; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts.
Running pspp
This chapter describes how to run pspp, PSPP's main command-line
user interface. The pspp program has a number of commands, each of
which is documented in its own section.
To see a list of commands, run pspp --help. For help with a
particular command, run pspp <command> --help.
Converting file formats with pspp convert
The pspp convert command reads SPSS data and viewer files and writes
them out in other formats. The basic syntax is:
pspp convert <INPUT> [OUTPUT]
which reads an input file from <INPUT> and writes a copy of it to
[OUTPUT]. If [OUTPUT] is omitted, output is written to the
terminal.
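For example, the following commands (with hypothetical file names) convert a system file to CSV, first writing to a file and then to the terminal:

```
pspp convert data.sav data.csv
pspp convert data.sav
```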
The following sections describe how pspp convert works with
different kinds of files.
Converting .sav, .por, and .sys Data Files
pspp convert can convert SPSS system files (.sav), SPSS portable
files (.por), and SPSS/PC+ system files (.sys) into different
formats.
If an output file is named, then pspp convert tries to guess the
output format based on its extension:
- .csv or .txt
  Comma-separated values. Each value is formatted according to its variable's print format. The first line in the file contains variable names.
- .sav or .sys
  SPSS system file.
Without an output file name, the default output format is CSV. Use
-O <output_format> to override the default or to specify the format
for unrecognized extensions.
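For example, to write CSV output to a file whose extension would not otherwise be recognized, the format can be forced (hypothetical file names):

```
pspp convert data.sav output.dat -O csv
```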
Converting .spv Viewer Files
pspp convert can convert SPSS viewer files (.spv files) into
several other formats.
Options
pspp convert accepts the following general options:
- -O csv
  -O sys
  Sets the output format.
- -e <ENCODING>
  --encoding=<ENCODING>
  Sets the character encoding used to read text strings in the input file. This is not needed for new enough SPSS data files, but older data files do not identify their encoding, and PSPP cannot always guess correctly. <ENCODING> must be one of the labels for encodings in the Encoding Standard. PSPP does not support UTF-16 or EBCDIC encodings in data files. pspp show encodings can help figure out the correct encoding for a system file.
- -c <MAX_CASES>
  --cases=<MAX_CASES>
  By default, all cases in the input are copied to the output. Specify this option to limit the number of copied cases.
- -p <PASSWORD>
  --password=<PASSWORD>
  Specifies the password for reading an encrypted SPSS system file. pspp convert reads, but does not write, encrypted system files. ⚠️ The password (and other command-line options) may be visible to other users on multiuser systems.
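The general options can be combined. The following sketch (hypothetical file names; windows-1252 stands in for whatever encoding the file actually uses) reads an old system file with an explicit encoding and copies only the first 100 cases:

```
pspp convert legacy.sav legacy.csv -e windows-1252 -c 100
```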
System File Output Options
These options only affect output to SPSS system files.
- --unicode
  Writes system file output with Unicode (UTF-8) encoding. If the input was not already in Unicode, then this causes string variables to be tripled in width.
- --compression <COMPRESSION>
  Writes data in the system file with the specified form of compression:
  - simple: A simple form of compression that saves space when writing small integer values and string segments that are all spaces. All versions of SPSS support simple compression.
  - zlib: More advanced compression that saves space in more general cases. Only SPSS 21 and later can read files written with zlib compression.
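For instance, to rewrite a portable file as a zlib-compressed system file (hypothetical file names; recall that only SPSS 21 and later can read zlib output):

```
pspp convert old.por new.sav --compression zlib
```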
CSV Output Options
These options only affect output to CSV files.
- --no-var-names
  By default, pspp convert writes the variable names as the first line of output. With this option, pspp convert omits this line.
- --recode
  By default, pspp convert writes user-missing values to CSV output files as their regular values. With this option, pspp convert recodes them to system-missing values (which are written as a single space).
- --labels
  By default, pspp convert writes variables' values to CSV output files. With this option, pspp convert writes value labels instead.
- --print-formats
  By default, pspp convert writes numeric variables as plain numbers. This option makes pspp convert honor variables' print formats.
- --decimal=DECIMAL
  This option sets the character used as a decimal point in output. The default is a period (.). Only ASCII characters may be used.
- --delimiter=DELIMITER
  This option sets the character used to separate fields in output. The default is a comma (,), unless the decimal point is a comma, in which case a semicolon (;) is used. Only ASCII characters may be used.
- --qualifier=QUALIFIER
  This option sets the character used to quote fields that contain the delimiter. The default is a double quote ("). Only ASCII characters may be used.
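As a sketch of what these options produce, suppose pspp convert was run with --decimal=, so that the delimiter defaults to a semicolon. A downstream consumer could then parse the output with Python's csv module (the sample data below is hypothetical, not real pspp output):

```python
import csv
import io

# Hypothetical output of: pspp convert data.sav data.csv --decimal=,
# With a comma decimal point, the field delimiter defaults to ';'.
sample = 'forename;height\nAhmed;188,00\nBertram;167,00\n'

reader = csv.reader(io.StringIO(sample), delimiter=';')
header = next(reader)  # the first line holds variable names
# Convert the comma decimal separator back to a period for float().
rows = [(name, float(value.replace(',', '.'))) for name, value in reader]

print(header)  # ['forename', 'height']
print(rows)    # [('Ahmed', 188.0), ('Bertram', 167.0)]
```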
Inspecting System (.sav) Files with pspp show
The pspp show command reads an SPSS "system file" or data file,
which usually has a .sav extension, and produces a report. The
basic syntax is:
pspp show <MODE> <INPUT> [OUTPUT]
where <MODE> is a mode of operation (see below), <INPUT> is the
SPSS data file to read, and [OUTPUT] is the output file name. If
[OUTPUT] is omitted, output is written to the terminal.
The following <MODE>s are available:
- identify: Outputs a line of text to stdout that identifies the basic kind of system file.
- dictionary: Outputs the file dictionary in detail, including variables, value labels, attributes, documents, and so on. With --data, also outputs cases from the system file. This can be useful as an alternative to PSPP syntax commands such as SYSFILE INFO or DISPLAY DICTIONARY. pspp convert is a better way to convert a system file to another format.
- encodings: Analyzes text data in the system file dictionary and (with --data) cases, and produces a report that can help the user figure out what character encoding the file uses. This is useful for old system files that don't identify their own encodings.
- raw: Outputs the raw structure of the system file dictionary and (with --data) cases. This command does not assume a particular character encoding for the system file, which means that some of the dictionary can't be printed in detail, only in summary. This is useful for debugging how PSPP reads system files and for investigating cases of system file corruption, especially when the character encoding is unknown or uncertain. This command is most useful with some knowledge of the system file format.
- decoded: Outputs the raw structure of the system file dictionary and (with --data) cases. Versus raw, this command does decode the dictionary and data with a particular character encoding, which allows it to fully interpret system file records. This is useful for debugging how PSPP reads system files and for investigating cases of system file corruption. This command is most useful with some knowledge of the system file format.
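For example (hypothetical file name), to print the dictionary of a system file along with its first 10 cases (the case limit is written here as --data=10, following the option syntax shown below):

```
pspp show dictionary data.sav --data=10
```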
Options
The following options affect how pspp show reads <INPUT>:
- --encoding <ENCODING>
  For modes decoded and dictionary, this reads the input file using the specified <ENCODING>, overriding the default. <ENCODING> must be one of the labels for encodings in the Encoding Standard. PSPP does not support UTF-16 or EBCDIC encodings in data files. pspp show encodings can help figure out the correct encoding for a system file.
- --data [<MAX_CASES>]
  For modes raw, dictionary, and encodings, this instructs pspp show to read cases from the file. If <MAX_CASES> is given, then that sets a limit on the number of cases to read. Without this option, PSPP will not read any cases.
The following options affect how pspp show writes its output:
- -f <FORMAT>
  --format <FORMAT>
  Specifies the format to use for output. <FORMAT> may be one of the following:
  - json: JSON using indentation and spaces for easy human consumption.
  - ndjson: Newline-delimited JSON.
  - output: Pivot tables with the PSPP output engine. Use -o for additional configuration.
  - discard: Do not produce any output.
  When these options are not used, the default output format is chosen based on the [OUTPUT] extension. If [OUTPUT] is not specified, then output defaults to JSON.
- -o <OUTPUT_OPTIONS>
  Adds <OUTPUT_OPTIONS> to the output engine configuration.
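Combining these, one could produce newline-delimited JSON on the terminal, or indented JSON in a file chosen by its extension (hypothetical file names):

```
pspp show dictionary data.sav -f ndjson
pspp show dictionary data.sav report.json
```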
Inspecting Portable (.por) Files with pspp show-por
The pspp show-por command reads an SPSS "portable file",
which usually has a .por extension, and produces a report. The
basic syntax is:
pspp show-por <MODE> <INPUT> [OUTPUT]
where <MODE> is a mode of operation (see below), <INPUT> is the
SPSS portable file to read, and [OUTPUT] is the output file name.
If [OUTPUT] is omitted, output is written to the terminal.
The portable file format is mostly obsolete. The "system file" or
.sav format should be used for writing new data files. Use pspp show to inspect .sav files.
The following <MODE>s are available:
- dictionary: Outputs the file dictionary in detail, including variables, value labels, documents, and so on. With --data, also outputs cases from the portable file. This can be useful as an alternative to PSPP syntax commands such as DISPLAY DICTIONARY. pspp convert is a better way to convert a portable file to another format.
- metadata: Outputs portable file metadata not included in the dictionary:
  - The creation date and time declared inside the file (not in the file system).
  - The name of the product and subproduct that wrote the file, if present.
  - The author of the file, if present. This is usually the name of the organization that licensed the product that wrote the file.
  - The character set translation table embedded in the file, as an array with 256 elements, one for each possible value of a byte in the file. Each array element gives the byte value as a 2-digit hexadecimal number paired with the translation table's entry for that byte. Since the file can technically be in any encoding (although the corpus universally uses extended ASCII), the entry is given as a character interpreted in two character sets: windows-1252 and code page 437, in that order. (If the two character sets agree on the code point, then it is only given once.)
    For example, consider a portable file's translation table entry at offset 0x9e, which in the portable character set is ±. Suppose it has value 0xb1, which is ± in windows-1252 and ▒ in code page 437. Then that array element would be ["9e", "±", "▒"].
  This command is most useful with some knowledge of the portable file format.
- histogram: Reports on the usage of characters in the portable file. Produces output in the form of an array with an element for each possible value of a byte in the file. Each array element gives the byte value, the byte's character, and the number of times that the byte appears in the file. A given byte is omitted from the table if it does not appear in the file at all, or if the translation table leaves it unmapped. It is also omitted if the byte's character is the ISO-8859-1 encoding of the byte (for example, if byte 0x41 represents A, which is A in ISO-8859-1). This command is most useful with some knowledge of the portable file format.
Options
The following options affect how pspp show-por reads <INPUT>:
- --data [<MAX_CASES>]
  For mode dictionary, this instructs pspp show-por to read cases from the file. If <MAX_CASES> is given, then that sets a limit on the number of cases to read. Without this option, PSPP will not read any cases.
The following options affect how pspp show-por writes its output:
- -f <FORMAT>
  --format <FORMAT>
  Specifies the format to use for output. <FORMAT> may be one of the following:
  - json: JSON using indentation and spaces for easy human consumption.
  - ndjson: Newline-delimited JSON.
  - output: Pivot tables with the PSPP output engine. Use -o for additional configuration.
  - discard: Do not produce any output.
  When these options are not used, the default output format is chosen based on the [OUTPUT] extension. If [OUTPUT] is not specified, then output defaults to JSON.
- -o <OUTPUT_OPTIONS>
  Adds <OUTPUT_OPTIONS> to the output engine configuration.
Inspecting SPSS/PC+ Files
The pspp show-pc command reads an SPSS/PC+ system file, which
usually has a .sys extension, and produces a report.
SPSS/PC+ has been obsolete since the 1990s, and its file format is also obsolete and rarely encountered. Use pspp show to inspect modern SPSS system files.
The basic syntax is:
pspp show-pc <MODE> <INPUT> [OUTPUT]
where <MODE> is a mode of operation (see below), <INPUT> is the
SPSS/PC+ file to read, and [OUTPUT] is the output file name. If
[OUTPUT] is omitted, output is written to the terminal.
The following <MODE>s are available:
- dictionary: Outputs the file dictionary in detail, including variables, value labels, and so on. With --data, also outputs cases from the system file. This can be useful as an alternative to PSPP syntax commands such as DISPLAY DICTIONARY. pspp convert is a better way to convert an SPSS/PC+ file to another format.
- metadata: Outputs metadata not included in the dictionary:
  - The creation date and time declared inside the file (not in the file system).
  - The name of the product family and product that wrote the file, if present.
  - The file name embedded inside the file, if one is present.
  - Whether the file is bytecode-compressed.
  - The number of cases in the file.
Options
The following options affect how pspp show-pc reads <INPUT>:
- --data [<MAX_CASES>]
  For mode dictionary, this instructs pspp show-pc to read cases from the file. If <MAX_CASES> is given, then that sets a limit on the number of cases to read. Without this option, PSPP will not read any cases.
The following options affect how pspp show-pc writes its output:
- -f <FORMAT>
  --format <FORMAT>
  Specifies the format to use for output. <FORMAT> may be one of the following:
  - json: JSON using indentation and spaces for easy human consumption.
  - ndjson: Newline-delimited JSON.
  - output: Pivot tables with the PSPP output engine. Use -o for additional configuration.
  - discard: Do not produce any output.
  When these options are not used, the default output format is chosen based on the [OUTPUT] extension. If [OUTPUT] is not specified, then output defaults to JSON.
- -o <OUTPUT_OPTIONS>
  Adds <OUTPUT_OPTIONS> to the output engine configuration.
Inspecting SPSS Viewer Files
The pspp show-spv command reads SPSS Viewer (SPV) files, which
usually have a .spv extension, and produces a report. The basic
syntax is:
pspp show-spv <MODE> <INPUT> [OUTPUT]
where <MODE> is a mode of operation (see below), <INPUT> is the
SPV file to read, and [OUTPUT] is the output file name. If
[OUTPUT] is omitted, output is written to the terminal.
The following <MODE>s are accepted:
- dir: Outputs a table of contents for the SPV file, listing every selected object, which by default is every object except for hidden ones. The following additional option for dir is intended mainly for use by PSPP developers:
  - --member-names: Also show the names of the ZIP file members associated with each object.
- get-table-look: Extracts the TableLook from the first table in the selected objects and outputs it in TableLook XML format. The output file should have an .stt extension. Use - for <INPUT> to instead write the default TableLook.
- convert-table-look: Reads an .stt or .tlo TableLook file as <INPUT> and outputs it in TableLook XML format. The output file should have an .stt extension. This is useful for converting a TableLook .tlo file from SPSS 15 or earlier into the newer .stt format.
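For instance (hypothetical file names), to upgrade an old TableLook and to dump the default TableLook:

```
pspp show-spv convert-table-look old.tlo new.stt
pspp show-spv get-table-look - default.stt
```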
Input Selection Options
Commands that read an SPV file operate, by default, on all of the objects in the file, except for objects that are not visible in the output viewer window. The user may specify these options to select a subset of the input objects. When multiple options are used, only objects that satisfy all of them are selected:
- --select=[^]CLASS...
  Include only objects of the given CLASS; with leading ^, include only objects not in the class. Use commas to separate multiple classes. The supported classes are:
  charts, headings, logs, models, tables, texts, trees, warnings, outlineheaders, pagetitle, notes, unknown, other
- --commands=[^]COMMAND...
  --subtypes=[^]SUBTYPE...
  --labels=[^]LABEL...
  Include only objects with the specified COMMAND, SUBTYPE, or LABEL. With a leading ^, include only the objects that do not match. Multiple values may be specified separated by commas. An asterisk at the end of a value acts as a wildcard.
  The --commands option matches command identifiers, case insensitively. All of the objects produced by a single command use the same, unique command identifier. Command identifiers are always in English regardless of the language used for output. They often differ from the command name in PSPP syntax. Use the pspp-output program's dir command to print command identifiers in particular output.
  The --subtypes option matches particular tables within a command, case insensitively. Subtypes are not necessarily unique: two commands that produce similar output tables may use the same subtype. Only tables have subtypes, so specifying --subtypes will exclude other kinds of objects. Subtypes are always in English and dir will print them.
  The --labels option matches the labels in table output (that is, the table titles). Labels are affected by the output language, variable names and labels, split file settings, and other factors.
- --nth-commands=N...
  Include only objects from the Nth command that matches --commands (or the Nth command overall if --commands is not specified), where N is 1 for the first command, 2 for the second, and so on.
- --instances=INSTANCE...
  Include the specified INSTANCE of an object that matches the other criteria within a single command. INSTANCE may be a number (1 for the first instance, 2 for the second, and so on) or last for the last instance.
- --show-hidden
  Include hidden output objects in the output. By default, they are excluded.
- --or
  Separates two sets of selection options. Objects selected by either set of options are included in the output.
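As a sketch of how selections compose (the file name and the command identifier "frequencies" are hypothetical), the following lists all tables from the first matching command, plus every warning object:

```
pspp show-spv dir report.spv --select=tables --commands=frequencies --nth-commands=1 --or --select=warnings
```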
The following additional input selection options are intended mainly for use by PSPP developers:
- --errors
  Include only objects that cause an error when read. With the convert command, this is most useful in conjunction with the --force option.
- --members=MEMBER...
  Include only the objects that include a listed ZIP file MEMBER. More than one name may be included, comma-separated. The members in an SPV file may be listed with the dir command by adding the --member-names option or with the zipinfo program included with many operating systems. Error messages that pspp-output prints when it reads SPV files also often include member names.
Decrypting SPSS files with pspp decrypt
The pspp decrypt command reads an encrypted SPSS file and writes out
an equivalent plaintext file. The basic syntax is:
pspp decrypt <INPUT> <OUTPUT>
which reads an encrypted SPSS data, viewer, or syntax file <INPUT>,
decrypts it, and writes the decrypted version to <OUTPUT>.
Other commands, such as pspp convert, can also
read encrypted files directly.
PSPP does not support writing encrypted files, only reading them.
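For example (hypothetical file names), decrypting a data file while letting PSPP prompt interactively for the password:

```
pspp decrypt secret.sav plaintext.sav
```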
⚠️ Warning: The SPSS encryption format is insecure: when the password is unknown, it is much cheaper and faster to decrypt a file encrypted this way than it would be if a well-designed alternative were used.
Options
pspp decrypt accepts the following options:
- -p <PASSWORD>
  --password <PASSWORD>
  Specifies the password for reading the encrypted input file. Without this option, PSPP will interactively prompt for the password. ⚠️ The password (and other command-line options) may be visible to other users on multiuser systems.
Output Driver Configuration
PSPP can write output in several formats. This section documents the supported formats and how they can be configured.
Text Output (.txt and .text)
PSPP can produce plain text output, drawing boxes using ASCII or Unicode line drawing characters.
Plain text output is encoded in UTF-8.
This driver has the following options:
- width: <columns>
  Sets the maximum page width to the specified number of columns. To fit in the given width, output table columns will be word-wrapped or, if necessary, tables will be broken into multiple chunks. The default is no maximum width.
- boxes: unicode
  boxes: ascii
  Sets the style used for boxes in the output. The following shows an example of each style:

  unicode          ascii
  ┌────┬────┐      +----+----+
  │    │    │      |    |    |
  ├────┼────┤      +----+----+
  │    │    │      |    |    |
  └────┴────┘      +----+----+

  Unicode boxes are generally more attractive, but they can be harder to work with in some environments. The default is unicode.
- emphasis: <bool>
  If this is set to true, then the output includes bold and underline emphasis with overstriking. This is supported by only some software, mainly on Unix. The default is false.
PDF Output (.pdf)
This driver has the following options:
- page_setup: <PageSetup>
  Sets the page size, margins, and other parameters. The following sub-options are available:
  - initial_page_number: <number>
    The page number to use for the first page of output. The default is 1.
  - paper: <size>
    Sets the page size. <size> is a quoted string in the form <w>x<h><unit>, e.g. 8.5x11in or 210x297mm, or the name of a standard paper size, such as letter or a4. The default is system- and user-dependent.
  - margins: <trbl>
    margins: [<tb>, <lr>]
    margins: [<t>, <rl>, <b>]
    margins: [<t>, <r>, <b>, <l>]
    Sets the margins. Each value is a quoted string with a length and a unit, e.g. 10mm. The one-value form sets all margins to the same length; the two-value form sets the top and bottom margins separately from left and right; and so on. The default is 0.5in.
  - orientation: portrait
    orientation: landscape
    Controls the output page orientation. The default is portrait.
  - object_spacing: <length>
    Sets the vertical spacing between output objects, such as tables or text. <length> is a quoted string with a length and a unit, e.g. 10mm. The default is 12pt, or 1/6 of an inch.
  - chart_spacing: as_is
    chart_spacing: full_height
    chart_spacing: half_height
    chart_spacing: quarter_height
    Sets the size of charts and graphs in the output. The default, as_is, uses the size specified in the charts themselves. The other possibilities set chart size in terms of the height of the page.
  - header: <heading>
    footer: <heading>
    Sets the text printed at the top (header) or bottom (footer) of each page.
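A hypothetical page_setup configuration combining several of the sub-options above (the exact serialization depends on how the driver configuration is supplied to PSPP; the values here are only illustrative):

```
page_setup:
  paper: "a4"
  margins: ["15mm", "20mm"]
  orientation: landscape
  object_spacing: "10mm"
  chart_spacing: half_height
```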
HTML Output (.htm and .html)
Comma-Separated Value Output (.csv)
JSON Output (.json)
SPSS Viewer Output (.spv)
PSPP Language Tutorial
PSPP is a tool for the statistical analysis of sampled data. You can use it to discover patterns in the data, to explain differences in one subset of data in terms of another subset and to find out whether certain beliefs about the data are justified. This chapter does not attempt to introduce the theory behind the statistical analysis, but it shows how such analysis can be performed using PSPP.
This tutorial assumes that you are using PSPP in its interactive mode
from the command line. However, the example commands can also be
typed into a file and executed in a post-hoc mode by typing pspp FILE-NAME at a shell prompt, where FILE-NAME is the name of the
file containing the commands. Alternatively, from the graphical
interface, you can select File → New → Syntax to open a new syntax
window and use the Run menu when a syntax fragment is ready to be
executed. Whichever method you choose, the syntax is identical.
When using the interactive method, PSPP tells you that it's waiting
for your data with a string like PSPP> or data>. In the examples
of this chapter, whenever you see text like this, it indicates the
prompt displayed by PSPP, not something that you should type.
Throughout this chapter reference is made to a number of sample data files. So that you can try the examples for yourself, you should have received these files along with your copy of PSPP.1
Normally these files are installed in the directory
/usr/local/share/pspp/examples. If however your system administrator or operating system vendor has chosen to install them in a different location, you will have to adjust the examples accordingly.
-
These files contain purely fictitious data. They should not be used for research purposes. ↩
Preparation of Data Files
Before analysis can commence, the data must be loaded into PSPP and arranged such that both PSPP and humans can understand what the data represents. There are two aspects of data:
- The variables: these are the parameters of a quantity which has been measured or estimated in some way. For example, height, weight, and geographic location are all variables.
- The observations (also called "cases") of the variables: each observation represents an instance when the variables were measured or observed.
For example, a data set which has the variables height, weight, and name, might have the observations:
1881 89.2 Ahmed
1192 107.01 Frank
1230 67 Julie
The following sections explain how to define a dataset.
Defining Variables
Variables come in two basic types: "numeric" and "string". Variables such as age, height and satisfaction are numeric, whereas name is a string variable. String variables are best reserved for commentary data to assist the human observer. However they can also be used for nominal or categorical data.
The following example defines two variables, forename and height,
and reads data into them by manual input:
PSPP> data list list /forename (A12) height.
PSPP> begin data.
data> Ahmed 188
data> Bertram 167
data> Catherine 134.231
data> David 109.1
data> end data
PSPP>
There are several things to note about this example.
- The words data list list are an example of the DATA LIST command, which tells PSPP to prepare for reading data. The word list intentionally appears twice. The first occurrence is part of the DATA LIST call, whilst the second tells PSPP that the data is to be read as free-format data with one record per line. Usually this manual shows command names and other fixed elements of syntax in upper case, but case doesn't matter in most parts of command syntax. In the tutorial, we usually show them in lowercase because they are easier to type that way.
- The / character is important. It marks the start of the list of variables which you wish to define.
- The text forename is the name of the first variable, and (A12) says that the variable forename is a string variable and that its maximum length is 12 bytes. The second variable's name is specified by the text height. Since no format is given, this variable has the default format. Normally the default format expects numeric data, which should be entered in the locale of the operating system. Thus, the example is correct for English locales and other locales which use a period (.) as the decimal separator. However, if you are using a system with a locale which uses the comma (,) as the decimal separator, then in the subsequent lines you should substitute . with ,. Alternatively, you could explicitly tell PSPP that the height variable is to be read using a period as its decimal separator by appending the text (DOT8.3) after the word height. For more information on data formats, see Input and Output Formats.
- PSPP displays the prompt PSPP> when it's expecting a command. When it's expecting data, the prompt changes to data> so that you know to enter data and not a command.
- At the end of every command there is a terminating . which tells PSPP that the end of a command has been encountered. You should not enter . when data is expected (i.e. when the data> prompt is current) since it is appropriate only for terminating commands. You can also terminate a command with a blank line.
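Formats can also be given explicitly. The following variation of the earlier example (a sketch using the same hypothetical data) declares height with an F8.3 format, which reads numbers with up to three decimal places:

```
PSPP> data list list /forename (A12) height (F8.3).
PSPP> begin data.
data> Ahmed 188
data> Bertram 167
data> end data
PSPP>
```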
Listing the data
Once the data has been entered, you could type
PSPP> list /format=numbered.
to list the data. The optional text /format=numbered requests the
case numbers to be shown along with the data. It should show the
following output:
Data List
┌───────────┬─────────┬──────┐
│Case Number│ forename│height│
├───────────┼─────────┼──────┤
│1 │Ahmed │188.00│
│2 │Bertram │167.00│
│3 │Catherine│134.23│
│4 │David │109.10│
└───────────┴─────────┴──────┘
Note that the numeric variable height is displayed to 2 decimal
places, because the format for that variable is F8.2. For a
complete description of the LIST command, see
LIST.
Reading data from a text file
The previous example showed how to define a set of variables and to
manually enter the data for those variables. Manual entry of data is
tedious work, and often a file containing the data will have been
prepared previously. Let us assume that you have a file called
mydata.dat containing the ASCII-encoded data:
Ahmed 188.00
Bertram 167.00
Catherine 134.23
David 109.10
.
.
.
Zachariah 113.02
You can tell the DATA LIST command to read the data directly
from this file instead of by manual entry, with a command like:
PSPP> data list file='mydata.dat' list /forename (A12) height.
Notice, however, that it is still necessary to specify the names of
the variables and their formats, since this information is not
contained in the file. It is also possible to specify the file's
character encoding and other parameters. For full details refer to DATA LIST.
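For example, if mydata.dat were saved in a legacy code page, the encoding could be named explicitly (a sketch; 'iso-8859-1' stands in for whatever encoding the file actually uses):

```
PSPP> data list file='mydata.dat' encoding='iso-8859-1' list /forename (A12) height.
```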
Reading data from a pre-prepared PSPP file
When working with other PSPP users, or users of other software which
uses the PSPP data format, you may be given the data in a pre-prepared
PSPP file. Such files contain not only the data, but the variable
definitions, along with their formats, labels and other meta-data.
Conventionally, these files (sometimes called "system" files) have the
suffix .sav, but that is not mandatory. The following syntax loads a
file called my-file.sav.
PSPP> get file='my-file.sav'.
You will encounter several instances of this in future examples.
Saving data to a PSPP file
If you want to save your data, along with the variable definitions so
that you or other PSPP users can use it later, you can do this with the
SAVE command.
The following syntax will save the existing data and variables to a
file called my-new-file.sav.
PSPP> save outfile='my-new-file.sav'.
If my-new-file.sav already exists, then it will be overwritten.
Otherwise it will be created.
Reading data from other sources
Sometimes it's useful to be able to read data from comma separated
text, from spreadsheets, databases or other sources. In these
instances you should use the GET DATA
command.
Exiting PSPP
Use the FINISH command to exit PSPP:
PSPP> finish.
Data Screening and Transformation
Once data has been entered, it is often desirable, or even necessary, to transform it in some way before performing analysis upon it. At the very least, it's good practice to check for errors.
Identifying incorrect data
Data from real sources is rarely error-free. PSPP has a number of procedures which can be used to help identify data which might be incorrect.
The DESCRIPTIVES command is used
to generate simple linear statistics for a dataset. It is also useful
for identifying potential problems in the data. The example file
physiology.sav contains a number of physiological measurements of a
sample of healthy adults selected at random. However, the data entry
clerk made a number of mistakes when entering the data. The following
example illustrates the use of DESCRIPTIVES to screen this data and
identify the erroneous values:
PSPP> get file='/usr/local/share/pspp/examples/physiology.sav'.
PSPP> descriptives sex, weight, height.
For this example, PSPP produces the following output:
Descriptive Statistics
┌─────────────────────┬──┬───────┬───────┬───────┬───────┐
│ │ N│ Mean │Std Dev│Minimum│Maximum│
├─────────────────────┼──┼───────┼───────┼───────┼───────┤
│Sex of subject │40│ .45│ .50│Male │Female │
│Weight in kilograms │40│ 72.12│ 26.70│ ─55.6│ 92.1│
│Height in millimeters│40│1677.12│ 262.87│ 179│ 1903│
│Valid N (listwise) │40│ │ │ │ │
│Missing N (listwise) │ 0│ │ │ │ │
└─────────────────────┴──┴───────┴───────┴───────┴───────┘
The most interesting column in the output is the minimum value. The weight variable has a minimum value of less than zero, which is clearly erroneous. Similarly, the height variable's minimum value seems to be very low. In fact, it is more than 5 standard deviations from the mean, and is a seemingly bizarre height for an adult person.
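The kind of screening DESCRIPTIVES supports can be sketched in Python. This is an illustration only (the weights below are invented, and the two-standard-deviation threshold is an arbitrary choice), but it shows why the negative weight stands out:

```python
def flag_outliers(values, threshold):
    # Flag values more than `threshold` standard deviations from the
    # mean -- a rough analogue of eyeballing DESCRIPTIVES output.
    n = len(values)
    mean = sum(values) / n
    sd = (sum((v - mean) ** 2 for v in values) / (n - 1)) ** 0.5
    return [v for v in values if abs(v - mean) > threshold * sd]

# Invented weights; the negative entry mimics the data-entry error.
weights = [72.1, 68.4, 80.2, 91.7, -55.6, 75.0, 66.3]
print(flag_outliers(weights, 2.0))  # → [-55.6]
```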
We can look deeper into these discrepancies by issuing an additional
EXAMINE command:
PSPP> examine height, weight /statistics=extreme(3).
This command produces the following additional output (in part):
Extreme Values
┌───────────────────────────────┬───────────┬─────┐
│ │Case Number│Value│
├───────────────────────────────┼───────────┼─────┤
│Height in millimeters Highest 1│ 14│ 1903│
│ 2│ 15│ 1884│
│ 3│ 12│ 1802│
│ ──────────┼───────────┼─────┤
│ Lowest 1│ 30│ 179│
│ 2│ 31│ 1598│
│ 3│ 28│ 1601│
├───────────────────────────────┼───────────┼─────┤
│Weight in kilograms Highest 1│ 13│ 92.1│
│ 2│ 5│ 92.1│
│ 3│ 17│ 91.7│
│ ──────────┼───────────┼─────┤
│ Lowest 1│ 38│─55.6│
│ 2│ 39│ 54.5│
│ 3│ 33│ 55.4│
└───────────────────────────────┴───────────┴─────┘
From this new output, you can see that the lowest value of height is 179
(which we suspect to be erroneous), but the second lowest is 1598 which
we know from DESCRIPTIVES is within 1 standard deviation from the
mean. Similarly, the lowest value of weight is negative, but its second
lowest value is plausible. This suggests that the two extreme values
are outliers and probably represent data entry errors.
The output also identifies the case numbers for each extreme value, so we can see that cases 30 and 38 are the ones with the erroneous values.
Dealing with suspicious data
If possible, suspect data should be checked and re-measured. However,
this may not always be feasible, in which case the researcher may
decide to disregard these values. PSPP has a feature for missing
values, whereby data can
assume the special value 'SYSMIS', and will be disregarded in future
analysis. You can set the two suspect values to the SYSMIS value
using the RECODE command.
PSPP> recode height (179 = SYSMIS).
PSPP> recode weight (LOWEST THRU 0 = SYSMIS).
The first command says that for any observation which has a height value of 179, that value should be changed to the SYSMIS value. The second command says that any weight values of zero or less should be changed to SYSMIS. From now on, they will be ignored in analysis.
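As a rough Python analogue (NaN standing in for SYSMIS; the weights are invented), the effect of the recoding is that subsequent calculations simply skip the missing observations:

```python
import math

def recode_to_missing(values, is_suspect):
    # Replace suspect observations with NaN, a rough analogue of
    # PSPP's system-missing value.
    return [math.nan if is_suspect(v) else v for v in values]

def mean_ignoring_missing(values):
    # Like PSPP procedures, skip missing observations in the analysis.
    valid = [v for v in values if not math.isnan(v)]
    return sum(valid) / len(valid)

weights = [72.1, -55.6, 80.2]                       # invented data
cleaned = recode_to_missing(weights, lambda v: v <= 0)
print(mean_ignoring_missing(cleaned))  # mean of the two valid weights
```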
If you now re-run the DESCRIPTIVES or EXAMINE commands from the
previous section, you will see a data summary with more plausible
parameters. You will also notice that the data summaries indicate the
two missing values.
Inverting negatively coded variables
Data entry errors are not the only reason for wanting to recode data.
The sample file hotel.sav comprises data gathered from a customer
satisfaction survey of clients at a particular hotel. The following
commands load the file and display its variables and associated data:
PSPP> get file='/usr/local/share/pspp/examples/hotel.sav'.
PSPP> display dictionary.
It yields the following output:
Variables
┌────┬────────┬─────────────┬────────────┬─────┬─────┬─────────┬──────┬───────┐
│ │ │ │ Measurement│ │ │ │ Print│ Write │
│Name│Position│ Label │ Level │ Role│Width│Alignment│Format│ Format│
├────┼────────┼─────────────┼────────────┼─────┼─────┼─────────┼──────┼───────┤
│v1 │ 1│I am │Ordinal │Input│ 8│Right │F8.0 │F8.0 │
│ │ │satisfied │ │ │ │ │ │ │
│ │ │with the │ │ │ │ │ │ │
│ │ │level of │ │ │ │ │ │ │
│ │ │service │ │ │ │ │ │ │
│v2 │ 2│The value for│Ordinal │Input│ 8│Right │F8.0 │F8.0 │
│ │ │money was │ │ │ │ │ │ │
│ │ │good │ │ │ │ │ │ │
│v3 │ 3│The staff │Ordinal │Input│ 8│Right │F8.0 │F8.0 │
│ │ │were slow in │ │ │ │ │ │ │
│ │ │responding │ │ │ │ │ │ │
│v4 │ 4│My concerns │Ordinal │Input│ 8│Right │F8.0 │F8.0 │
│ │ │were dealt │ │ │ │ │ │ │
│ │ │with in an │ │ │ │ │ │ │
│ │ │efficient │ │ │ │ │ │ │
│ │ │manner │ │ │ │ │ │ │
│v5 │ 5│There was too│Ordinal │Input│ 8│Right │F8.0 │F8.0 │
│ │ │much noise in│ │ │ │ │ │ │
│ │ │the rooms │ │ │ │ │ │ │
└────┴────────┴─────────────┴────────────┴─────┴─────┴─────────┴──────┴───────┘
Value Labels
┌────────────────────────────────────────────────────┬─────────────────┐
│Variable Value │ Label │
├────────────────────────────────────────────────────┼─────────────────┤
│I am satisfied with the level of service 1│Strongly Disagree│
│ 2│Disagree │
│ 3│No Opinion │
│ 4│Agree │
│ 5│Strongly Agree │
├────────────────────────────────────────────────────┼─────────────────┤
│The value for money was good 1│Strongly Disagree│
│ 2│Disagree │
│ 3│No Opinion │
│ 4│Agree │
│ 5│Strongly Agree │
├────────────────────────────────────────────────────┼─────────────────┤
│The staff were slow in responding 1│Strongly Disagree│
│ 2│Disagree │
│ 3│No Opinion │
│ 4│Agree │
│ 5│Strongly Agree │
├────────────────────────────────────────────────────┼─────────────────┤
│My concerns were dealt with in an efficient manner 1│Strongly Disagree│
│ 2│Disagree │
│ 3│No Opinion │
│ 4│Agree │
│ 5│Strongly Agree │
├────────────────────────────────────────────────────┼─────────────────┤
│There was too much noise in the rooms 1│Strongly Disagree│
│ 2│Disagree │
│ 3│No Opinion │
│ 4│Agree │
│ 5│Strongly Agree │
└────────────────────────────────────────────────────┴─────────────────┘
The output shows that all of the variables v1 through v5 are measured
on a 5-point Likert scale, with 1 meaning "Strongly disagree" and 5
meaning "Strongly agree". However, some of the questions are positively
worded (v1, v2, v4) and others are negatively worded (v3, v5). To
perform meaningful analysis, we need to recode the variables so that
they all measure in the same direction. We could use the RECODE
command, with syntax such as:
recode v3 (1 = 5) (2 = 4) (4 = 2) (5 = 1).
However, an easier and more elegant way uses the
COMPUTE command. Since the variables
are Likert variables in the range (1 ... 5), subtracting their value
from 6 has the effect of inverting them:
compute VAR = 6 - VAR.
The following section uses this technique to recode the
variables v3 and v5. After applying COMPUTE for both variables, all
subsequent commands will use the inverted values.
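The arithmetic is easy to verify: on a 1-to-5 scale, subtracting from 6 swaps 1 with 5 and 2 with 4 while leaving 3 unchanged. A small Python sketch:

```python
def invert_likert(v, points=5):
    # On a scale of 1..points, (points + 1) - v reverses the coding
    # direction while keeping the same range.
    return (points + 1) - v

print([invert_likert(v) for v in [1, 2, 3, 4, 5]])  # → [5, 4, 3, 2, 1]
```

Applying the inversion twice returns the original coding, which is why re-running the COMPUTE command by accident is harmless in form but must be avoided in practice.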
Testing data consistency
A sensible check to perform on survey data is the calculation of
reliability. This gives the statistician some confidence that the
questionnaires have been completed thoughtfully. If you examine the
labels of variables v1, v3 and v4, you will notice that they ask very
similar questions. One would therefore expect the values of these
variables (after recoding) to closely follow one another, and we can
test that with the RELIABILITY
command. The following example shows a PSPP session where the user
recodes negatively scaled variables and then requests reliability
statistics for v1, v3, and v4.
PSPP> get file='/usr/local/share/pspp/examples/hotel.sav'.
PSPP> compute v3 = 6 - v3.
PSPP> compute v5 = 6 - v5.
PSPP> reliability v1, v3, v4.
This yields the following output:
Scale: ANY
Case Processing Summary
┌────────┬──┬───────┐
│Cases │ N│Percent│
├────────┼──┼───────┤
│Valid │17│ 100.0%│
│Excluded│ 0│ .0%│
│Total │17│ 100.0%│
└────────┴──┴───────┘
Reliability Statistics
┌────────────────┬──────────┐
│Cronbach's Alpha│N of Items│
├────────────────┼──────────┤
│ .81│ 3│
└────────────────┴──────────┘
As a rule of thumb, many statisticians consider a value of Cronbach's Alpha of 0.7 or higher to indicate reliable data.
Here, the value is 0.81, which suggests a high degree of reliability among variables v1, v3 and v4, so the data and the recoding that we performed are vindicated.
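Cronbach's Alpha can also be computed directly from the item scores, using the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). The Python sketch below uses invented scores, not the hotel.sav data:

```python
def cronbach_alpha(items):
    # items: one list of scores per questionnaire item, all items
    # covering the same respondents in the same order.
    k = len(items)
    n = len(items[0])

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in items) for i in range(n)]
    item_vars = sum(var(item) for item in items)
    return k / (k - 1) * (1 - item_vars / var(totals))

# Invented scores for three closely tracking items.
v1 = [4, 5, 3, 4, 2, 5]
v3 = [4, 4, 3, 5, 2, 5]
v4 = [5, 5, 3, 4, 1, 4]
print(round(cronbach_alpha([v1, v3, v4]), 2))  # → 0.93
```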
Testing for normality
Many statistical tests rely upon certain properties of the data. One
common property, upon which many linear tests depend, is that of
normality -- the data must have been drawn from a normal distribution.
It is necessary then to ensure normality before deciding upon the test
procedure to use. One way to do this uses the EXAMINE command.
In the following example, a researcher was examining the failure
rates of equipment produced by an engineering company. The file
repairs.sav contains the mean time between failures (mtbf) of some
items of equipment subject to the study. Before performing linear
analysis on the data, the researcher wanted to ascertain that the data
is normally distributed.
PSPP> get file='/usr/local/share/pspp/examples/repairs.sav'.
PSPP> examine mtbf /statistics=descriptives.
This produces the following output:
Descriptives
┌──────────────────────────────────────────────────────────┬─────────┬────────┐
│ │ │ Std. │
│ │Statistic│ Error │
├──────────────────────────────────────────────────────────┼─────────┼────────┤
│Mean time between Mean │ 8.78│ 1.10│
│failures (months) ──────────────────────────────────┼─────────┼────────┤
│ 95% Confidence Interval Lower │ 6.53│ │
│ for Mean Bound │ │ │
│ Upper │ 11.04│ │
│ Bound │ │ │
│ ──────────────────────────────────┼─────────┼────────┤
│ 5% Trimmed Mean │ 8.20│ │
│ ──────────────────────────────────┼─────────┼────────┤
│ Median │ 8.29│ │
│ ──────────────────────────────────┼─────────┼────────┤
│ Variance │ 36.34│ │
│ ──────────────────────────────────┼─────────┼────────┤
│ Std. Deviation │ 6.03│ │
│ ──────────────────────────────────┼─────────┼────────┤
│ Minimum │ 1.63│ │
│ ──────────────────────────────────┼─────────┼────────┤
│ Maximum │ 26.47│ │
│ ──────────────────────────────────┼─────────┼────────┤
│ Range │ 24.84│ │
│ ──────────────────────────────────┼─────────┼────────┤
│ Interquartile Range │ 6.03│ │
│ ──────────────────────────────────┼─────────┼────────┤
│ Skewness │ 1.65│ .43│
│ ──────────────────────────────────┼─────────┼────────┤
│ Kurtosis │ 3.41│ .83│
└──────────────────────────────────────────────────────────┴─────────┴────────┘
A normal distribution has a skewness and kurtosis of zero. The skewness of mtbf in the output above makes it clear that the mtbf figures have a lot of positive skew and are therefore not drawn from a normally distributed variable. Positive skew can often be compensated for by applying a logarithmic transformation, as in the following continuation of the example:
PSPP> compute mtbf_ln = ln (mtbf).
PSPP> examine mtbf_ln /statistics=descriptives.
which produces the following additional output:
Descriptives
┌────────────────────────────────────────────────────┬─────────┬──────────┐
│ │Statistic│Std. Error│
├────────────────────────────────────────────────────┼─────────┼──────────┤
│mtbf_ln Mean │ 1.95│ .13│
│ ─────────────────────────────────────────────┼─────────┼──────────┤
│ 95% Confidence Interval for Mean Lower Bound│ 1.69│ │
│ Upper Bound│ 2.22│ │
│ ─────────────────────────────────────────────┼─────────┼──────────┤
│ 5% Trimmed Mean │ 1.96│ │
│ ─────────────────────────────────────────────┼─────────┼──────────┤
│ Median │ 2.11│ │
│ ─────────────────────────────────────────────┼─────────┼──────────┤
│ Variance │ .49│ │
│ ─────────────────────────────────────────────┼─────────┼──────────┤
│ Std. Deviation │ .70│ │
│ ─────────────────────────────────────────────┼─────────┼──────────┤
│ Minimum │ .49│ │
│ ─────────────────────────────────────────────┼─────────┼──────────┤
│ Maximum │ 3.28│ │
│ ─────────────────────────────────────────────┼─────────┼──────────┤
│ Range │ 2.79│ │
│ ─────────────────────────────────────────────┼─────────┼──────────┤
│ Interquartile Range │ .88│ │
│ ─────────────────────────────────────────────┼─────────┼──────────┤
│ Skewness │ ─.37│ .43│
│ ─────────────────────────────────────────────┼─────────┼──────────┤
│ Kurtosis │ .01│ .83│
└────────────────────────────────────────────────────┴─────────┴──────────┘
The COMPUTE command in the first line above performs the logarithmic
transformation: compute mtbf_ln = ln (mtbf). Rather than
redefining the existing variable, this use of COMPUTE defines a new
variable mtbf_ln which is the natural logarithm of mtbf. The final
command in this example calls EXAMINE on this new variable. The
results show that both the skewness and kurtosis for mtbf_ln are very
close to zero. This provides some confidence that the mtbf_ln
variable is normally distributed and thus safe for linear analysis.
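The effect of the transformation can be reproduced numerically. The sketch below uses the moment-based skewness estimate m3 / m2**1.5 on invented failure times (PSPP's EXAMINE uses an adjusted estimator, so its figures differ slightly), and shows the logarithm shrinking the positive skew:

```python
import math

def skewness(xs):
    # Moment-based skewness estimate: m3 / m2 ** 1.5.
    n = len(xs)
    m = sum(xs) / n
    m2 = sum((x - m) ** 2 for x in xs) / n
    m3 = sum((x - m) ** 3 for x in xs) / n
    return m3 / m2 ** 1.5

# Invented, strongly right-skewed failure times (months).
mtbf = [1.6, 2.2, 3.1, 4.0, 5.5, 6.1, 7.9, 9.4, 12.8, 26.5]
mtbf_ln = [math.log(x) for x in mtbf]
print(skewness(mtbf) > skewness(mtbf_ln))  # → True: the log reduces skew
```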
In the event that no suitable transformation can be found, then it
would be worth considering an appropriate non-parametric test instead
of a linear one. See NPAR TESTS,
for information about non-parametric tests.
Hypothesis Testing
One of the most fundamental purposes of statistical analysis is hypothesis testing. A researcher commonly needs to test hypotheses about a set of data. For example, she might want to test whether one set of data comes from the same distribution as another, or whether the mean of a dataset differs significantly from a particular value. This section presents just some of the possible tests that PSPP offers.
The researcher starts by making a "null hypothesis". Often this is a hypothesis which she suspects to be false. For example, if she suspects that A is greater than B she will state the null hypothesis as A = B.1
The "p-value" is a recurring concept in hypothesis testing. It is the highest acceptable probability that the evidence implying the null hypothesis is false could have been obtained when the null hypothesis is in fact true. Note that this is not the same as "the probability of making an error", nor is it the same as "the probability of rejecting a hypothesis when it is true".
Testing for differences of means
A common statistical test involves hypotheses about means. The T-TEST
command is used to find out whether or not two separate subsets have the
same mean.
A researcher suspected that the heights and core body temperature of
persons might be different depending upon their sex. To investigate
this, he posed two null hypotheses based on the data from
physiology.sav previously encountered:
-
The mean heights of males and females in the population are equal.
-
The mean body temperatures of males and females in the population are equal.
For the purposes of the investigation the researcher decided to use a p-value of 0.05.
In addition to the T-test, the T-TEST command also performs the
Levene test for equal variances. If the variances are equal, then a
more powerful form of the T-test can be used. However if it is unsafe
to assume equal variances, then an alternative calculation is necessary.
PSPP performs both calculations.
For the height variable, the output shows the significance of the Levene test to be 0.33 which means there is a 33% probability that the Levene test produces this outcome when the variances are equal. Had the significance been less than 0.05, then it would have been unsafe to assume that the variances were equal. However, because the value is higher than 0.05 the homogeneity of variances assumption is safe and the "Equal Variances" row (the more powerful test) can be used. Examining this row, the two tailed significance for the height t-test is less than 0.05, so it is safe to reject the null hypothesis and conclude that the mean heights of males and females are unequal.
For the temperature variable, the significance of the Levene test is 0.58 so again, it is safe to use the row for equal variances. The equal variances row indicates that the two tailed significance for temperature is 0.20. Since this is greater than 0.05 we cannot reject the null hypothesis, and must conclude that there is insufficient evidence to suggest that the body temperatures of male and female persons differ.
The syntax for this analysis is:
PSPP> get file='/usr/local/share/pspp/examples/physiology.sav'.
PSPP> recode height (179 = SYSMIS).
PSPP> t-test group=sex(0,1) /variables = height temperature.
PSPP produces the following output for this syntax:
Group Statistics
┌───────────────────────────────────────────┬──┬───────┬─────────────┬────────┐
│ │ │ │ Std. │ S.E. │
│ Group │ N│ Mean │ Deviation │ Mean │
├───────────────────────────────────────────┼──┼───────┼─────────────┼────────┤
│Height in millimeters Male │22│1796.49│ 49.71│ 10.60│
│ Female│17│1610.77│ 25.43│ 6.17│
├───────────────────────────────────────────┼──┼───────┼─────────────┼────────┤
│Internal body temperature in degrees Male │22│ 36.68│ 1.95│ .42│
│Celcius Female│18│ 37.43│ 1.61│ .38│
└───────────────────────────────────────────┴──┴───────┴─────────────┴────────┘
Independent Samples Test
┌─────────────────────┬──────────┬──────────────────────────────────────────
│ │ Levene's │
│ │ Test for │
│ │ Equality │
│ │ of │
│ │ Variances│ T─Test for Equality of Means
│ ├────┬─────┼─────┬─────┬───────┬──────────┬──────────┐
│ │ │ │ │ │ │ │ │
│ │ │ │ │ │ │ │ │
│ │ │ │ │ │ │ │ │
│ │ │ │ │ │ │ │ │
│ │ │ │ │ │ Sig. │ │ │
│ │ │ │ │ │ (2─ │ Mean │Std. Error│
│ │ F │ Sig.│ t │ df │tailed)│Difference│Difference│
├─────────────────────┼────┼─────┼─────┼─────┼───────┼──────────┼──────────┤
│Height in Equal │ .97│ .331│14.02│37.00│ .000│ 185.72│ 13.24│
│millimeters variances│ │ │ │ │ │ │ │
│ assumed │ │ │ │ │ │ │ │
│ Equal │ │ │15.15│32.71│ .000│ 185.72│ 12.26│
│ variances│ │ │ │ │ │ │ │
│ not │ │ │ │ │ │ │ │
│ assumed │ │ │ │ │ │ │ │
├─────────────────────┼────┼─────┼─────┼─────┼───────┼──────────┼──────────┤
│Internal Equal │ .31│ .581│─1.31│38.00│ .198│ ─.75│ .57│
│body variances│ │ │ │ │ │ │ │
│temperature assumed │ │ │ │ │ │ │ │
│in degrees Equal │ │ │─1.33│37.99│ .190│ ─.75│ .56│
│Celcius variances│ │ │ │ │ │ │ │
│ not │ │ │ │ │ │ │ │
│ assumed │ │ │ │ │ │ │ │
└─────────────────────┴────┴─────┴─────┴─────┴───────┴──────────┴──────────┘
┌─────────────────────┬─────────────┐
│ │ │
│ │ │
│ │ │
│ │ │
│ │ │
│ ├─────────────┤
│ │ 95% │
│ │ Confidence │
│ │ Interval of │
│ │ the │
│ │ Difference │
│ ├──────┬──────┤
│ │ Lower│ Upper│
├─────────────────────┼──────┼──────┤
│Height in Equal │158.88│212.55│
│millimeters variances│ │ │
│ assumed │ │ │
│ Equal │160.76│210.67│
│ variances│ │ │
│ not │ │ │
│ assumed │ │ │
├─────────────────────┼──────┼──────┤
│Internal Equal │ ─1.91│ .41│
│body variances│ │ │
│temperature assumed │ │ │
│in degrees Equal │ ─1.89│ .39│
│Celcius variances│ │ │
│ not │ │ │
│ assumed │ │ │
└─────────────────────┴──────┴──────┘
The T-TEST command tests for differences of means. Here, the height
variable's two tailed significance is less than 0.05, so the null
hypothesis can be rejected. Thus, the evidence suggests there is a
difference between the heights of male and female persons. However
the significance of the test for the temperature variable is greater
than 0.05 so the null hypothesis cannot be rejected, and there is
insufficient evidence to suggest a difference in body temperature.
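The statistic in the "equal variances not assumed" rows is Welch's t, which is simple to reproduce: t = (mean1 - mean2) / sqrt(s1^2/n1 + s2^2/n2). The Python sketch below uses invented height samples, not the physiology data:

```python
def welch_t(a, b):
    # Welch's t statistic: the form of the t-test used when equal
    # variances are not assumed.
    def mean(xs):
        return sum(xs) / len(xs)

    def var(xs):  # sample variance
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    return (mean(a) - mean(b)) / (var(a) / len(a) + var(b) / len(b)) ** 0.5

# Invented height samples (millimeters), not the physiology.sav data.
males = [1790.0, 1805.0, 1788.0, 1812.0]
females = [1612.0, 1598.0, 1625.0, 1603.0]
print(round(welch_t(males, females), 1))
```

A large absolute t, as here, corresponds to a small two tailed significance and hence to rejecting the null hypothesis of equal means.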
Linear Regression
Linear regression is a technique used to investigate if and how a variable is linearly related to others. If a variable is found to be linearly related, then this can be used to predict future values of that variable.
In the following example, the service department of the company wanted
to be able to predict the time to repair equipment, in order to
improve the accuracy of their quotations. It was suggested that the
time to repair might be related to the time between failures and the
duty cycle of the equipment. The p-value of 0.1 was chosen for this
investigation. In order to investigate this hypothesis, the
REGRESSION command was used. This
command not only tests if the variables are related, but also
identifies the potential linear relationship.
A first attempt includes duty_cycle:
PSPP> get file='/usr/local/share/pspp/examples/repairs.sav'.
PSPP> regression /variables = mtbf duty_cycle /dependent = mttr.
This attempt yields the following output (in part):
Coefficients (Mean time to repair (hours) )
┌────────────────────────┬─────────────────────┬───────────────────┬─────┬────┐
│ │ Unstandardized │ Standardized │ │ │
│ │ Coefficients │ Coefficients │ │ │
│ ├─────────┬───────────┼───────────────────┤ │ │
│ │ B │ Std. Error│ Beta │ t │Sig.│
├────────────────────────┼─────────┼───────────┼───────────────────┼─────┼────┤
│(Constant) │ 10.59│ 3.11│ .00│ 3.40│.002│
│Mean time between │ 3.02│ .20│ .95│14.88│.000│
│failures (months) │ │ │ │ │ │
│Ratio of working to non─│ ─1.12│ 3.69│ ─.02│ ─.30│.763│
│working time │ │ │ │ │ │
└────────────────────────┴─────────┴───────────┴───────────────────┴─────┴────┘
The coefficients in the above table suggest that the formula
\(\textrm{MTTR} = 10.59 + 3.02 \times \textrm{MTBF} - 1.12 \times
\textrm{DUTY_CYCLE}\) can be used to predict the time to repair.
However, the significance value for the DUTY_CYCLE coefficient is
very high, which would make this an unsafe predictor. For this
reason, the test was repeated, but omitting the duty_cycle variable:
PSPP> regression /variables = mtbf /dependent = mttr.
This second try produces the following output (in part):
Coefficients (Mean time to repair (hours) )
┌───────────────────────┬──────────────────────┬───────────────────┬─────┬────┐
│ │ Unstandardized │ Standardized │ │ │
│ │ Coefficients │ Coefficients │ │ │
│ ├─────────┬────────────┼───────────────────┤ │ │
│ │ B │ Std. Error │ Beta │ t │Sig.│
├───────────────────────┼─────────┼────────────┼───────────────────┼─────┼────┤
│(Constant) │ 9.90│ 2.10│ .00│ 4.71│.000│
│Mean time between │ 3.01│ .20│ .94│15.21│.000│
│failures (months) │ │ │ │ │ │
└───────────────────────┴─────────┴────────────┴───────────────────┴─────┴────┘
This time, the significance of all coefficients is less than 0.001, well below the chosen p-value of 0.1, suggesting that the formula \(\textrm{MTTR} = 9.90 + 3.01 \times \textrm{MTBF}\) is a reliable predictor of the time to repair.
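For a single predictor, the coefficients that REGRESSION estimates are the ordinary least squares solution: slope b = sum((x - mean x)(y - mean y)) / sum((x - mean x)^2) and intercept a = mean y - b * mean x. The Python sketch below checks this on fabricated points lying exactly on a line:

```python
def ols(x, y):
    # Ordinary least squares for one predictor:
    # slope b = cov(x, y) / var(x), intercept a = mean(y) - b * mean(x).
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

# Fabricated points lying exactly on mttr = 9.9 + 3.0 * mtbf.
mtbf = [2.0, 5.0, 8.0, 11.0, 14.0]
mttr = [9.9 + 3.0 * v for v in mtbf]
a, b = ols(mtbf, mttr)
print(round(a, 2), round(b, 2))  # → 9.9 3.0
```

Because the fabricated data lie exactly on a line, the fit recovers the generating intercept and slope; real data, like the repairs.sav sample, scatter around the fitted line instead.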
-
This example assumes that it is already proven that B is not greater than A. ↩
This chapter discusses elements common to many PSPP commands. Later chapters describe individual commands in detail.
Tokens
PSPP divides most syntax file lines into series of short chunks called "tokens". Tokens are then grouped to form commands, each of which tells PSPP to take some action—read in data, write out data, perform a statistical procedure, etc. Each type of token is described below.
Identifiers
Identifiers are names that typically specify variables, commands, or
subcommands. The first character in an identifier must be a letter,
#, or @. The remaining characters in the identifier must be
letters, digits, or one of the following special characters:
. _ $ # @
Identifiers may be any length, but only the first 64 bytes are
significant. Identifiers are not case-sensitive: foobar,
Foobar, FooBar, FOOBAR, and FoObaR are different
representations of the same identifier.
Some identifiers are reserved. Reserved identifiers may not be used in any context besides those explicitly described in this manual. The reserved identifiers are:
ALL AND BY EQ GE GT LE LT NE NOT OR TO WITH
Keywords
Keywords are a subclass of identifiers that form a fixed part of
command syntax. For example, command and subcommand names are
keywords. Keywords may be abbreviated to their first 3 characters
if this abbreviation is unambiguous. (Unique abbreviations of 3 or
more characters are also accepted: FRE, FREQ, and FREQUENCIES
are equivalent when the last is a keyword.)
Reserved identifiers are always used as keywords. Other identifiers may be used both as keywords and as user-defined identifiers, such as variable names.
Numbers
Numbers are expressed in decimal. A decimal point is optional.
Numbers may be expressed in scientific notation by adding e and a
base-10 exponent, so that 1.234e3 has the value 1234. Here are
some more examples of valid numbers:
-5 3.14159265359 1e100 -.707 8945.
Negative numbers are expressed with a - prefix. However, in
situations where a literal - token is expected, what appears to
be a negative number is treated as - followed by a positive
number.
No white space is allowed within a number token, except for
horizontal white space between - and the rest of the number.
The last example above, 8945., is interpreted as two tokens, 8945
and ., if it is the last token on a line (see Forming
Commands).
Strings
Strings are literal sequences of characters enclosed in pairs of
single quotes (') or double quotes ("). To include the
character used for quoting in the string, double it, e.g. 'it''s an apostrophe'. White space and case of letters are significant
inside strings.
Strings can be concatenated using +, so that "a" + 'b' + 'c' is
equivalent to 'abc'. So that a long string may be broken across
lines, a line break may precede or follow, or both precede and
follow, the +. (However, an entirely blank line preceding or
following the + is interpreted as ending the current command.)
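The quote-doubling rule can be illustrated with a small Python helper (the helper is hypothetical, not part of PSPP):

```python
def unquote(token):
    # Strip the enclosing quotes and collapse doubled quote characters,
    # as in 'it''s an apostrophe'.  Hypothetical helper, not a PSPP API.
    q = token[0]
    assert q in "'\"" and token[-1] == q
    return token[1:-1].replace(q + q, q)

print(unquote("'it''s an apostrophe'"))  # → it's an apostrophe
print(unquote('"a"') + unquote("'b'") + unquote("'c'"))  # → abc
```

The second line mirrors string concatenation with +: each quoted segment is decoded separately and the results are joined.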
Strings may also be expressed as hexadecimal character values by
prefixing the initial quote character by x or X. Regardless of
the syntax file or active dataset's encoding, the hexadecimal
digits in the string are interpreted as Unicode characters in UTF-8
encoding.
Individual Unicode code points may also be expressed by specifying the hexadecimal code point number in single or double quotes preceded by
u or U. For example, Unicode code point U+1D11E, the musical G clef character, could be expressed as U'1D11E'. Invalid Unicode code points (above U+10FFFF or between U+D800 and U+DFFF) are not allowed.
When strings are concatenated with +, each segment's prefix is
considered individually. For example, 'The G clef symbol is:' + u"1d11e" + "." inserts a G clef symbol in the middle of an
otherwise plain text string.
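The validity rule for these escapes can be checked in Python, whose chr function enforces the same U+10FFFF ceiling (the helper name is an invention for illustration):

```python
def decode_u_escape(hex_digits):
    # Mirror the U'...' rule: the code point must be at most U+10FFFF
    # and must not fall in the surrogate range U+D800..U+DFFF.
    cp = int(hex_digits, 16)
    if cp > 0x10FFFF or 0xD800 <= cp <= 0xDFFF:
        raise ValueError("invalid Unicode code point")
    return chr(cp)

print(decode_u_escape("1D11E") == "\U0001D11E")  # → True: the G clef
```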
Punctuators and Operators
These tokens are the punctuators and operators:
, / = ( ) + - * / ** < <= <> > >= ~= & | .
Most of these appear within the syntax of commands, but the period
(.) punctuator is used only at the end of a command. It is a
punctuator only as the last character on a line (except white
space). When it is the last non-space character on a line, a
period is not treated as part of another token, even if it would
otherwise be part of, e.g., an identifier or a floating-point
number.
Forming Commands
Most PSPP commands share a common structure. A command begins with a
command name, such as FREQUENCIES, DATA LIST, or N OF CASES. The
command name may be abbreviated to its first word, and each word in the
command name may be abbreviated to its first three or more characters,
where these abbreviations are unambiguous.
The command name may be followed by one or more "subcommands". Each
subcommand begins with a subcommand name, which may be abbreviated to
its first three letters. Some subcommands accept a series of one or
more specifications, which follow the subcommand name, optionally
separated from it by an equals sign (=). Specifications may be
separated from each other by commas or spaces. Each subcommand must
be separated from the next (if any) by a forward slash (/).
There are multiple ways to mark the end of a command. The most common
way is to end the last line of the command with a period (.) as
described in the previous section. A blank line, or one that consists
only of white space or comments, also ends a command.
Syntax Variants
There are three variants of command syntax, which vary only in how they detect the end of one command and the start of the next.
In "interactive mode", which is the default for syntax typed at a command prompt, a period as the last non-blank character on a line ends a command. A blank line also ends a command.
In "batch mode", an end-of-line period or a blank line also ends a command. Additionally, it treats any line that has a non-blank character in the leftmost column as beginning a new command. Thus, in batch mode the second and subsequent lines in a command must be indented.
Regardless of the syntax mode, a plus sign, minus sign, or period in the leftmost column of a line is ignored and causes that line to begin a new command. This is most useful in batch mode, in which the first line of a new command could not otherwise be indented, but it is accepted regardless of syntax mode.
The default mode for reading commands from a file is "auto mode". It is the same as batch mode, except that a line with a non-blank in the leftmost column only starts a new command if that line begins with the name of a PSPP command. This correctly interprets most valid PSPP syntax files regardless of the syntax mode for which they are intended.
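A much-simplified model of how batch and auto modes classify lines can be written in Python (the command list is a tiny made-up sample, and real PSPP also considers comments, abbreviations, and the +/-/. first-column rule):

```python
# A tiny, made-up sample of command names; real PSPP knows many more.
COMMAND_NAMES = {"DATA", "GET", "LIST", "COMPUTE", "FREQUENCIES"}

def starts_new_command(line, mode):
    # Simplified: batch mode treats any line with a non-blank first
    # column as a new command; auto mode additionally requires the
    # first word to look like a command name.
    if not line or line[0] in " \t":
        return False
    if mode == "batch":
        return True
    if mode == "auto":
        return line.split()[0].upper().rstrip(".") in COMMAND_NAMES
    return False  # interactive mode relies on periods and blank lines

print(starts_new_command("weight height.", "batch"))  # → True
print(starts_new_command("weight height.", "auto"))   # → False
```

The two printed lines show why auto mode copes with unindented continuation lines that batch mode would misread as new commands.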
The --interactive (or -i) or --batch (or -b) options set the
syntax mode for files listed on the PSPP command line.
Handling Missing Values
PSPP includes special support for unknown numeric data values. Missing observations are assigned a special value, called the "system-missing value". This "value" actually indicates the absence of a value; it means that the actual value is unknown. Procedures automatically exclude from analyses those observations or cases that have missing values. Details of missing value exclusion depend on the procedure and can often be controlled by the user; refer to descriptions of individual procedures for details.
The system-missing value exists only for numeric variables. String variables always have a defined value, even if it is only a string of spaces.
Variables, whether numeric or string, can have designated "user-missing values". Every user-missing value is an actual value for that variable. However, most of the time user-missing values are treated in the same way as the system-missing value.
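The distinction can be modelled in Python, with NaN standing in for the system-missing value and an explicit set of declared user-missing values (both the helper and the scores are invented for illustration):

```python
import math

def valid_values(values, user_missing=frozenset()):
    # Exclude both the system-missing stand-in (NaN) and any declared
    # user-missing values, as most procedures do by default.
    return [v for v in values
            if not (isinstance(v, float) and math.isnan(v))
            and v not in user_missing]

scores = [4, 9, math.nan, 2, 9]                # 9 declared as "no answer"
print(valid_values(scores, user_missing={9}))  # → [4, 2]
```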
Datasets
PSPP works with data organized into "datasets". A dataset consists of a set of "variables", which taken together are said to form a "dictionary", and one or more "cases", each of which has one value for each variable.
At any given time PSPP has exactly one distinguished dataset,
called the "active dataset". Most PSPP commands work only with the
active dataset. In addition to the active dataset, PSPP also supports
any number of additional open datasets. The DATASET
commands can choose a new active dataset
from among those that are open, as well as create and destroy
datasets.
Attributes of Variables
Each variable has a number of attributes, including:
-
Name
An identifier, up to 64 bytes long. Each variable must have a different name. User-defined variable names may not begin with $. A variable name can end with ., but it should not, because such an identifier will be misinterpreted when it is the final token on a line: FOO. is divided into two separate tokens, FOO and ., indicating end-of-command. A variable name can end with _, but it should not, because some PSPP procedures reserve such names for special purposes. Variable names are not case-sensitive. PSPP capitalizes variable names on output the same way they were capitalized at their point of definition in the input.
-
Type
Numeric or string. -
Width (string variables only)
String variables with a width of 8 characters or fewer are called "short string variables", and wider ones are called "long string variables". In a few contexts, long string variables are not allowed. -
Position
Variables in the dictionary are arranged in a specific order. DISPLAY can show this order. -
Initialization
Either reinitialized to 0 or spaces for each case, or left at its existing value. Use LEAVE to avoid reinitializing a variable. -
Missing values
Optionally, up to three values, or a range of values, or a specific value plus a range, can be specified as "user-missing values". There is also a "system-missing value" that is assigned to an observation when there is no other obvious value for that observation. Observations with missing values are automatically excluded from analyses. User-missing values are actual data values, while the system-missing value is not a value at all. See Handling Missing Values for more information on missing values. The MISSING VALUES command sets missing values. -
Variable label
A string that describes the variable. The VARIABLE LABELS command sets variable labels. -
Value label
Optionally, these associate each possible value of the variable with a string. The VALUE LABELS and ADD VALUE LABELS commands set value labels. -
Print format
Display width, format, and (for numeric variables) number of decimal places. This attribute does not affect how data are stored, just how they are displayed. See Input and Output Formats for details. The FORMATS and PRINT FORMATS commands set print formats. -
Write format
Similar to print format, but used by the WRITE command. The FORMATS and WRITE FORMATS commands set write formats. -
Measurement level
One of the following:-
Nominal: Each value of a nominal variable represents a distinct category. The possible categories are finite and often have value labels. The order of categories is not significant. Political parties, US states, and yes/no choices are nominal. Numeric and string variables can be nominal.
-
Ordinal: Ordinal variables also represent distinct categories, but their values are arranged according to some natural order. Likert scales, e.g. from strongly disagree to strongly agree, are ordinal. Data grouped into ranges, e.g. age groups or income groups, are ordinal. Both numeric and string variables can be ordinal. String values are ordered alphabetically, so letter grades from A to F will work as expected, but
poor, satisfactory, excellent will not. -
Scale: Scale variables are ones for which differences and ratios are meaningful. These are often values which have a natural unit attached, such as age in years, income in dollars, or distance in miles. Only numeric variables are scalar.
The
VARIABLE LEVEL command sets measurement levels. Variables created by
COMPUTE and similar transformations, obtained from external sources, etc., initially have an unknown measurement level. Any procedure that reads the data will then assign a default measurement level. PSPP can assign some defaults without reading the data: -
Nominal, if it's a string variable.
-
Nominal, if the variable has a WKDAY or MONTH print format.
-
Scale, if the variable has a DOLLAR, CCA through CCE, or time or date print format.
Otherwise, PSPP reads the data and decides based on its distribution:
-
Nominal, if all observations are missing.
-
Scale, if one or more valid observations are noninteger or negative.
-
Scale, if no valid observation is less than 10.
-
Scale, if the variable has 24 or more unique valid values. The value 24 is the default. Use
SET SCALEMIN to change the default.
Finally, if none of the above is true, PSPP assigns the variable a nominal measurement level.
-
-
Custom attributes
User-defined associations between names and values. The VARIABLE ATTRIBUTE command sets variable attributes. -
Role
The intended role of a variable for use in dialog boxes in graphical user interfaces. The VARIABLE ROLE command sets variable roles.
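Most of the attributes above are set with a dedicated command. A brief sketch, assuming a numeric variable X already exists in the dictionary:

VARIABLE LABELS x 'Age of respondent'.
VALUE LABELS x 1 'Child' 2 'Adult'.
MISSING VALUES x (99).
FORMATS x (F3.0).              /* Sets both print and write formats.
VARIABLE LEVEL x (ORDINAL).
VARIABLE ROLE /TARGET x.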
Variable Lists
To refer to a set of variables, list their names one after another.
Optionally, their names may be separated by commas. To include a
range of variables from the dictionary in the list, write the name of
the first and last variable in the range, separated by TO. For
instance, if the dictionary contains six variables with the names
ID, X1, X2, GOAL, MET, and NEXTGOAL, in that order, then
X2 TO MET would include variables X2, GOAL, and MET.
Commands that define variables, such as DATA LIST, give TO an
alternate meaning. With these commands, TO defines sequences of
variables whose names end in consecutive integers. The syntax is two
identifiers that begin with the same root and end with numbers,
separated by TO. The syntax X1 TO X5 defines 5 variables, named
X1, X2, X3, X4, and X5. The syntax ITEM0008 TO ITEM0013
defines 6 variables, named ITEM0008, ITEM0009, ITEM0010,
ITEM0011, ITEM0012, and ITEM0013. The syntaxes QUES001 TO QUES9 and QUES6 TO QUES3 are invalid.
After a set of variables has been defined with DATA LIST or
another command with this method, the same set can be referenced on
later commands using the same syntax.
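The two meanings of TO can be seen together in one sketch; the variable names are illustrative:

DATA LIST LIST /id x1 TO x5 goal.  /* Defines X1, X2, X3, X4, X5.
DESCRIPTIVES x2 TO goal.           /* Dictionary order: X2, X3, X4, X5, GOAL.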
Input and Output Formats
An "input format" describes how to interpret the contents of an input
field as a number or a string. It might specify that the field contains
an ordinary decimal number, a time or date, a number in binary or
hexadecimal notation, or one of several other notations. Input formats
are used by commands such as DATA LIST that read data or syntax files
into the PSPP active dataset.
Every input format corresponds to a default "output format" that specifies the formatting used when the value is output later. It is always possible to explicitly specify an output format that resembles the input format. Usually, this is the default, but in cases where the input format is unfriendly to human readability, such as binary or hexadecimal formats, the default output format is an easier-to-read decimal format.
Every variable has two output formats, called its "print format"
and "write format". Print formats are used in most output contexts;
only the WRITE command uses write
formats. Newly created variables have identical print and write
formats, and FORMATS, the most
commonly used command for changing formats, sets both of them to the
same value as well. This means that the distinction between print and
write formats is usually unimportant.
Input and output formats are specified to PSPP with a "format
specification" of the form TypeW or TypeW.D, where Type is one
of the format types described later, W is a field width measured in
columns, and D is an optional number of decimal places. If D is
omitted, a value of 0 is assumed. Some formats do not allow a nonzero
D to be specified.
Basic Numeric Formats
The basic numeric formats are used for input and output of real numbers in standard or scientific notation. The following table shows an example of how each format displays positive and negative numbers with the default decimal point setting:
| Format | 3141.59 | -3141.59 |
|---|---|---|
F8.2 | 3141.59 | -3141.59 |
COMMA9.2 | 3,141.59 | -3,141.59 |
DOT9.2 | 3.141,59 | -3.141,59 |
DOLLAR10.2 | $3,141.59 | -$3,141.59 |
PCT9.2 | 3141.59% | -3141.59% |
E8.1 | 3.1E+003 | -3.1E+003 |
On output, numbers in F format are expressed in standard decimal
notation with the requested number of decimal places. The other formats
output some variation on this style:
-
Numbers in
COMMA format are additionally grouped every three digits by inserting a grouping character. The grouping character is ordinarily a comma, but it can be changed to a period (with SET DECIMAL). -
DOT format is like COMMA format, but it interchanges the role of the decimal point and grouping characters. That is, the current grouping character is used as a decimal point and vice versa. -
DOLLAR format is like COMMA format, but it prefixes the number with $. -
PCT format is like F format, but adds % after the number. -
The
E format always produces output in scientific notation.
On input, the basic numeric formats accept positive and negative numbers in standard decimal notation or scientific notation. Leading and trailing spaces are allowed. An empty or all-spaces field, or one that contains only a single period, is treated as the system-missing value.
In scientific notation, the exponent may be introduced by a sign (+
or -), or by one of the letters e or d (in uppercase or
lowercase), or by a letter followed by a sign. A single space may
follow the letter or the sign or both.
On fixed-format DATA LIST and in
a few other contexts, decimals are implied when the field does not
contain a decimal point. In F6.5 format, for example, the field
314159 is taken as the value 3.14159 with implied decimals.
Decimals are never implied if an explicit decimal point is present or
if scientific notation is used.
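A sketch of implied decimals with fixed-format DATA LIST, using inline data:

DATA LIST /pi 1-6 (5).   /* Columns 1-6 with 5 implied decimal places, i.e. F6.5.
BEGIN DATA.
314159
END DATA.
LIST.                    /* PI is read as 3.14159.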
E and F formats accept the basic syntax already described. The other
formats allow some additional variations:
-
COMMA, DOLLAR, and DOT formats ignore grouping characters within the integer part of the input field. The identity of the grouping character depends on the format. -
DOLLAR format allows a dollar sign to precede the number. In a negative number, the dollar sign may precede or follow the minus sign. -
PCT format allows a percent sign to follow the number. All of the basic numeric formats have a maximum field width of 40 and accept no more than 16 decimal places, on both input and output. Some additional restrictions apply:
-
As input formats, the basic numeric formats allow no more decimal places than the field width. As output formats, the field width must be greater than the number of decimal places; that is, large enough to allow for a decimal point and the number of requested decimal places.
DOLLAR and PCT formats must allow an additional column for $ or %. -
The default output format for a given input format increases the field width enough to make room for optional input characters. If an input format calls for decimal places, the width is increased by 1 to make room for an implied decimal point.
COMMA, DOT, and DOLLAR formats also increase the output width to make room for grouping characters. DOLLAR and PCT further increase the output field width by 1 to make room for $ or %. The increased output width is capped at 40, the maximum field width. -
The
E format is exceptional. For output, E format has a minimum width of 7 plus the number of decimal places. The default output format for an E input format is an E format with at least 3 decimal places and thus a minimum width of 10.
More details of basic numeric output formatting are given below:
-
Output rounds to nearest, with ties rounded away from zero. Thus, 2.5 is output as
3 in F1.0 format, and -1.125 as -1.13 in F5.1 format. -
The system-missing value is output as a period in a field of spaces, placed in the decimal point's position, or in the rightmost column if no decimal places are requested. A period is used even if the decimal point character is a comma.
-
A number that does not fill its field is right-justified within the field.
-
A number that is too large for its field causes decimal places to be dropped to make room. If dropping decimals does not make enough room, scientific notation is used if the field is wide enough. If a number does not fit in the field, even in scientific notation, the overflow is indicated by filling the field with asterisks (*). -
COMMA, DOT, and DOLLAR formats insert grouping characters only if space is available for all of them. Grouping characters are never inserted when all decimal places must be dropped. Thus, 1234.56 in COMMA5.2 format is output as 1235 without a comma, even though there is room for one, because all decimal places were dropped. -
DOLLAR and PCT formats drop the $ or % only if the number would not fit at all without it. Scientific notation with $ or % is preferred to ordinary decimal notation without it. -
Except in scientific notation, a decimal point is included only when it is followed by a digit. If the integer part of the number being output is 0, and a decimal point is included, then PSPP ordinarily drops the zero before the decimal point. However, in
F, COMMA, or DOT formats, PSPP keeps the zero if SET LEADZERO is set to ON. In scientific notation, the number always includes a decimal point, even if it is not followed by a digit.
-
A negative number includes a minus sign only in the presence of a nonzero digit: -0.01 is output as
-.01 in F4.2 format but as .0 in F4.1 format. Thus, a "negative zero" never includes a minus sign. -
In negative numbers output in
DOLLAR format, the dollar sign follows the negative sign. Thus, -9.99 in DOLLAR6.2 format is output as -$9.99. -
In scientific notation, the exponent is output as
E followed by + or - and exactly three digits. Numbers with magnitude less than 10**-999 or larger than 10**999 are not supported by most computers, but if they are supported then their output is considered to overflow the field and they are output as asterisks. -
On most computers, no more than 15 decimal digits are significant in output, even if more are printed. In any case, output precision cannot be any higher than input precision; few data sets are accurate to 15 digits of precision. Unavoidable loss of precision in intermediate calculations may also reduce precision of output.
-
Special values such as infinities and "not a number" values are usually converted to the system-missing value before printing. In a few circumstances, these values are output directly. In fields of width 3 or greater, special values are output as however many characters fit from
+Infinity or -Infinity for infinities, from NaN for "not a number," or from Unknown for other values (if any are supported by the system). In fields under 3 columns wide, special values are output as asterisks.
Custom Currency Formats
The custom currency formats are closely related to the basic numeric formats, but they allow users to customize the output format. The SET command configures custom currency formats, using the syntax
SET CCX="STRING".
where X is A, B, C, D, or E, and STRING is no more than 16
characters long.
STRING must contain exactly three commas or exactly three periods
(but not both), except that a single quote character may be used to
"escape" a following comma, period, or single quote. If three commas
are used, commas are used for grouping in output, and a period is used
as the decimal point. Using periods instead reverses these roles.
The commas or periods divide STRING into four fields, called the
"negative prefix", "prefix", "suffix", and "negative suffix",
respectively. The prefix and suffix are added to output whenever
space is available. The negative prefix and negative suffix are
always added to a negative number when the output includes a nonzero
digit.
The following syntax shows how custom currency formats could be used to reproduce basic numeric formats:
SET CCA="-,,,". /* Same as COMMA.
SET CCB="-...". /* Same as DOT.
SET CCC="-,$,,". /* Same as DOLLAR.
SET CCD="-,,%,". /* Like PCT, but groups with commas.
Here are some more examples of custom currency formats. The final example shows how to use a single quote to escape a delimiter:
SET CCA=",EUR,,-". /* Euro.
SET CCB="(,USD ,,)". /* US dollar.
SET CCC="-.R$..". /* Brazilian real.
SET CCD="-,, NIS,". /* Israel shekel.
SET CCE="-.Rp'. ..". /* Indonesia Rupiah.
These formats would yield the following output:
| Format | 3145.59 | -3145.59 |
|---|---|---|
CCA12.2 | EUR3,145.59 | EUR3,145.59- |
CCB14.2 | USD 3,145.59 | (USD 3,145.59) |
CCC11.2 | R$3.145,59 | -R$3.145,59 |
CCD13.2 | 3,145.59 NIS | -3,145.59 NIS |
CCE10.0 | Rp. 3.146 | -Rp. 3.146 |
The default for all the custom currency formats is -,,,, equivalent
to COMMA format.
Legacy Numeric Formats
The N and Z numeric formats provide compatibility with legacy file
formats. They have much in common:
-
Output is rounded to the nearest representable value, with ties rounded away from zero.
-
Numbers too large to display are output as a field filled with asterisks (
*). -
The decimal point is always implicitly the specified number of digits from the right edge of the field, except that
Zformat input allows an explicit decimal point. -
Scientific notation may not be used.
-
The system-missing value is output as a period in a field of spaces. The period is placed just to the right of the implied decimal point in
Zformat, or at the right end inNformat or inZformat if no decimal places are requested. A period is used even if the decimal point character is a comma. -
Field width may range from 1 to 40. Decimal places may range from 0 up to the field width, to a maximum of 16.
-
When a legacy numeric format used for input is converted to an output format, it is changed into the equivalent
Fformat. The field width is increased by 1 if any decimal places are specified, to make room for a decimal point. ForZformat, the field width is increased by 1 more column, to make room for a negative sign. The output field width is capped at 40 columns.
N Format
The N format supports input and output of fields that contain only
digits. On input, leading or trailing spaces, a decimal point, or any
other non-digit character causes the field to be read as the
system-missing value. As a special exception, an N format used on
DATA LIST FREE or DATA LIST LIST is treated as the equivalent F
format.
On output, N pads the field on the left with zeros. Negative
numbers are output like the system-missing value.
Z Format
The Z format is a "zoned decimal" format used on IBM mainframes. Z
format encodes the sign as part of the final digit, which must be one of
the following:
0123456789
{ABCDEFGHI
}JKLMNOPQR
where the characters on each line represent digits 0 through 9 in order. Characters on the first two lines indicate a positive sign; those on the third indicate a negative sign.
On output, Z fields are padded on the left with spaces. On
input, leading and trailing spaces are ignored. Any character in an
input field other than spaces, the digit characters above, and .
causes the field to be read as system-missing.
The decimal point character for input and output is always .,
even if the decimal point character is a comma (see SET DECIMAL).
Nonzero, negative values output in Z format are marked as
negative even when no nonzero digits are output. For example, -0.2 is
output in Z1.0 format as J. The "negative zero" value supported
by most machines is output as positive.
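A sketch of zoned decimal input, using inline data:

DATA LIST /x 1-4 (Z,1).  /* Z4.1: one implied decimal place.
BEGIN DATA.
102J
END DATA.
LIST.                    /* J encodes a final digit 1 with a negative sign, so X is -102.1.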
Binary and Hexadecimal Numeric Formats
The binary and hexadecimal formats are primarily designed for
compatibility with existing machine formats, not for human
readability. All of them therefore have an F format as default
output format. Some of these formats are only portable between
machines with compatible byte ordering (endianness).
Binary formats use byte values that in text files are interpreted
as special control functions, such as carriage return and line feed.
Thus, data in binary formats should not be included in syntax files or
read from data files with variable-length records, such as ordinary
text files. They may be read from or written to data files with
fixed-length records. See FILE HANDLE, for information on
working with fixed-length records.
P and PK Formats
These are binary-coded decimal formats, in which every byte (except
the last, in P format) represents two decimal digits. The
most-significant 4 bits of the first byte is the most-significant
decimal digit, the least-significant 4 bits of the first byte is the
next decimal digit, and so on.
In P format, the most-significant 4 bits of the last byte are the
least-significant decimal digit. The least-significant 4 bits
represent the sign: decimal 15 indicates a negative value, decimal 13
indicates a positive value.
Numbers are rounded downward on output. The system-missing value and numbers outside representable range are output as zero.
The maximum field width is 16. Decimal places may range from 0 up to the number of decimal digits represented by the field.
The default output format is an F format with twice the input
field width, plus one column for a decimal point (if decimal places
were requested).
IB and PIB Formats
These are integer binary formats. IB reads and writes 2's
complement binary integers, and PIB reads and writes unsigned binary
integers. The byte ordering is by default the host machine's, but
SET RIB may be used to select a
specific byte ordering for reading and SET WIB, similarly, for writing.
The maximum field width is 8. Decimal places may range from 0 up to the number of decimal digits in the largest value representable in the field width.
The default output format is an F format whose width is the
number of decimal digits in the largest value representable in the
field width, plus 1 if the format has decimal places.
RB Format
This is a binary format for real numbers. It reads and writes the
host machine's floating-point format. The byte ordering is by default
the host machine's, but SET RIB may
be used to select a specific byte ordering for reading and SET WIB, similarly, for writing.
The field width should be 4, for 32-bit floating-point numbers, or 8, for 64-bit floating-point numbers. Other field widths do not produce useful results. The maximum field width is 8. No decimal places may be specified.
The default output format is F8.2.
PIBHEX and RBHEX Formats
These are hexadecimal formats, for reading and writing binary formats where each byte has been recoded as a pair of hexadecimal digits.
A hexadecimal field consists solely of hexadecimal digits 0...9
and A...F. Uppercase and lowercase are accepted on input; output is
in uppercase.
Other than the hexadecimal representation, these formats are
equivalent to PIB and RB formats, respectively. However, bytes in
PIBHEX format are always ordered with the most-significant byte
first (big-endian order), regardless of the host machine's native byte
order or PSPP settings.
Field widths must be even and between 2 and 16. RBHEX format
allows no decimal places; PIBHEX allows as many decimal places as a
PIB format with half the given width.
Time and Date Formats
In PSPP, a "time" is an interval. The time formats translate between human-friendly descriptions of time intervals and PSPP's internal representation of time intervals, which is simply the number of seconds in the interval. PSPP has three time formats:
| Time Format | Template | Example |
|---|---|---|
| MTIME | MM:SS.ss | 91:17.01 |
| TIME | hh:MM:SS.ss | 01:31:17.01 |
| DTIME | DD HH:MM:SS.ss | 00 04:31:17.01 |
A "date" is a moment in the past or the future. Internally, PSPP represents a date as the number of seconds since the "epoch", midnight, Oct. 14, 1582. The date formats translate between human-readable dates and PSPP's numeric representation of dates and times. PSPP has several date formats:
| Date Format | Template | Example |
|---|---|---|
| DATE | dd-mmm-yyyy | 01-OCT-1978 |
| ADATE | mm/dd/yyyy | 10/01/1978 |
| EDATE | dd.mm.yyyy | 01.10.1978 |
| JDATE | yyyyjjj | 1978274 |
| SDATE | yyyy/mm/dd | 1978/10/01 |
| QYR | q Q yyyy | 3 Q 1978 |
| MOYR | mmm yyyy | OCT 1978 |
| WKYR | ww WK yyyy | 40 WK 1978 |
| DATETIME | dd-mmm-yyyy HH:MM:SS.ss | 01-OCT-1978 04:31:17.01 |
| YMDHMS | yyyy-mm-dd HH:MM:SS.ss | 1978-10-01 04:31:17.01 |
The templates in the preceding tables describe how the time and date formats are input and output:
-
dd
Day of month, from 1 to 31. Always output as two digits. -
mm
mmm
Month. In output,mmis output as two digits,mmmas the first three letters of an English month name (January, February, ...). In input, both of these formats, plus Roman numerals, are accepted. -
yyyy
Year. In output,DATETIMEandYMDHMSalways produce 4-digit years; other formats can produce a 2- or 4-digit year. The century assumed for 2-digit years depends on theEPOCHsetting. In output, a year outside the epoch causes the whole field to be filled with asterisks (*). -
jjj
Day of year (Julian day), from 1 to 366. This is exactly three digits giving the count of days from the start of the year. January 1 is considered day 1. -
q
Quarter of year, from 1 to 4. Quarters start on January 1, April 1, July 1, and October 1. -
ww
Week of year, from 1 to 53. Output as exactly two digits. January 1 is the first day of week 1. -
DD
Count of days, which may be positive or negative. Output as at least two digits. -
hh
Count of hours, which may be positive or negative. Output as at least two digits. -
HH
Hour of day, from 0 to 23. Output as exactly two digits. -
MM
In MTIME, count of minutes, which may be positive or negative. Output as at least two digits. In other formats, minute of hour, from 0 to 59. Output as exactly two digits.
-
SS.ss
Seconds within minute, from 0 to 59. The integer part is output as exactly two digits. On output, seconds and fractional seconds may or may not be included, depending on field width and decimal places. On input, seconds and fractional seconds are optional. The DECIMAL setting controls the character accepted and displayed as the decimal point (see SET DECIMAL). For output, the date and time formats use the delimiters indicated in the table. For input, date components may be separated by spaces or by one of the characters
-, /, ., or ,, and time components may be separated by spaces or :. On input, the Q separating quarter from year and the WK separating week from year may be uppercase or lowercase, and the spaces around them are optional. On input, all time and date formats accept any amount of leading and trailing white space.
The maximum width for time and date formats is 40 columns. Minimum input and output width for each of the time and date formats is shown below:
| Format | Min. Input Width | Min. Output Width | Option |
|---|---|---|---|
DATE | 8 | 9 | 4-digit year |
ADATE | 8 | 8 | 4-digit year |
EDATE | 8 | 8 | 4-digit year |
JDATE | 5 | 5 | 4-digit year |
SDATE | 8 | 8 | 4-digit year |
QYR | 4 | 6 | 4-digit year |
MOYR | 6 | 6 | 4-digit year |
WKYR | 6 | 8 | 4-digit year |
DATETIME | 17 | 17 | seconds |
YMDHMS | 12 | 16 | seconds |
MTIME | 4 | 5 | |
TIME | 5 | 5 | seconds |
DTIME | 8 | 8 | seconds |
In the table, "Option" describes what increased output width enables:
-
"4-digit year": A field 2 columns wider than the minimum includes a 4-digit year. (DATETIME and YMDHMS formats always include a 4-digit year.) -
"seconds": A field 3 columns wider than the minimum includes seconds as well as minutes. A field 5 columns wider than minimum, or more, can also include a decimal point and fractional seconds (but no more than allowed by the format's decimal places).
For the time and date formats, the default output format is the same as the input format, except that PSPP increases the field width, if necessary, to the minimum allowed for output.
Time or dates narrower than the field width are right-justified within the field.
When a time or date exceeds the field width, characters are trimmed from the end until it fits. This can occur in an unusual situation, e.g. with a year greater than 9999 (which adds an extra digit), or for a negative value on
MTIME, TIME, or DTIME (which adds a leading minus sign). The system-missing value is output as a period at the right end of the field.
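A sketch of reading and printing a date, using inline data; DATE11 allows the full dd-mmm-yyyy form with a 4-digit year:

DATA LIST LIST /d (DATE11).
BEGIN DATA.
01-OCT-1978
END DATA.
LIST.                    /* D prints as 01-OCT-1978.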
Date Component Formats
The WKDAY and MONTH formats provide input and output for the names of
weekdays and months, respectively.
On output, these formats convert a number between 1 and 7, for
WKDAY, or between 1 and 12, for MONTH, into the English name of a
day or month, respectively. If the name is longer than the field, it
is trimmed to fit. If the name is shorter than the field, it is
padded on the right with spaces. Values outside the valid range, and
the system-missing value, are output as all spaces.
On input, English weekday or month names (in uppercase or lowercase) are converted back to their corresponding numbers. Weekday and month names may be abbreviated to their first 2 or 3 letters, respectively.
The field width may range from 2 to 40, for WKDAY, or from 3 to
40, for MONTH. No decimal places are allowed.
The default output format is the same as the input format.
String Formats
The A and AHEX formats are the only ones that may be assigned to
string variables. Neither format allows any decimal places.
In A format, the entire field is treated as a string value. The
field width may range from 1 to 32,767, the maximum string width. The
default output format is the same as the input format.
In AHEX format, the field is composed of characters in a string
encoded as hex digit pairs. On output, hex digits are output in
uppercase; on input, uppercase and lowercase are both accepted. The
default output format is A format with half the input width.
Scratch Variables
Most of the time, variables don't retain their values between cases.
Instead, either they're being read from a data file or the active
dataset, in which case they assume the value read, or, if created with
COMPUTE or another transformation, they're initialized to the
system-missing value or to blanks, depending on type.
However, sometimes it's useful to have a variable that keeps its
value between cases. You can do this with
LEAVE, or you can use a "scratch
variable". Scratch variables are variables whose names begin with an
octothorpe (#).
Scratch variables have the same properties as variables left with
LEAVE: they retain their values between cases, and for the first
case they are initialized to 0 or blanks. They have the additional
property that they are deleted before the execution of any procedure.
For this reason, scratch variables can't be used for analysis. To use
a scratch variable in an analysis, use
COMPUTE to copy its value into an
ordinary variable, then use that ordinary variable in the analysis.
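For instance, a scratch variable can accumulate a value across cases and then be copied into an ordinary variable; the variable names here are illustrative:

COMPUTE #sum = #sum + x.  /* #SUM starts at 0 and is retained between cases.
COMPUTE total = #sum.     /* Copy into an ordinary variable for use in procedures.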
Files Used by PSPP
PSPP makes use of many files each time it runs. Some of these it reads, some it writes, some it creates. Here is a table listing the most important of these files:
-
command file
syntax file
These names (synonyms) refer to the file that contains instructions that tell PSPP what to do. The syntax file's name is specified on the PSPP command line. Syntax files can also be read with INCLUDE or INSERT. -
data file
Data files contain raw data in text or binary format. Data can also be embedded in a syntax file with BEGIN DATA and END DATA. -
listing file
One or more output files are created by PSPP each time it is run. The output files receive the tables and charts produced by statistical procedures. The output files may be in any number of formats, depending on how PSPP is configured. -
system file
System files are binary files that store a dictionary and a set of cases. GET and SAVE read and write system files. -
portable file
Portable files are files in a text-based format that store a dictionary and a set of cases. IMPORT and EXPORT read and write portable files.
File Handles
A "file handle" is a reference to a data file, system file, or portable
file. Most often, a file handle is specified as the name of a file as a
string, that is, enclosed within ' or ".
A file name string that begins or ends with | is treated as the
name of a command to pipe data to or from. You can use this feature to
read data over the network using a program such as curl (e.g. GET '|curl -s -S http://example.com/mydata.sav'), to read compressed data
from a file using a program such as zcat (e.g. GET '|zcat mydata.sav.gz'), and for many other purposes.
PSPP also supports declaring named file handles with the FILE HANDLE command. This command
associates an identifier of your choice (the file handle's name) with
a file. Later, the file handle name can be substituted for the name
of the file. When PSPP syntax accesses a file multiple times,
declaring a named file handle simplifies updating the syntax later to
use a different file. Use of FILE HANDLE is also required to read
data files in binary formats.
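Declaring and using a named file handle might look like this; the handle and file names are illustrative:

FILE HANDLE mydata /NAME='survey2023.sav'.
GET FILE=mydata.          /* The handle substitutes for the file name.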
In some circumstances, PSPP must distinguish whether a file handle
refers to a system file or a portable file. When this is necessary to
read a file, e.g. as an input file for GET or MATCH FILES, PSPP uses
the file's contents to decide. In the context of writing a file, e.g.
as an output file for SAVE or AGGREGATE, PSPP decides based on the
file's name: if it ends in .por (with any capitalization), then PSPP
writes a portable file; otherwise, PSPP writes a system file.
INLINE is reserved as a file handle name. It refers to the "data
file" embedded into the syntax file between BEGIN DATA and END DATA.
The file to which a file handle refers may be reassigned on a later
FILE HANDLE command if it is first closed using CLOSE FILE HANDLE.
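As an illustration, a named handle might be declared, used, closed, and reassigned like this (the file names here are hypothetical):

```
FILE HANDLE mydata /NAME='survey.sav'.
GET FILE=mydata.

CLOSE FILE HANDLE mydata.
FILE HANDLE mydata /NAME='survey-followup.sav'.
```

If the syntax later needs to read a different file, only the FILE HANDLE declaration has to change.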
Syntax Diagrams
The syntax of PSPP commands is presented in this manual with syntax diagrams.
A syntax diagram is a series of definitions of "nonterminals". Each
nonterminal is defined by its name, then ::=, then what the nonterminal
consists of. If a nonterminal has multiple definitions, then any of
them is acceptable. If the definition is empty, then one possible
expansion of that nonterminal is nothing. Otherwise, the definition
consists of a series of nonterminals and "terminals". The latter
represent single tokens and consist of:
-
KEYWORD
Any word written in uppercase is that literal syntax keyword. -
number
A real number. -
integer
An integer number. -
string
A string. -
var-name
A single variable name. -
=, /, +, -, etc.
Operators and punctuators. -
.
The end of the command. This is not necessarily an actual dot in the syntax file (see Forming Commands).
Some nonterminals are very common, so they are defined here in English for clarity:
-
var-list
A list of one or more variable names or the keyword ALL. -
expression
An expression.
The first nonterminal defined in a syntax diagram for a command is the entire syntax for that command.
Mathematical Expressions
Expressions share a common syntax each place they appear in PSPP commands. Expressions are made up of "operands", which can be numbers, strings, variable names, or invocations of functions, separated by "operators".
Boolean Values
Some PSPP operators and expressions work with Boolean values, which represent true/false conditions. Booleans have only three possible values: 0 (false), 1 (true), and system-missing (unknown). System-missing is neither true nor false and indicates that the true value is unknown.
Boolean-typed operands or function arguments must take on one of these three values. Other values are considered false, but provoke a warning when the expression is evaluated.
Strings and Booleans are not compatible, and neither may be used in place of the other.
Missing Values
Most numeric operators yield system-missing when given any system-missing operand. A string operator given any system-missing operand typically results in the empty string. Exceptions are listed under particular operator descriptions.
String user-missing values are not treated specially in expressions.
User-missing values for numeric variables are always transformed into
the system-missing value, except inside the arguments to the VALUE and
SYSMIS functions.
The missing-value functions can be used to precisely control how missing values are treated in expressions.
Order of Operations
The following table describes operator precedence. Smaller-numbered levels in the table have higher precedence. Within a level, operations are always performed from left to right.
1. ()
2. **
3. Unary + and -
4. * and /
5. Binary + and -
6. = >= > <= < <>
7. NOT
8. AND
9. OR
Operators
Every operator takes one or more operands as input and yields exactly one result as output. Depending on the operator, the operands may be strings or numbers. With few exceptions, operands may be full-fledged expressions in themselves.
Grouping Operators
Parentheses (()) are the grouping operators. Surround an expression
with parentheses to force early evaluation.
Parentheses also surround the arguments to functions, but in that situation they act as punctuators, not as operators.
Arithmetic Operators
The arithmetic operators take numeric operands and produce numeric results.
-
A + B
A - B
Addition and subtraction. -
A * B
Multiplication. If either A or B is 0, then the result is 0, even if the other operand is missing. -
A / B
Division. If A is 0, then the result is 0, even if B is missing. If B is zero, the result is system-missing. -
A ** B
A raised to the power B. If A is negative and B is not an integer, the result is system-missing. 0**0 is also system-missing. -
-A
Reverses the sign of A.
Logical Operators
The logical operators take logical operands and produce logical results, meaning "true or false." Logical operators are not true Boolean operators because they may also result in a system-missing value. See Boolean Values, above, for more information.
-
A AND B
A & B
True if both A and B are true, false otherwise. If one operand is false, the result is false even if the other is missing. If both operands are missing, the result is missing. -
A OR B
A | B
True if at least one of A and B is true. If one operand is true, the result is true even if the other operand is missing. If both operands are missing, the result is missing. -
NOT A
~A
True if A is false. If the operand is missing, then the result is missing.
The overall truth table for the binary logical operators is:
| A | B | A AND B | A OR B |
|---|---|---|---|
| false | false | false | false |
| false | true | false | true |
| true | false | false | true |
| true | true | true | true |
| false | missing | false | missing |
| true | missing | missing | true |
| missing | false | false | missing |
| missing | true | missing | true |
| missing | missing | missing | missing |
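The three-valued logic in the table above can be modeled in executable form. This is an illustrative sketch, not PSPP's implementation; Python's None stands in for the system-missing value:

```python
def tri_and(a, b):
    # False dominates: false AND anything is false, even missing.
    if a is False or b is False:
        return False
    if a is None or b is None:
        return None
    return True

def tri_or(a, b):
    # True dominates: true OR anything is true, even missing.
    if a is True or b is True:
        return True
    if a is None or b is None:
        return None
    return False

def tri_not(a):
    # NOT of a missing value is missing.
    return None if a is None else not a
```

Note how each binary operator only yields missing when neither operand alone decides the outcome, matching every row of the truth table.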
Relational Operators
The relational operators take numeric or string operands and produce Boolean results.
Strings cannot be compared to numbers. When strings of different lengths are compared, the shorter string is right-padded with spaces to match the length of the longer string.
The results of string comparisons, other than tests for equality or inequality, depend on the character set in use. String comparisons are case-sensitive.
-
A EQ B
A = B
True if A is equal to B. -
A LE B
A <= B
True if A is less than or equal to B. -
A LT B
A < B
True if A is less than B. -
A GE B
A >= B
True if A is greater than or equal to B. -
A GT B
A > B
True if A is greater than B. -
A NE B
A ~= B
A <> B
True if A is not equal to B.
Functions
PSPP functions provide mathematical abilities above and beyond those possible using simple operators. Functions have a common syntax: each is composed of a function name followed by a left parenthesis, one or more arguments, and a right parenthesis.
Function names are not reserved. Their names are specially treated
only when followed by a left parenthesis, so that EXP(10) refers to
the constant value e raised to the 10th power, but EXP by itself
refers to the value of a variable called EXP.
Mathematical Functions
Mathematical functions take numeric arguments and produce numeric results.
-
ABS(X)
Results in the absolute value of X. -
EXP(EXPONENT)
Returns e (approximately 2.71828) raised to power EXPONENT. -
LG10(X)
Takes the base-10 logarithm of X. If X is not positive, the result is system-missing. -
LN(X)
Takes the base-e logarithm of X. If X is not positive, the result is system-missing. -
LNGAMMA(X)
Yields the base-e logarithm of the complete gamma of X. If X is a negative integer, the result is system-missing. -
MOD(A, B)
Returns the remainder (modulus) of A divided by B. If A is 0, then the result is 0, even if B is missing. If B is 0, the result is system-missing. -
MOD10(X)
Returns the remainder when X is divided by 10. If X is negative, MOD10(X) is negative or zero. -
RND(X [, MULT[, FUZZBITS]])
Rounds X to the nearest multiple of MULT (by default 1). Halves are rounded away from zero, as are values that fall short of halves by less than FUZZBITS of errors in the least-significant bits of X. If FUZZBITS is not specified then the default is taken from SET FUZZBITS, which is 6 unless overridden. -
SQRT(X)
Takes the square root of X. If X is negative, the result is system-missing. -
TRUNC(X [, MULT[, FUZZBITS]])
Rounds X to a multiple of MULT, toward zero. For the default MULT of 1, this is equivalent to discarding the fractional part of X. Values that fall short of a multiple of MULT by less than FUZZBITS of errors in the least-significant bits of X are rounded away from zero. If FUZZBITS is not specified then the default is taken from SET FUZZBITS, which is 6 unless overridden.
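RND's halves-away-from-zero behavior differs from the round-to-even rule used by some languages. A simplified Python sketch of RND (omitting the FUZZBITS correction, which compensates for floating-point error):

```python
import math

def rnd(x, mult=1.0):
    # Round x to the nearest multiple of mult, halves away from zero.
    # This sketch omits the FUZZBITS fuzz that PSPP applies.
    q = x / mult
    return math.copysign(math.floor(abs(q) + 0.5), q) * mult
```

For example, rnd(2.5) yields 3.0 and rnd(-2.5) yields -3.0, whereas Python's built-in round(2.5) yields 2 under round-to-even.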
Trigonometric Functions
Trigonometric functions take numeric arguments and produce numeric results.
-
ARCOS(X)
ACOS(X)
Takes the arccosine, in radians, of X. Results in system-missing if X is not between -1 and 1 inclusive. This function is a PSPP extension. -
ARSIN(X)
ASIN(X)
Takes the arcsine, in radians, of X. Results in system-missing if X is not between -1 and 1 inclusive. -
ARTAN(X)
ATAN(X)
Takes the arctangent, in radians, of X. -
COS(ANGLE)
Takes the cosine of ANGLE, which should be in radians. -
SIN(ANGLE)
Takes the sine of ANGLE, which should be in radians. -
TAN(ANGLE)
Takes the tangent of ANGLE, which should be in radians. Results in system-missing at values of ANGLE that are too close to odd multiples of π/2.
Missing-Value Functions
Missing-value functions take various numeric arguments and yield various types of results. Except where otherwise stated below, the normal rules of evaluation apply within expression arguments to these functions. In particular, user-missing values for numeric variables are converted to system-missing values.
-
MISSING(EXPR)
When EXPR is simply the name of a numeric variable, returns 1 if the variable has the system-missing value or if it is user-missing. For any other value, 0 is returned. If EXPR is any other kind of expression, the function returns 1 if the value is system-missing, 0 otherwise. -
NMISS(EXPR [, EXPR]...)
Each argument must be a numeric expression. Returns the number of system-missing values in the list, which may include variable ranges using the VAR1 TO VAR2 syntax. -
NVALID(EXPR [, EXPR]...)
Each argument must be a numeric expression. Returns the number of values in the list that are not system-missing. The list may include variable ranges using the VAR1 TO VAR2 syntax. -
SYSMIS(EXPR)
Returns 1 if EXPR has the system-missing value, 0 otherwise. -
VALUE(VARIABLE)
VALUE(VECTOR(INDEX))
Prevents the user-missing values of the variable or vector element from being transformed into system-missing values, and always results in its actual value, whether it is valid, user-missing, or system-missing.
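The counting behavior of NMISS and NVALID can be sketched as follows; this is an illustrative model, with Python's None standing in for system-missing:

```python
def nmiss(*args):
    # Count system-missing values (modeled here as None).
    return sum(a is None for a in args)

def nvalid(*args):
    # Count values that are not system-missing.
    return sum(a is not None for a in args)
```

The two functions always sum to the number of arguments given.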
Set Membership Functions
Set membership functions determine whether a value is a member of a set. They take a set of numeric arguments or a set of string arguments, and produce Boolean results.
String comparisons are performed according to the rules given for Relational Operators. User-missing string values are treated as valid values.
-
ANY(VALUE, SET [, SET]...)
Returns true if VALUE is equal to any of the SET values, and false otherwise. For numeric arguments, returns system-missing if VALUE is system-missing or if all the values in SET are system-missing. -
RANGE(VALUE, LOW, HIGH [, LOW, HIGH]...)
Returns true if VALUE is in any of the intervals bounded by LOW and HIGH, inclusive, and false otherwise. LOW and HIGH must be given in pairs. Returns system-missing if any HIGH is less than its LOW or, for numeric arguments, if VALUE is system-missing or if all the LOW-HIGH pairs contain one (or two) system-missing values. A pair does not match VALUE if either LOW or HIGH is missing, even if VALUE equals the non-missing endpoint.
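The pairwise interval test of RANGE can be sketched like this; it is an illustrative model only, with None standing in for missing endpoints and the full missing-value propagation rules simplified away:

```python
def value_in_range(value, *bounds):
    # Bounds come in LOW, HIGH pairs; a pair with a missing (None)
    # endpoint never matches, as described above.
    pairs = zip(bounds[0::2], bounds[1::2])
    return any(lo is not None and hi is not None and lo <= value <= hi
               for lo, hi in pairs)
```

For example, value_in_range(5, 1, 3, 4, 6) is true because 5 falls in the second interval [4, 6].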
Statistical Functions
Statistical functions compute descriptive statistics on a list of values. Some statistics can be computed on numeric or string values; others can only be computed on numeric values. Their results have the same type as their arguments. The current case's weight has no effect on statistical functions.
These functions' argument lists may include entire ranges of
variables using the VAR1 TO VAR2 syntax.
Unlike most functions, statistical functions can return non-missing
values even when some of their arguments are missing. Most
statistical functions, by default, require only one non-missing value
to have a non-missing return; CFVAR, SD, and VARIANCE require 2.
These defaults can be increased (but not decreased) by appending a dot
and the minimum number of valid arguments to the function name. For
example, MEAN.3(X, Y, Z) would only return non-missing if all of
X, Y, and Z were valid.
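The dot-suffix minimum can be modeled as follows; this is an illustrative sketch (not PSPP's implementation), with None standing in for system-missing:

```python
def mean_min_valid(min_valid, *args):
    # Model of MEAN.n(...): the mean of the non-missing arguments,
    # or missing (None) if fewer than min_valid of them are valid.
    valid = [a for a in args if a is not None]
    if len(valid) < min_valid:
        return None
    return sum(valid) / len(valid)
```

So MEAN(X, Y, Z) with one missing argument still yields a mean, while MEAN.3(X, Y, Z) would yield system-missing.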
-
CFVAR(NUMBER, NUMBER[, ...])
Results in the coefficient of variation of the values of NUMBER. (The coefficient of variation is the standard deviation divided by the mean.) -
MAX(VALUE, VALUE[, ...])
Results in the value of the greatest VALUE. The VALUEs may be numeric or string. -
MEAN(NUMBER, NUMBER[, ...])
Results in the mean of the values of NUMBER. -
MEDIAN(NUMBER, NUMBER[, ...])
Results in the median of the values of NUMBER. Given an even number of nonmissing arguments, yields the mean of the two middle values. -
MIN(VALUE, VALUE[, ...])
Results in the value of the least VALUE. The VALUEs may be numeric or string. -
SD(NUMBER, NUMBER[, ...])
Results in the standard deviation of the values of NUMBER. -
SUM(NUMBER, NUMBER[, ...])
Results in the sum of the values of NUMBER. -
VARIANCE(NUMBER, NUMBER[, ...])
Results in the variance of the values of NUMBER.
String Functions
String functions take various arguments and return various results.
-
CONCAT(STRING, STRING[, ...])
Returns a string consisting of each STRING in sequence. CONCAT("abc", "def", "ghi") has a value of "abcdefghi". The resultant string is truncated to a maximum of 32767 bytes. -
INDEX(HAYSTACK, NEEDLE)
RINDEX(HAYSTACK, NEEDLE)
Returns a positive integer indicating the position of the first (for INDEX) or last (for RINDEX) occurrence of NEEDLE in HAYSTACK. Returns 0 if HAYSTACK does not contain NEEDLE. Returns 1 if NEEDLE is the empty string. -
INDEX(HAYSTACK, NEEDLES, NEEDLE_LEN)
RINDEX(HAYSTACK, NEEDLES, NEEDLE_LEN)
Divides NEEDLES into multiple needles, each with length NEEDLE_LEN, which must be a positive integer that evenly divides the length of NEEDLES. Searches HAYSTACK for the occurrences of each needle and returns a positive integer indicating the byte index of the beginning of the first (for INDEX) or last (for RINDEX) needle it finds. Returns 0 if HAYSTACK does not contain any of the needles, or if NEEDLES is the empty string. -
LENGTH(STRING)
Returns the number of bytes in STRING. -
LOWER(STRING)
Returns a string identical to STRING except that all uppercase letters are changed to lowercase letters. The definitions of "uppercase" and "lowercase" are encoding-dependent. -
LPAD(STRING, LENGTH[, PADDING])
RPAD(STRING, LENGTH[, PADDING])
If STRING is at least LENGTH bytes long, these functions return STRING unchanged. Otherwise, they return STRING padded with PADDING on the left side (for LPAD) or right side (for RPAD) to LENGTH bytes. These functions report an error and return STRING unchanged if LENGTH is missing or bigger than 32767.
The PADDING argument must not be an empty string and defaults to a space if not specified. If its length does not evenly fit the amount of space needed for padding, the returned string will be shorter than LENGTH. -
LTRIM(STRING[, PADDING])
RTRIM(STRING[, PADDING])
These functions return STRING, after removing leading (for LTRIM) or trailing (for RTRIM) copies of PADDING. If PADDING is omitted, these functions remove spaces (but not tabs or other white space). These functions return STRING unchanged if PADDING is the empty string. -
NUMBER(STRING, FORMAT)
Returns the number produced when STRING is interpreted according to format specifier FORMAT. If the format width W is less than the length of STRING, then only the first W bytes in STRING are used, e.g. NUMBER("123", F3.0) and NUMBER("1234", F3.0) both have value 123. If W is greater than STRING's length, then it is treated as if it were right-padded with spaces. If STRING is not in the correct format for FORMAT, system-missing is returned. -
REPLACE(HAYSTACK, NEEDLE, REPLACEMENT[, N])
Returns string HAYSTACK with instances of NEEDLE replaced by REPLACEMENT. If nonnegative integer N is specified, it limits the maximum number of replacements; otherwise, all instances of NEEDLE are replaced. -
STRING(NUMBER, FORMAT)
Returns a string corresponding to NUMBER in the format given by format specifier FORMAT. For example, STRING(123.56, F5.1) has the value "123.6". -
STRUNC(STRING, N)
Returns STRING, first trimming it to at most N bytes, then removing trailing spaces (but not tabs or other white space). Returns an empty string if N is zero or negative, or STRING unchanged if N is missing. -
SUBSTR(STRING, START)
Returns a string consisting of the value of STRING from position START onward. Returns an empty string if START is system-missing, less than 1, or greater than the length of STRING. -
SUBSTR(STRING, START, COUNT)
Returns a string consisting of the first COUNT bytes from STRING beginning at position START. Returns an empty string if START or COUNT is system-missing, if START is less than 1 or greater than the number of bytes in STRING, or if COUNT is less than 1. Returns a string shorter than COUNT bytes if START + COUNT - 1 is greater than the number of bytes in STRING. Examples: SUBSTR("abcdefg", 3, 2) has value "cd"; SUBSTR("nonsense", 4, 10) has the value "sense". -
UPCASE(STRING)
Returns STRING, changing lowercase letters to uppercase letters.
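The "padding must fit evenly" rule for LPAD and RPAD can be sketched as follows; an illustrative model, not PSPP's implementation:

```python
def lpad(s, length, padding=" "):
    # Pad s on the left to `length` using whole copies of `padding`;
    # if the padding does not fit evenly, the result falls short of
    # `length`, matching the behavior described above.
    if not padding or len(s) >= length:
        return s
    copies = (length - len(s)) // len(padding)
    return padding * copies + s
```

For instance, lpad("abc", 10, "xy") needs 7 bytes of padding, but only three whole copies of "xy" fit, so the result "xyxyxyabc" is 9 bytes long rather than 10.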
Time and Date Functions
For compatibility, PSPP considers dates before 15 Oct 1582 invalid. Most time and date functions will not accept earlier dates.
Time and Date Representations
Times and dates are handled by PSPP as single numbers. A "time" is an interval. PSPP measures times in seconds. Thus, the following intervals correspond with the numeric values given:
| Interval | Numeric Value |
|---|---|
| 10 minutes | 600 |
| 1 hour | 3,600 |
| 1 day, 3 hours, 10 seconds | 97,210 |
| 40 days | 3,456,000 |
A "date", on the other hand, is a particular instant in the past or the future. PSPP represents a date as the number of seconds since the midnight preceding 14 Oct 1582. Thus, the midnights preceding the dates given below correspond with the numeric PSPP dates given:
| Date | Numeric Value |
|---|---|
| 15 Oct 1582 | 86,400 |
| 4 Jul 1776 | 6,113,318,400 |
| 1 Jan 1900 | 10,010,390,400 |
| 1 Oct 1978 | 12,495,427,200 |
| 24 Aug 1995 | 13,028,601,600 |
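The date values in the table can be reproduced with a short sketch using Python's proleptic Gregorian calendar, counting seconds from the midnight preceding 14 Oct 1582:

```python
from datetime import datetime

PSPP_EPOCH = datetime(1582, 10, 14)  # midnight preceding 14 Oct 1582

def pspp_date(year, month, day):
    # Seconds from the PSPP epoch to the midnight preceding the
    # given date.
    return (datetime(year, month, day) - PSPP_EPOCH).total_seconds()
```

For example, pspp_date(1582, 10, 15) yields 86,400, one full day of seconds after the epoch.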
Constructing Times
These functions take numeric arguments and return numeric values that represent times.
-
TIME.DAYS(NDAYS)
Returns a time corresponding to NDAYS days. -
TIME.HMS(NHOURS, NMINS, NSECS)
Returns a time corresponding to NHOURS hours, NMINS minutes, and NSECS seconds. The arguments may not have mixed signs: if any of them are positive, then none may be negative, and vice versa.
Examining Times
These functions take numeric arguments in PSPP time format and give numeric results.
-
CTIME.DAYS(TIME)
Results in the number of days and fractional days in TIME. -
CTIME.HOURS(TIME)
Results in the number of hours and fractional hours in TIME. -
CTIME.MINUTES(TIME)
Results in the number of minutes and fractional minutes in TIME. -
CTIME.SECONDS(TIME)
Results in the number of seconds and fractional seconds in TIME. (CTIME.SECONDS does nothing; CTIME.SECONDS(X) is equivalent to X.)
Constructing Dates
These functions take numeric arguments and give numeric results that represent dates. Arguments taken by these functions are:
-
DAY
Refers to a day of the month between 1 and 31. Day 0 is also accepted and refers to the final day of the previous month. Days 29, 30, and 31 are accepted even in months that have fewer days and refer to a day near the beginning of the following month. -
MONTH
Refers to a month of the year between 1 and 12. Months 0 and 13 are also accepted and refer to the last month of the preceding year and the first month of the following year, respectively. -
QUARTER
Refers to a quarter of the year between 1 and 4. The quarters of the year begin on the first day of months 1, 4, 7, and 10. -
WEEK
Refers to a week of the year between 1 and 53. -
YDAY
Refers to a day of the year between 1 and 366. -
YEAR
Refers to a year, 1582 or greater. Years between 0 and 99 are treated according to the epoch set with SET EPOCH, by default beginning 69 years before the current date.
If these functions' arguments are out-of-range, they are correctly normalized before conversion to date format. Non-integers are rounded toward zero.
-
DATE.DMY(DAY, MONTH, YEAR)
DATE.MDY(MONTH, DAY, YEAR)
Results in a date value corresponding to the midnight before day DAY of month MONTH of year YEAR. -
DATE.MOYR(MONTH, YEAR)
Results in a date value corresponding to the midnight before the first day of month MONTH of year YEAR. -
DATE.QYR(QUARTER, YEAR)
Results in a date value corresponding to the midnight before the first day of quarter QUARTER of year YEAR. -
DATE.WKYR(WEEK, YEAR)
Results in a date value corresponding to the midnight before the first day of week WEEK of year YEAR. -
DATE.YRDAY(YEAR, YDAY)
Results in a date value corresponding to the day YDAY of year YEAR.
Examining Dates
These functions take numeric arguments in PSPP date or time format and give numeric results. These names are used for arguments:
-
DATE
A numeric value in PSPP date format. -
TIME
A numeric value in PSPP time format. -
TIME-OR-DATE
A numeric value in PSPP time or date format.
The functions for examining dates are:
-
XDATE.DATE(TIME-OR-DATE)
For a time, results in the time corresponding to the number of whole days TIME-OR-DATE includes. For a date, results in the date corresponding to the latest midnight at or before TIME-OR-DATE; that is, gives the date that TIME-OR-DATE is in. -
XDATE.HOUR(TIME-OR-DATE)
For a time, results in the number of whole hours beyond the number of whole days represented by TIME-OR-DATE. For a date, results in the hour (as an integer between 0 and 23) corresponding to TIME-OR-DATE. -
XDATE.JDAY(DATE)
Results in the day of the year (as an integer between 1 and 366) corresponding to DATE. -
XDATE.MDAY(DATE)
Results in the day of the month (as an integer between 1 and 31) corresponding to DATE. -
XDATE.MINUTE(TIME-OR-DATE)
Results in the number of minutes (as an integer between 0 and 59) after the last hour in TIME-OR-DATE. -
XDATE.MONTH(DATE)
Results in the month of the year (as an integer between 1 and 12) corresponding to DATE. -
XDATE.QUARTER(DATE)
Results in the quarter of the year (as an integer between 1 and 4) corresponding to DATE. -
XDATE.SECOND(TIME-OR-DATE)
Results in the number of whole seconds after the last whole minute (as an integer between 0 and 59) in TIME-OR-DATE. -
XDATE.TDAY(DATE)
Results in the number of whole days from 14 Oct 1582 to DATE. -
XDATE.TIME(DATE)
Results in the time of day at the instant corresponding to DATE, as a time value. This is the number of seconds since midnight on the day corresponding to DATE. -
XDATE.WEEK(DATE)
Results in the week of the year (as an integer between 1 and 53) corresponding to DATE. -
XDATE.WKDAY(DATE)
Results in the day of week (as an integer between 1 and 7) corresponding to DATE, where 1 represents Sunday. -
XDATE.YEAR(DATE)
Returns the year (as an integer 1582 or greater) corresponding to DATE.
Time and Date Arithmetic
Ordinary arithmetic operations on dates and times often produce sensible results. Adding a time to, or subtracting one from, a date produces a new date that much earlier or later. The difference of two dates yields the time between those dates. Adding two times produces the combined time. Multiplying a time by a scalar produces a time that many times longer. Since times and dates are just numbers, the ordinary addition and subtraction operators are employed for these purposes.
Adding two dates does not produce a useful result.
Dates and times may have very large values. Thus, it is not a good idea to take powers of these values; also, the accuracy of some procedures may be affected. If necessary, convert times or dates in seconds to some other unit, like days or years, before performing analysis.
PSPP supplies a few functions for date arithmetic:
-
DATEDIFF(DATE2, DATE1, UNIT)
Returns the span of time from DATE1 to DATE2 in terms of UNIT, which must be a quoted string, one of years, quarters, months, weeks, days, hours, minutes, and seconds. The result is an integer, truncated toward zero.
One year is considered to span from a given date to the same month, day, and time of day the next year. Thus, from January 1 of one year to January 1 the next year is considered to be a full year, but February 29 of a leap year to the following February 28 is not. Similarly, one month spans from a given day of the month to the same day of the following month. Thus, there is never a full month from Jan. 31 of a given year to any day in the following February.
-
DATESUM(DATE, QUANTITY, UNIT[, METHOD])
Returns DATE advanced by the given QUANTITY of the specified UNIT, which must be one of the strings years, quarters, months, weeks, days, hours, minutes, and seconds.
When UNIT is years, quarters, or months, only the integer part of QUANTITY is considered. Adding one of these units can cause the day of the month to exceed the number of days in the month. In this case, the METHOD comes into play: if it is omitted or specified as closest (as a quoted string), then the resulting day is the last day of the month; otherwise, if it is specified as rollover, then the extra days roll over into the following month.
When UNIT is weeks, days, hours, minutes, or seconds, the QUANTITY is not rounded to an integer and METHOD, if specified, is ignored.
Miscellaneous Functions
-
LAG(VARIABLE[, N])
VARIABLE must be a numeric or string variable name. LAG yields the value of that variable for the case N before the current one. Results in system-missing (for numeric variables) or blanks (for string variables) for the first N cases.
LAG obtains values from the cases that become the new active dataset after a procedure executes. Thus, LAG will not return values from cases dropped by transformations such as SELECT IF, and transformations like COMPUTE that modify data will change the values returned by LAG. These are both the case whether these transformations precede or follow the use of LAG.
If LAG is used before TEMPORARY, then the values it returns are those in cases just before TEMPORARY. LAG may not be used after TEMPORARY.
If omitted, N defaults to 1. Otherwise, N must be a small positive constant integer. There is no explicit limit, but use of a large value will increase memory consumption. -
YRMODA(YEAR, MONTH, DAY)
YEAR is a year, either between 0 and 99 or at least 1582. Unlike other PSPP date functions, years between 0 and 99 always correspond to 1900 through 1999. MONTH is a month between 1 and 13. DAY is a day between 0 and 31. A DAY of 0 refers to the last day of the previous month, and a MONTH of 13 refers to the first month of the next year. YEAR must be in range. YEAR, MONTH, and DAY must all be integers.
YRMODA results in the number of days between 15 Oct 1582 and the date specified, plus one. The date passed to YRMODA must be on or after 15 Oct 1582. 15 Oct 1582 has a value of 1. -
VALUELABEL(VARIABLE)
Returns a string matching the label associated with the current value of VARIABLE. If the current value of VARIABLE has no associated label, then this function returns the empty string. VARIABLE may be a numeric or string variable.
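YRMODA's day counting can be sketched with Python's date arithmetic; this is an illustrative model that omits the MONTH = 13 and DAY = 0 normalizations described above:

```python
from datetime import date

def yrmoda(year, month, day):
    # Days from 15 Oct 1582 to the given date, plus one, so that
    # 15 Oct 1582 itself yields 1.  Two-digit years always map to
    # 1900-1999, unlike other PSPP date functions.
    if 0 <= year <= 99:
        year += 1900
    return (date(year, month, day) - date(1582, 10, 15)).days + 1
```

Note the off-by-one relative to XDATE.TDAY: YRMODA counts from 15 Oct 1582 with that day as 1, while dates themselves are measured from the midnight preceding 14 Oct 1582.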
Statistical Distribution Functions
PSPP can calculate several functions of standard statistical distributions. These functions are named systematically based on the function and the distribution. The table below describes the statistical distribution functions in general:
-
PDF.DIST(X[, PARAM...])
Probability density function for DIST. The domain of X depends on DIST. For continuous distributions, the result is the density of the probability function at X, and the range is nonnegative real numbers. For discrete distributions, the result is the probability of X. -
CDF.DIST(X[, PARAM...])
Cumulative distribution function for DIST, that is, the probability that a random variate drawn from the distribution is less than X. The domain of X depends on DIST. The result is a probability. -
SIG.DIST(X[, PARAM...])
Tail probability function for DIST, that is, the probability that a random variate drawn from the distribution is greater than X. The domain of X depends on DIST. The result is a probability. Only a few distributions include a SIG function. -
IDF.DIST(P[, PARAM...])
Inverse distribution function for DIST, the value of X for which the CDF would yield P. The value of P is a probability. The range depends on DIST and is identical to the domain for the corresponding CDF. -
RV.DIST([PARAM...])
Random variate function for DIST. The range depends on the distribution. -
NPDF.DIST(X[, PARAM...])
Noncentral probability density function. The result is the density of the given noncentral distribution at X. The domain of X depends on DIST. The range is nonnegative real numbers. Only a few distributions include an NPDF function. -
NCDF.DIST(X[, PARAM...])
Noncentral cumulative distribution function for DIST, that is, the probability that a random variate drawn from the given noncentral distribution is less than X. The domain of X depends on DIST. The result is a probability. Only a few distributions include an NCDF function.
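The general CDF/tail-probability relationship described above can be illustrated with the standard normal distribution. This Python sketch uses the error function rather than PSPP's own implementation, and PSPP itself provides SIG only for a few distributions (such as CHISQ and F), so the tail function here is purely illustrative:

```python
import math

def cdf_normal(x, mu=0.0, sigma=1.0):
    # Normal cumulative distribution function via the error function:
    # P(variate < x).
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def tail_normal(x, mu=0.0, sigma=1.0):
    # Tail probability: P(variate > x) = 1 - CDF.
    return 1.0 - cdf_normal(x, mu, sigma)
```

This mirrors the shorthand CDFNORM(X), which is equivalent to CDF.NORMAL(X, 0, 1).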
Continuous Distributions
The following continuous distributions are available:
-
PDF.BETA(X, A, B)
CDF.BETA(X, A, B)
IDF.BETA(P, A, B)
RV.BETA(A, B)
NPDF.BETA(X, A, B, LAMBDA)
NCDF.BETA(X, A, B, LAMBDA)
Beta distribution with shape parameters A and B. The noncentral distribution takes an additional parameter LAMBDA. Constraints: A > 0, B > 0, LAMBDA >= 0, 0 <= X <= 1, 0 <= P <= 1. -
PDF.BVNOR(X0, X1, ρ)
CDF.BVNOR(X0, X1, ρ)
Bivariate normal distribution of two standard normal variables with correlation coefficient ρ. Two variates X0 and X1 must be provided. Constraints: -1 <= ρ <= 1, 0 <= P <= 1. -
PDF.CAUCHY(X, A, B)
CDF.CAUCHY(X, A, B)
IDF.CAUCHY(P, A, B)
RV.CAUCHY(A, B)
Cauchy distribution with location parameter A and scale parameter B. Constraints: B > 0, 0 < P < 1. -
CDF.CHISQ(X, DF)
SIG.CHISQ(X, DF)
IDF.CHISQ(P, DF)
RV.CHISQ(DF)
NCDF.CHISQ(X, DF, LAMBDA)
Chi-squared distribution with DF degrees of freedom. The noncentral distribution takes an additional parameter LAMBDA. Constraints: DF > 0, LAMBDA > 0, X >= 0, 0 <= P < 1. -
PDF.EXP(X, A)
CDF.EXP(X, A)
IDF.EXP(P, A)
RV.EXP(A)
Exponential distribution with scale parameter A. The inverse of A represents the rate of decay. Constraints: A > 0, X >= 0, 0 <= P < 1. -
PDF.XPOWER(X, A, B)
RV.XPOWER(A, B)
Exponential power distribution with positive scale parameter A and nonnegative power parameter B. Constraints: A > 0, B >= 0, X >= 0, 0 <= P <= 1. This distribution is a PSPP extension. -
PDF.F(X, DF1, DF2)
CDF.F(X, DF1, DF2)
SIG.F(X, DF1, DF2)
IDF.F(P, DF1, DF2)
RV.F(DF1, DF2)
F-distribution of two chi-squared deviates with DF1 and DF2 degrees of freedom. The noncentral distribution takes an additional parameter LAMBDA. Constraints: DF1 > 0, DF2 > 0, LAMBDA >= 0, X >= 0, 0 <= P < 1. -
PDF.GAMMA(X, A, B)
CDF.GAMMA(X, A, B)
IDF.GAMMA(P, A, B)
RV.GAMMA(A, B)
Gamma distribution with shape parameter A and scale parameter B. Constraints: A > 0, B > 0, X >= 0, 0 <= P < 1. -
PDF.LANDAU(X)
RV.LANDAU()
Landau distribution. -
PDF.LAPLACE(X, A, B)
CDF.LAPLACE(X, A, B)
IDF.LAPLACE(P, A, B)
RV.LAPLACE(A, B)
Laplace distribution with location parameter A and scale parameter B. Constraints: B > 0, 0 < P < 1. -
RV.LEVY(C, ɑ)
Levy symmetric alpha-stable distribution with scale C and exponent ɑ. Constraints: 0 < ɑ <= 2. -
RV.LVSKEW(C, ɑ, β)
Levy skew alpha-stable distribution with scale C, exponent ɑ, and skewness parameter β. Constraints: 0 < ɑ <= 2, -1 <= β <= 1. -
PDF.LOGISTIC(X, A, B)
CDF.LOGISTIC(X, A, B)
IDF.LOGISTIC(P, A, B)
RV.LOGISTIC(A, B)
Logistic distribution with location parameter A and scale parameter B. Constraints: B > 0, 0 < P < 1. -
PDF.LNORMAL(X, A, B)
CDF.LNORMAL(X, A, B)
IDF.LNORMAL(P, A, B)
RV.LNORMAL(A, B)
Lognormal distribution with parameters A and B. Constraints: A > 0, B > 0, X >= 0, 0 <= P < 1. -
PDF.NORMAL(X, μ, σ)
CDF.NORMAL(X, μ, σ)
IDF.NORMAL(P, μ, σ)
RV.NORMAL(μ, σ)
Normal distribution with mean μ and standard deviation σ. Constraints: σ > 0, 0 < P < 1. Three additional functions are available as shorthand: -
CDFNORM(X)
Equivalent to CDF.NORMAL(X, 0, 1). -
PROBIT(P)
Equivalent to IDF.NORMAL(P, 0, 1). -
NORMAL(σ)
Equivalent to RV.NORMAL(0, σ).
-
-
PDF.NTAIL(X, A, σ)
RV.NTAIL(A, σ)
Normal tail distribution with lower limit A and standard deviation σ. This distribution is a PSPP extension. Constraints: A > 0, X > A, 0 < P < 1. -
PDF.PARETO(X, A, B)
CDF.PARETO(X, A, B)
IDF.PARETO(P, A, B)
RV.PARETO(A, B)
Pareto distribution with threshold parameter A and shape parameter B. Constraints: A > 0, B > 0, X >= A, 0 <= P < 1. -
PDF.RAYLEIGH(X, σ)
CDF.RAYLEIGH(X, σ)
IDF.RAYLEIGH(P, σ)
RV.RAYLEIGH(σ)
Rayleigh distribution with scale parameter σ. This distribution is a PSPP extension. Constraints: σ > 0, X > 0. -
PDF.RTAIL(X, A, σ)
RV.RTAIL(A, σ)
Rayleigh tail distribution with lower limit A and scale parameter σ. This distribution is a PSPP extension. Constraints: A > 0, σ > 0, X > A. -
PDF.T(X, DF)
CDF.T(X, DF)
IDF.T(P, DF)
RV.T(DF)
T-distribution with DF degrees of freedom. The noncentral distribution takes an additional parameter λ. Constraints: DF > 0, 0 < P < 1. -
PDF.T1G(X, A, B)
CDF.T1G(X, A, B)
IDF.T1G(P, A, B)
Type-1 Gumbel distribution with parameters A and B. This distribution is a PSPP extension. Constraints: 0 < P < 1. -
PDF.T2G(X, A, B)
CDF.T2G(X, A, B)
IDF.T2G(P, A, B)
Type-2 Gumbel distribution with parameters A and B. This distribution is a PSPP extension. Constraints: X > 0, 0 < P < 1. -
PDF.UNIFORM(X, A, B)
CDF.UNIFORM(X, A, B)
IDF.UNIFORM(P, A, B)
RV.UNIFORM(A, B)
Uniform distribution with parameters A and B. Constraints: A <= X <= B, 0 <= P <= 1. An additional function is available as shorthand: UNIFORM(B)
Equivalent to RV.UNIFORM(0, B).
-
PDF.WEIBULL(X, A, B)
CDF.WEIBULL(X, A, B)
IDF.WEIBULL(P, A, B)
RV.WEIBULL(A, B)
Weibull distribution with parameters A and B. Constraints: A > 0, B > 0, X >= 0, 0 <= P < 1.
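The CDF and IDF functions above can be used anywhere an expression is accepted. As a brief sketch (the variable names and parameter values here are illustrative, not from any real dataset), the following converts a score to a percentile and finds a 95% cutoff under an assumed normal model:

COMPUTE PCTILE = 100 * CDF.NORMAL(SCORE, 500, 100).
COMPUTE CUTOFF = IDF.NORMAL(0.95, 500, 100).
EXECUTE.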
Discrete Distributions
The following discrete distributions are available:
-
PDF.BERNOULLI(X)
CDF.BERNOULLI(X, P)
RV.BERNOULLI(P)
Bernoulli distribution with probability of success P. Constraints: X = 0 or 1, 0 <= P <= 1. -
PDF.BINOM(X, N, P)
CDF.BINOM(X, N, P)
RV.BINOM(N, P)
Binomial distribution with N trials and probability of success P. Constraints: integer N > 0, 0 <= P <= 1, integer X <= N. -
PDF.GEOM(X, N, P)
CDF.GEOM(X, N, P)
RV.GEOM(N, P)
Geometric distribution with probability of success P. Constraints: 0 <= P <= 1, integer X > 0. -
PDF.HYPER(X, A, B, C)
CDF.HYPER(X, A, B, C)
RV.HYPER(A, B, C)
Hypergeometric distribution when B objects out of A are drawn and C of the available objects are distinctive. Constraints: integer A > 0, integer B <= A, integer C <= A, integer X >= 0. -
PDF.LOG(X, P)
RV.LOG(P)
Logarithmic distribution with probability parameter P. Constraints: 0 <= P < 1, X >= 1. -
PDF.NEGBIN(X, N, P)
CDF.NEGBIN(X, N, P)
RV.NEGBIN(N, P)
Negative binomial distribution with number of successes parameter N and probability of success parameter P. Constraints: integer N >= 0, 0 < P <= 1, integer X >= 1. -
PDF.POISSON(X, μ)
CDF.POISSON(X, μ)
RV.POISSON(μ)
Poisson distribution with mean μ. Constraints: μ > 0, integer X >= 0.
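The RV functions can be combined with transformations to simulate draws from these distributions. In this sketch (the variable names are illustrative), each case receives one binomial and one Poisson variate:

COMPUTE HEADS = RV.BINOM(10, 0.5).
COMPUTE ARRIVALS = RV.POISSON(3.2).
EXECUTE.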
System Variables
The system variables described below may be used only in expressions.
-
$CASENUM
Case number of the case being processed. This changes as cases are added, deleted, and reordered. -
$DATE
Date the PSPP process was started, in format A9, following the pattern DD-MMM-YY. -
$DATE11
Date the PSPP process was started, in format A11, following the pattern DD-MMM-YYYY. -
$JDATE
Number of days between 15 Oct 1582 and the time the PSPP process was started. -
$LENGTH
Page length, in lines, in format F11. -
$SYSMIS
System missing value, in format F1. -
$TIME
Number of seconds between midnight 14 Oct 1582 and the time the active dataset was read, in format F20. -
$WIDTH
Page width, in characters, in format F3.
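System variables may appear in any expression. For instance, this sketch (the variable names are illustrative) flags every tenth case and recodes a sentinel value to system-missing:

COMPUTE TENTH = MOD($CASENUM, 10) = 0.
IF (AGE = 999) AGE = $SYSMIS.
EXECUTE.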
Data Input and Output
Data are the focus of the PSPP language. Each datum belongs to a “case” (also called an “observation”). Each case represents an individual or “experimental unit”. For example, in the results of a survey, the names of the respondents, their sex, age, etc. and their responses are all data, and the data pertaining to a single respondent form a case. This chapter examines the PSPP commands for defining variables and reading and writing data. There are alternative commands to read data from predefined sources such as system files or databases.
These commands tell PSPP how to read data, but the data will not actually be read until a procedure is executed.
BEGIN DATA…END DATA
BEGIN DATA.
...
END DATA.
BEGIN DATA and END DATA can be used to embed raw ASCII data in a
PSPP syntax file. DATA LIST or another input
procedure must be used before BEGIN DATA. BEGIN DATA and END DATA must be used together. END DATA must appear by itself on a
single line, with no leading white space and exactly one space between
the words END and DATA.
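A minimal sketch of inline data (the variable names are illustrative):

DATA LIST LIST /X Y.
BEGIN DATA.
1 2
3 4
END DATA.
LIST.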
CLOSE FILE HANDLE
CLOSE FILE HANDLE HANDLE_NAME.
CLOSE FILE HANDLE disassociates the name of a file
handle from the file with which it was associated. The
only specification is the name of the handle to close. Afterward, the
handle name may be reused in a new FILE HANDLE command.
The file named INLINE, which represents data entered between BEGIN DATA and END DATA, cannot be closed. Attempts to close it with
CLOSE FILE HANDLE have no effect.
CLOSE FILE HANDLE is a PSPP extension.
DATAFILE ATTRIBUTE
DATAFILE ATTRIBUTE
ATTRIBUTE=NAME('VALUE') [NAME('VALUE')]...
ATTRIBUTE=NAME[INDEX]('VALUE') [NAME[INDEX]('VALUE')]...
DELETE=NAME [NAME]...
DELETE=NAME[INDEX] [NAME[INDEX]]...
DATAFILE ATTRIBUTE adds, modifies, or removes user-defined
attributes associated with the active dataset. Custom data file
attributes are not interpreted by PSPP, but they are saved as part of
system files and may be used by other software that reads them.
Use the ATTRIBUTE subcommand to add or modify a custom data file
attribute. Specify the name of the attribute, followed by the desired
value, in parentheses, as a quoted string. Attribute names that begin
with $ are reserved for PSPP's internal use, and attribute names
that begin with @ or $@ are not displayed by most PSPP commands
that display other attributes. Other attribute names are not treated
specially.
Attributes may also be organized into arrays. To assign to an array
element, add an integer array index enclosed in square brackets ([ and
]) between the attribute name and value. Array indexes start at 1,
not 0. An attribute array that has a single element (number 1) is not
distinguished from a non-array attribute.
Use the DELETE subcommand to delete an attribute. Specify an
attribute name by itself to delete an entire attribute, including all
array elements for attribute arrays. Specify an attribute name followed
by an array index in square brackets to delete a single element of an
attribute array. In the latter case, all the array elements numbered
higher than the deleted element are shifted down, filling the vacated
position.
To associate custom attributes with particular variables, instead
of with the entire active dataset, use VARIABLE ATTRIBUTE instead.
DATAFILE ATTRIBUTE takes effect immediately. It is not affected by
conditional and looping structures such as DO IF or LOOP.
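As a sketch (the attribute names here are hypothetical), the following sets a plain attribute and one array element, then deletes the array element:

DATAFILE ATTRIBUTE
  ATTRIBUTE=Source('Survey 2024') Note[1]('first pass').
DATAFILE ATTRIBUTE
  DELETE=Note[1].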
DATASET commands
DATASET NAME NAME [WINDOW={ASIS,FRONT}].
DATASET ACTIVATE NAME [WINDOW={ASIS,FRONT}].
DATASET COPY NAME [WINDOW={MINIMIZED,HIDDEN,FRONT}].
DATASET DECLARE NAME [WINDOW={MINIMIZED,HIDDEN,FRONT}].
DATASET CLOSE {NAME,*,ALL}.
DATASET DISPLAY.
The DATASET commands simplify use of multiple datasets within a
PSPP session. They allow datasets to be created and destroyed. At any
given time, most PSPP commands work with a single dataset, called the
active dataset.
The DATASET NAME command gives the active dataset the specified name,
or if it already had a name, it renames it. If another dataset already
had the given name, that dataset is deleted.
The DATASET ACTIVATE command selects the named dataset, which must
already exist, as the active dataset. Before switching the active
dataset, any pending transformations are executed, as if EXECUTE had
been specified. If the active dataset is unnamed before switching, then
it is deleted and becomes unavailable after switching.
The DATASET COPY command creates a new dataset with the specified
name, whose contents are a copy of the active dataset. Any pending
transformations are executed, as if EXECUTE had been specified, before
making the copy. If a dataset with the given name already exists, it is
replaced. If the name is the name of the active dataset, then the
active dataset becomes unnamed.
The DATASET DECLARE command creates a new dataset that is
initially "empty," that is, it has no dictionary or data. If a
dataset with the given name already exists, this has no effect. The
new dataset can be used with commands that support output to a
dataset, such as AGGREGATE.
The DATASET CLOSE command deletes a dataset. If the active dataset
is specified by name, or if * is specified, then the active dataset
becomes unnamed. If a different dataset is specified by name, then it
is deleted and becomes unavailable. Specifying ALL deletes all datasets
except for the active dataset, which becomes unnamed.
The DATASET DISPLAY command lists all the currently defined datasets.
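A short sketch of a typical sequence (the dataset names are illustrative): name the active dataset, make a copy, switch between the two, and list what is defined:

DATASET NAME first.
DATASET COPY backup.
DATASET ACTIVATE backup.
DATASET ACTIVATE first.
DATASET DISPLAY.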
Many DATASET commands accept an optional WINDOW subcommand. In the
PSPPIRE GUI, the value given for this subcommand influences how the
dataset's window is displayed. Outside the GUI, the WINDOW subcommand
has no effect. The valid values are:
-
ASIS
Do not change how the window is displayed. This is the default for DATASET NAME and DATASET ACTIVATE. -
FRONT
Raise the dataset's window to the top. Make it the default dataset for running syntax. -
MINIMIZED
Display the window "minimized" to an icon. Prefer other datasets for running syntax. This is the default for DATASET COPY and DATASET DECLARE. -
HIDDEN
Hide the dataset's window. Prefer other datasets for running syntax.
DATA LIST
Used to read text or binary data, DATA LIST is the most fundamental
data-reading command. Even the more sophisticated input methods use
DATA LIST commands as a building block. Understanding DATA LIST is
important to understanding how to use PSPP to read your data files.
There are two major variants of DATA LIST, which are fixed format
and free format. In addition, free format has a minor variant, list
format, which is discussed in terms of its differences from vanilla free
format.
Each form of DATA LIST is described in detail below.
See GET DATA for a command that offers a few
enhancements over DATA LIST and that may be substituted for DATA LIST
in many situations.
DATA LIST FIXED
DATA LIST [FIXED]
{TABLE,NOTABLE}
[FILE='FILE_NAME' [ENCODING='ENCODING']]
[RECORDS=RECORD_COUNT]
[END=END_VAR]
[SKIP=RECORD_COUNT]
/[line_no] VAR_SPEC...
where each VAR_SPEC takes one of the forms
VAR_LIST START-END [TYPE_SPEC]
VAR_LIST (FORTRAN_SPEC)
DATA LIST FIXED is used to read data files that have values at
fixed positions on each line of single-line or multiline records. The
keyword FIXED is optional.
The FILE subcommand must be used if input is to be taken from an
external file. It may be used to specify a file name as a string or a
file handle. If the FILE
subcommand is not used, then input is assumed to be specified within
the command file using BEGIN DATA...END DATA.
The ENCODING subcommand may only be used if the FILE subcommand is
also used. It specifies the character encoding of the file. See
INSERT, for information on supported encodings.
The optional RECORDS subcommand, which takes a single integer as an
argument, is used to specify the number of lines per record. If
RECORDS is not specified, then the number of lines per record is
calculated from the list of variable specifications later in DATA LIST.
The END subcommand is only useful in conjunction with INPUT PROGRAM.
The optional SKIP subcommand specifies a number of records to skip
at the beginning of an input file. It can be used to skip over a row
that contains variable names, for example.
DATA LIST can optionally output a table describing how the data
file is read. The TABLE subcommand enables this output, and NOTABLE
disables it. The default is to output the table.
The list of variables to be read from the data list must come last.
Each line in the data record is introduced by a slash (/).
Optionally, a line number may follow the slash. Following, any number
of variable specifications may be present.
Each variable specification consists of a list of variable names
followed by a description of their location on the input line. Sets
of consecutive variables may be abbreviated with
TO, e.g. VAR1 TO VAR5. There are two ways to specify the location
of the variable on the line: columnar style and FORTRAN style.
In columnar style, the starting column and ending column for the
field are specified after the variable name, separated by a dash
(-). For instance, the third through fifth columns on a line would
be specified 3-5. By default, variables are considered to be in
F format. (Use SET FORMAT to change the default.)
In columnar style, to use a variable format other than the default,
specify the format type in parentheses after the column numbers. For
instance, for alphanumeric A format, use (A).
In addition, implied decimal places can be specified in parentheses
after the column numbers. As an example, suppose that a data file has a
field in which the characters 1234 should be interpreted as having the
value 12.34. Then this field has two implied decimal places, and the
corresponding specification would be (2). If a field that has implied
decimal places contains a decimal point, then the implied decimal places
are not applied.
Changing the variable format and adding implied decimal places can be
done together; for instance, (N,5).
When using columnar style, the input and output width of each variable is computed from the field width. The number of variables specified must evenly divide the field width.
FORTRAN style is an altogether different approach to specifying field locations. With this approach, a list of variable input format specifications, separated by commas, are placed after the variable names inside parentheses. Each format specifier advances as many characters into the input line as it uses.
Implied decimal places also exist in FORTRAN style. A format
specification with D decimal places also has D implied decimal places.
In addition to the standard formats, FORTRAN style defines some extensions:
-
X
Advance the current column on this line by one character position. -
T<X>
Set the current column on this line to column <X>, with column numbers considered to begin with 1 at the left margin. -
NEWREC<X>
Skip forward <X> lines in the current record, resetting the active column to the left margin. -
Repeat count
Any format specifier may be preceded by a number. This causes the action of that format specifier to be repeated the specified number of times. -
(SPEC1, ..., SPECN)
Use () to group specifiers together. This is most useful when preceded by a repeat count. Groups may be nested.
FORTRAN and columnar styles may be freely intermixed. Columnar style leaves the active column immediately after the ending column specified. Record motion using NEWREC in FORTRAN style also applies to later FORTRAN and columnar specifiers.
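As an illustrative sketch of these FORTRAN-style extensions (the column layout and variable names are hypothetical), the specification below reads three 2-digit numeric fields using a repeat count, then jumps to column 20 for a string:

DATA LIST NOTABLE /X1 TO X3 (3(F2.0)) NAME (T20, A10).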
Example 1
DATA LIST TABLE /NAME 1-10 (A) INFO1 TO INFO3 12-17 (1).
BEGIN DATA.
John Smith 102311
Bob Arnold 122015
Bill Yates 918 6
END DATA.
Defines the following variables:
-
NAME, a 10-character-wide string variable, in columns 1 through 10. -
INFO1, a numeric variable, in columns 12 through 13. -
INFO2, a numeric variable, in columns 14 through 15. -
INFO3, a numeric variable, in columns 16 through 17.
The BEGIN DATA/END DATA commands cause three cases to be
defined:
| Case | NAME | INFO1 | INFO2 | INFO3 |
|---|---|---|---|---|
| 1 | John Smith | 10 | 23 | 11 |
| 2 | Bob Arnold | 12 | 20 | 15 |
| 3 | Bill Yates | 9 | 18 | 6 |
The TABLE keyword causes PSPP to print out a table describing the
four variables defined.
Example 2
DATA LIST FILE="survey.dat"
/ID 1-5 NAME 7-36 (A) SURNAME 38-67 (A) MINITIAL 69 (A)
/Q01 TO Q50 7-56
/.
Defines the following variables:
-
ID, a numeric variable, in columns 1-5 of the first record. -
NAME, a 30-character string variable, in columns 7-36 of the first record. -
SURNAME, a 30-character string variable, in columns 38-67 of the first record. -
MINITIAL, a 1-character string variable, in column 69 of the first record. -
Fifty variables
Q01, Q02, Q03, ..., Q49, Q50, all numeric, Q01 in column 7, Q02 in column 8, ..., Q49 in column 55, Q50 in column 56, all in the second record.
Cases are separated by a blank record.
Data is read from file survey.dat in the current directory.
DATA LIST FREE
DATA LIST FREE
[({TAB,'C'}, ...)]
[{NOTABLE,TABLE}]
[FILE='FILE_NAME' [ENCODING='ENCODING']]
[SKIP=N_RECORDS]
/VAR_SPEC...
where each VAR_SPEC takes one of the forms
VAR_LIST [(TYPE_SPEC)]
VAR_LIST *
In free format, the input data is, by default, structured as a
series of fields separated by spaces, tabs, or line breaks. If the
current DECIMAL separator is DOT, then commas
are also treated as field separators. Each field's content may be
unquoted, or it may be quoted with a pair of apostrophes (') or
double quotes ("). Unquoted white space separates fields but is not
part of any field. Any mix of spaces, tabs, and line breaks is
equivalent to a single space for the purpose of separating fields, but
consecutive commas will skip a field.
Alternatively, delimiters can be specified explicitly, as a
parenthesized, comma-separated list of single-character strings
immediately following FREE. The word TAB may also be used to
specify a tab character as a delimiter. When delimiters are specified
explicitly, only the given characters, plus line breaks, separate
fields. Furthermore, leading spaces at the beginnings of fields are
not trimmed, consecutive delimiters define empty fields, and no form
of quoting is allowed.
The NOTABLE and TABLE subcommands are as in DATA LIST FIXED
above. NOTABLE is the default.
The FILE, SKIP, and ENCODING subcommands are as in DATA LIST FIXED above.
The variables to be parsed are given as a single list of variable
names. This list must be introduced by a single slash (/). The set
of variable names may contain format
specifications in
parentheses. Format specifications apply to all variables back to the
previous parenthesized format specification.
An asterisk on its own has the same effect as (F8.0), assigning
the variables preceding it input/output format F8.0.
Specified field widths are ignored on input (although all normal limits on field width apply), but they are honored on output.
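For instance, comma-separated input can be read by listing the delimiter explicitly. In this sketch, the empty field between consecutive commas on the second line yields a system-missing value:

DATA LIST FREE (",") /X Y Z.
BEGIN DATA.
1,2,3
4,,6
END DATA.
LIST.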
DATA LIST LIST
DATA LIST LIST
[({TAB,'C'}, ...)]
[{NOTABLE,TABLE}]
[FILE='FILE_NAME' [ENCODING='ENCODING']]
[SKIP=RECORD_COUNT]
/VAR_SPEC...
where each VAR_SPEC takes one of the forms
VAR_LIST [(TYPE_SPEC)]
VAR_LIST *
With one exception, DATA LIST LIST is syntactically and
semantically equivalent to DATA LIST FREE. The exception is that each
input line is expected to correspond to exactly one input record. If
more or fewer fields are found on an input line than expected, an
appropriate diagnostic is issued.
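A brief sketch (the variable names are illustrative); each input line supplies exactly one case, and a line with a missing field would draw a diagnostic:

DATA LIST LIST NOTABLE /NAME (A10) AGE *.
BEGIN DATA.
Alice 30
Bob 25
END DATA.
LIST.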
END CASE
END CASE.
END CASE is used only within INPUT PROGRAM to
output the current case.
END FILE
END FILE.
END FILE is used only within INPUT PROGRAM to
terminate the current input program.
FILE HANDLE
Syntax Overview
For text files:
FILE HANDLE HANDLE_NAME
/NAME='FILE_NAME'
[/MODE=CHARACTER]
[/ENDS={CR,CRLF}]
[/TABWIDTH=TAB_WIDTH]
[ENCODING='ENCODING']
For binary files in native encoding with fixed-length records:
FILE HANDLE HANDLE_NAME
/NAME='FILE_NAME'
/MODE=IMAGE
[/LRECL=REC_LEN]
[ENCODING='ENCODING']
For binary files in native encoding with variable-length records:
FILE HANDLE HANDLE_NAME
/NAME='FILE_NAME'
/MODE=BINARY
[/LRECL=REC_LEN]
[ENCODING='ENCODING']
For binary files encoded in EBCDIC:
FILE HANDLE HANDLE_NAME
/NAME='FILE_NAME'
/MODE=360
/RECFORM={FIXED,VARIABLE,SPANNED}
[/LRECL=REC_LEN]
[ENCODING='ENCODING']
Details
Use FILE HANDLE to associate a file handle name with a file and its
attributes, so that later commands can refer to the file by its handle
name. Names of text files can be specified directly on commands that
access files, so that FILE HANDLE is only needed when a file is not an
ordinary file containing lines of text. However, FILE HANDLE may be
used even for text files, and it may be easier to specify a file's name
once and later refer to it by an abstract handle.
Specify the file handle name as the identifier immediately following
the FILE HANDLE command name. The identifier INLINE is reserved
for representing data embedded in the syntax file (see BEGIN
DATA). The file handle name must not already have been
used in a previous invocation of FILE HANDLE, unless it has been
closed with CLOSE FILE HANDLE.
The effect and syntax of FILE HANDLE depends on the selected MODE:
-
In CHARACTER mode, the default, the data file is read as a text file. Each text line is read as one record.
In CHARACTER mode only, tabs are expanded to spaces by input programs, except by DATA LIST FREE with explicitly specified delimiters. Each tab is 4 characters wide by default, but TABWIDTH (a PSPP extension) may be used to specify an alternate width. Use a TABWIDTH of 0 to suppress tab expansion.
A file written in CHARACTER mode by default uses the line ends of the system on which PSPP is running, that is, on Windows, the default is CR LF line ends, and on other systems the default is LF only. Specify ENDS as CR or CRLF to override the default. PSPP reads files using either convention on any kind of system, regardless of ENDS. -
In IMAGE mode, the data file is treated as a series of fixed-length binary records. LRECL should be used to specify the record length in bytes, with a default of 1024. On input, it is an error if an IMAGE file's length is not an integer multiple of the record length. On output, each record is padded with spaces or truncated, if necessary, to make it exactly the correct length. -
In BINARY mode, the data file is treated as a series of variable-length binary records. LRECL may be specified, but its value is ignored. The data for each record is both preceded and followed by a 32-bit signed integer in little-endian byte order that specifies the length of the record. (This redundancy permits records in these files to be efficiently read in reverse order, although PSPP always reads them in forward order.) The length does not include either integer. -
Mode 360 reads and writes files in formats first used for tapes in the 1960s on IBM mainframe operating systems and still supported today by the modern successors of those operating systems. For more information, see OS/400 Tape and Diskette Device Programming, available on IBM's website.
Alphanumeric data in mode 360 files are encoded in EBCDIC. PSPP translates EBCDIC to or from the host's native format as necessary on input or output, using an ASCII/EBCDIC translation that is one-to-one, so that a "round trip" from ASCII to EBCDIC back to ASCII, or vice versa, always yields exactly the original data.
The RECFORM subcommand is required in mode 360. The precise file format depends on its setting: -
F
FIXED
This record format is equivalent to IMAGE mode, except for EBCDIC translation.
IBM documentation calls this *F (fixed-length, deblocked) format. -
V
VARIABLE
The file comprises a sequence of zero or more variable-length blocks. Each block begins with a 4-byte "block descriptor word" (BDW). The first two bytes of the BDW are an unsigned integer in big-endian byte order that specifies the length of the block, including the BDW itself. The other two bytes of the BDW are ignored on input and written as zeros on output. Following the BDW, the remainder of each block is a sequence of one or more variable-length records, each of which in turn begins with a 4-byte "record descriptor word" (RDW) that has the same format as the BDW. Following the RDW, the remainder of each record is the record data.
The maximum length of a record in VARIABLE mode is 65,527 bytes: 65,535 bytes (the maximum value of a 16-bit unsigned integer), minus 4 bytes for the BDW, minus 4 bytes for the RDW.
In mode VARIABLE, LRECL specifies a maximum, not a fixed, record length, in bytes. The default is 8,192.
IBM documentation calls this *VB (variable-length, blocked, unspanned) format. -
VS
SPANNED
This format is like VARIABLE, except that logical records may be split among multiple physical records (called "segments") or blocks. In SPANNED mode, the third byte of each RDW is called the segment control character (SCC). Odd SCC values cause the segment to be appended to a record buffer maintained in memory; even values also append the segment and then flush its contents to the input procedure. Canonically, SCC value 0 designates a record not spanned among multiple segments, and values 1 through 3 designate the first segment, the last segment, or an intermediate segment, respectively, within a multi-segment record. The record buffer is also flushed at end of file regardless of the final record's SCC.
The maximum length of a logical record in SPANNED mode is limited only by memory available to PSPP. Segments are limited to 65,527 bytes, as in VARIABLE mode.
This format is similar to what IBM documentation calls *VS (variable-length, deblocked, spanned) format.
In mode 360, fields of type A that extend beyond the end of a record read from disk are padded with spaces in the host's native character set, which are then translated from EBCDIC to the native character set. Thus, when the host's native character set is based on ASCII, these fields are effectively padded with character X'80'. This wart is implemented for compatibility. -
The NAME subcommand specifies the name of the file associated with
the handle. It is required in all modes but SCRATCH mode, in which its
use is forbidden.
The ENCODING subcommand specifies the encoding of text in the
file. For reading text files in CHARACTER mode, all of the forms
described for ENCODING on the INSERT command are
supported. For reading in other file-based modes, encoding
autodetection is not supported; if the specified encoding requests
autodetection then the default encoding is used. This is also true
when a file handle is used for writing a file in any mode.
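As a sketch, the following associates a handle with a fixed-record binary file and then reads a field from each record (the file name and layout are illustrative):

FILE HANDLE mydata
  /NAME='data.img'
  /MODE=IMAGE
  /LRECL=80.
DATA LIST FILE=mydata /X 1-10.
LIST.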
INPUT PROGRAM…END INPUT PROGRAM
INPUT PROGRAM.
... input commands ...
END INPUT PROGRAM.
INPUT PROGRAM...END INPUT PROGRAM specifies a complex input
program. By placing data input commands within INPUT PROGRAM, PSPP
programs can take advantage of more complex file structures than are
available with only DATA LIST.
The first sort of extended input program is to simply put multiple
DATA LIST commands within the INPUT PROGRAM. This will cause all of
the data files to be read in parallel. Input will stop when end of file
is reached on any of the data files.
Transformations, such as conditional and looping constructs, can also
be included within INPUT PROGRAM. These can be used to combine input
from several data files in more complex ways. However, input will still
stop when end of file is reached on any of the data files.
To prevent INPUT PROGRAM from terminating at the first end of
file, use the END subcommand on DATA LIST. This subcommand takes
a variable name, which should be a numeric scratch
variable. (It need not
be a scratch variable, but otherwise the results can be surprising.)
The value of this variable is set to 0 when reading the data file, or
1 when end of file is encountered.
Two additional commands are useful in conjunction with INPUT PROGRAM. END CASE is the first. Normally each loop through the
INPUT PROGRAM structure produces one case. END CASE controls
exactly when cases are output. When END CASE is used, looping from
the end of INPUT PROGRAM to the beginning does not cause a case to be
output.
END FILE is the second. When the END subcommand is used on DATA LIST, there is no way for the INPUT PROGRAM construct to stop
looping, so an infinite loop results. END FILE, when executed, stops
the flow of input data and passes out of the INPUT PROGRAM structure.
INPUT PROGRAM must contain at least one DATA LIST or END FILE
command.
Example 1: Read two files in parallel to the end of the shorter
The following example reads variable X from file a.txt and
variable Y from file b.txt. If one file is shorter than the other
then the extra data in the longer file is ignored.
INPUT PROGRAM.
DATA LIST NOTABLE FILE='a.txt'/X 1-10.
DATA LIST NOTABLE FILE='b.txt'/Y 1-10.
END INPUT PROGRAM.
LIST.
Example 2: Read two files in parallel, supplementing the shorter
The following example also reads variable X from a.txt and
variable Y from b.txt. If one file is shorter than the other then
it continues reading the longer to its end, setting the other variable
to system-missing.
INPUT PROGRAM.
NUMERIC #A #B.
DO IF NOT #A.
DATA LIST NOTABLE END=#A FILE='a.txt'/X 1-10.
END IF.
DO IF NOT #B.
DATA LIST NOTABLE END=#B FILE='b.txt'/Y 1-10.
END IF.
DO IF #A AND #B.
END FILE.
END IF.
END CASE.
END INPUT PROGRAM.
LIST.
Example 3: Concatenate two files (version 1)
The following example reads data from file a.txt, then from b.txt,
and concatenates them into a single active dataset.
INPUT PROGRAM.
NUMERIC #A #B.
DO IF #A.
DATA LIST NOTABLE END=#B FILE='b.txt'/X 1-10.
DO IF #B.
END FILE.
ELSE.
END CASE.
END IF.
ELSE.
DATA LIST NOTABLE END=#A FILE='a.txt'/X 1-10.
DO IF NOT #A.
END CASE.
END IF.
END IF.
END INPUT PROGRAM.
LIST.
Example 4: Concatenate two files (version 2)
This is another way to do the same thing as Example 3.
INPUT PROGRAM.
NUMERIC #EOF.
LOOP IF NOT #EOF.
DATA LIST NOTABLE END=#EOF FILE='a.txt'/X 1-10.
DO IF NOT #EOF.
END CASE.
END IF.
END LOOP.
COMPUTE #EOF = 0.
LOOP IF NOT #EOF.
DATA LIST NOTABLE END=#EOF FILE='b.txt'/X 1-10.
DO IF NOT #EOF.
END CASE.
END IF.
END LOOP.
END FILE.
END INPUT PROGRAM.
LIST.
Example 5: Generate random variates
The following example creates a dataset that consists of 50 random variates between 0 and 10.
INPUT PROGRAM.
LOOP #I=1 TO 50.
COMPUTE X=UNIFORM(10).
END CASE.
END LOOP.
END FILE.
END INPUT PROGRAM.
LIST /FORMAT=NUMBERED.
LIST
LIST
/VARIABLES=VAR_LIST
/CASES=FROM START_INDEX TO END_INDEX BY INCR_INDEX
/FORMAT={UNNUMBERED,NUMBERED} {WRAP,SINGLE}
The LIST procedure prints the values of specified variables to the
listing file.
The VARIABLES subcommand specifies the variables whose values are
to be printed. Keyword VARIABLES is optional. If the VARIABLES
subcommand is omitted then all variables in the active dataset are
printed.
The CASES subcommand can be used to specify a subset of cases to be
printed. Specify FROM and the case number of the first case to print,
TO and the case number of the last case to print, and BY and the
number of cases to advance between printing cases, or any subset of
those settings. If CASES is not specified then all cases are printed.
The FORMAT subcommand can be used to change the output format.
NUMBERED will print case numbers along with each case; UNNUMBERED,
the default, causes the case numbers to be omitted. The WRAP and
SINGLE settings are currently not used.
Case numbers start from 1. They are counted after all transformations have been considered.
LIST is a procedure. It causes the data to be read.
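For example, this sketch prints two variables for every second case among the first ten, with case numbers shown (the variable names are illustrative):

LIST /VARIABLES=X Y /CASES=FROM 1 TO 10 BY 2 /FORMAT=NUMBERED.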
NEW FILE
NEW FILE.
The NEW FILE command clears the dictionary and data from the current
active dataset.
PRINT
[OUTFILE='FILE_NAME']
[RECORDS=N_LINES]
[{NOTABLE,TABLE}]
[ENCODING='ENCODING']
[/[LINE_NO] ARG...]
ARG takes one of the following forms:
'STRING' [START]
VAR_LIST START-END [TYPE_SPEC]
VAR_LIST (FORTRAN_SPEC)
VAR_LIST *
The PRINT transformation writes variable data to the listing file
or an output file. PRINT is executed when a procedure causes the
data to be read. Follow PRINT by
EXECUTE to print variable data without
invoking a procedure.
All PRINT subcommands are optional. If no strings or variables are
specified, PRINT outputs a single blank line.
The OUTFILE subcommand specifies the file to receive the output.
The file may be a file name as a string or a file
handle. If OUTFILE is not
present then output is sent to PSPP's output listing file. When
OUTFILE is present, the output is written to the file in a plain
text format, with a space inserted at beginning of each output line,
even lines that otherwise would be blank.
The ENCODING subcommand may only be used if the OUTFILE
subcommand is also used. It specifies the character encoding of the
file. See INSERT, for information on supported
encodings.
The RECORDS subcommand specifies the number of lines to be output.
The number of lines may optionally be surrounded by parentheses.
TABLE will cause the PRINT command to output a table to the
listing file that describes what it will print to the output file.
NOTABLE, the default, suppresses this output table.
Introduce the strings and variables to be printed with a slash (/).
Optionally, the slash may be followed by a number indicating which
output line is specified. In the absence of this line number, the next
line number is specified. Multiple lines may be specified using
multiple slashes with the intended output for a line following its
respective slash.
Literal strings may be printed. Specify the string itself. Optionally the string may be followed by a column number, specifying the column on the line where the string should start. Otherwise, the string is printed at the current position on the line.
Variables to be printed can be specified in the same ways as
available for DATA LIST FIXED. In addition,
a variable list may be followed by an asterisk (*), which indicates
that the variables should be printed in their dictionary print formats,
separated by spaces. A variable list followed by a slash or the end of
command is interpreted in the same way.
If a FORTRAN type specification is used to move backwards on the current line, then text is written at that point on the line and the line is truncated to that length, although adding further text will again extend the line.
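As a sketch, assuming variables name and age exist in the active dataset, the following writes each case to a text file in the variables' dictionary print formats, then forces the data to be read:
PRINT OUTFILE='cases.txt' /name age *.
EXECUTE.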
PRINT EJECT
PRINT EJECT
OUTFILE='FILE_NAME'
RECORDS=N_LINES
{NOTABLE,TABLE}
/[LINE_NO] ARG...
ARG takes one of the following forms:
'STRING' [START-END]
VAR_LIST START-END [TYPE_SPEC]
VAR_LIST (FORTRAN_SPEC)
VAR_LIST *
PRINT EJECT advances to the beginning of a new output page in the
listing file or output file. It can also output data in the same way as
PRINT.
All PRINT EJECT subcommands are optional.
Without OUTFILE, PRINT EJECT ejects the current page in the
listing file, then it produces other output, if any is specified.
With OUTFILE, PRINT EJECT writes its output to the specified
file. The first line of output is written with 1 inserted in the
first column. Commonly, this is the only line of output. If additional
lines of output are specified, these additional lines are written with a
space inserted in the first column, as with PRINT.
See PRINT for more information on syntax and usage.
PRINT SPACE
PRINT SPACE [OUTFILE='file_name'] [ENCODING='ENCODING'] [n_lines].
PRINT SPACE prints one or more blank lines to an output file.
The OUTFILE subcommand is optional. It may be used to direct output
to a file specified by file name as a string or file
handle. If OUTFILE is not
specified then output is directed to the listing file.
The ENCODING subcommand may only be used if OUTFILE is also used.
It specifies the character encoding of the file. See
INSERT, for information on supported
encodings.
n_lines is also optional. If present, it is an
expression for the number of
blank lines to be printed. The expression must evaluate to a
nonnegative value.
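For instance, the following sketch (the file name report.txt is illustrative) writes three blank lines to that file each time a case is processed:
PRINT SPACE OUTFILE='report.txt' 3.
EXECUTE.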
REREAD
REREAD [FILE=handle] [COLUMN=column] [ENCODING='ENCODING'].
The REREAD transformation allows the previous input line in a data
file already processed by DATA LIST or another input command to be
re-read for further processing.
The FILE subcommand, which is optional, is used to specify the file
to have its line re-read. The file must be specified as the name of a
file handle. If FILE is not
specified then the file specified on the most recent DATA LIST
command is assumed.
By default, the line re-read is re-read in its entirety. With the
COLUMN subcommand, a prefix of the line can be exempted from
re-reading. Specify an
expression evaluating to the
first column that should be included in the re-read line. Columns are
numbered from 1 at the left margin.
The ENCODING subcommand may only be used if the FILE subcommand is
also used. It specifies the character encoding of the file. See
INSERT for information on supported encodings.
Issuing REREAD multiple times will not back up in the data file.
Instead, it will re-read the same line multiple times.
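The following sketch shows a typical use, assuming a hypothetical file in which column 1 holds a record type code and the remaining columns are laid out differently for each type; each line is re-read starting at column 2 and parsed according to its type:
INPUT PROGRAM.
DATA LIST NOTABLE /type 1.
DO IF type = 1.
REREAD COLUMN=2.
DATA LIST NOTABLE /x 1-5.
ELSE.
REREAD COLUMN=2.
DATA LIST NOTABLE /y 1-5.
END IF.
END CASE.
END INPUT PROGRAM.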
REPEATING DATA
REPEATING DATA
/STARTS=START-END
/OCCURS=N_OCCURS
/FILE='FILE_NAME'
/LENGTH=LENGTH
/CONTINUED[=CONT_START-CONT_END]
/ID=ID_START-ID_END=ID_VAR
/{TABLE,NOTABLE}
/DATA=VAR_SPEC...
where each VAR_SPEC takes one of the forms
VAR_LIST START-END [TYPE_SPEC]
VAR_LIST (FORTRAN_SPEC)
REPEATING DATA parses groups of data repeating in a uniform format,
possibly with several groups on a single line. Each group of data
corresponds with one case. REPEATING DATA may only be used within
INPUT PROGRAM. When used with DATA LIST, it can be used to parse groups of cases that
share a subset of variables but differ in their other data.
The STARTS subcommand is required. Specify a range of columns,
using literal numbers or numeric variable names. This range specifies
the columns on the first line that are used to contain groups of data.
The ending column is optional. If it is not specified, then the
record width of the input file is used. For the inline
file, this is 80 columns; for a file with fixed record
widths it is the record width; for other files it is 1024 characters
by default.
The OCCURS subcommand is required. It must be a number or the name
of a numeric variable. Its value is the number of groups present in the
current record.
The DATA subcommand is required. It must be the last subcommand
specified. It is used to specify the data present within each
repeating group. Column numbers are specified relative to the
beginning of a group at column 1. Data is specified in the same way
as with DATA LIST FIXED.
All other subcommands are optional.
FILE specifies the file to read, either a file name as a string or a
file handle. If FILE is not
present then the default is the last file handle used on the most
recent DATA LIST command.
By default REPEATING DATA will output a table describing how it
will parse the input data. Specifying NOTABLE will disable this
behavior; specifying TABLE will explicitly enable it.
The LENGTH subcommand specifies the length in characters of each
group. If it is not present then length is inferred from the DATA
subcommand. LENGTH may be a number or a variable name.
Normally all the data groups are expected to be present on a single
line. Use the CONTINUED command to indicate that data can be
continued onto additional lines. If data on continuation lines starts
at the left margin and continues through the entire field width, no
column specifications are necessary on CONTINUED. Otherwise, specify
the possible range of columns in the same way as on STARTS.
When data groups are continued from line to line, it is easy for
cases to get out of sync through careless hand editing. The ID
subcommand allows a case identifier to be present on each line of
repeating data groups. REPEATING DATA will check for the same
identifier on each line and report mismatches. Specify the range of
columns that the identifier will occupy, followed by an equals sign
(=) and the identifier variable name. The variable must already have
been declared with NUMERIC or another command.
REPEATING DATA should be the last command given within an INPUT PROGRAM. It should not be enclosed within
LOOP…END LOOP. Use DATA LIST before, not after,
REPEATING DATA.
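A minimal sketch, with illustrative variable names and column layout: each record carries an order number in columns 1-3, a group count in columns 5-6, and that many 6-character groups starting at column 8:
INPUT PROGRAM.
DATA LIST NOTABLE /order 1-3 n 5-6.
REPEATING DATA /STARTS=8 /OCCURS=n
  /DATA=item 1-3 (A) qty 5-6.
END INPUT PROGRAM.
LIST.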
WRITE
WRITE
OUTFILE='FILE_NAME'
RECORDS=N_LINES
{NOTABLE,TABLE}
/[LINE_NO] ARG...
ARG takes one of the following forms:
'STRING' [START-END]
VAR_LIST START-END [TYPE_SPEC]
VAR_LIST (FORTRAN_SPEC)
VAR_LIST *
WRITE writes text or binary data to an output file. WRITE differs
from PRINT in only a few ways:
- WRITE uses write formats by default, whereas PRINT uses print
  formats.
- PRINT inserts a space between variables unless a format is
  explicitly specified, but WRITE never inserts space between
  variables in output.
- PRINT inserts a space at the beginning of each line that it writes
  to an output file (and PRINT EJECT inserts 1 at the beginning of
  each line that should begin a new page), but WRITE does not.
- PRINT outputs the system-missing value according to its specified
  output format, whereas WRITE outputs the system-missing value as a
  field filled with spaces. Binary formats are an exception.
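For example, assuming variables id and score exist in the active dataset, the following sketch writes them to fixed columns of a text file, with no leading space added to each line:
WRITE OUTFILE='scores.txt' /id 1-5 score 7-10.
EXECUTE.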
Working with SPSS Data Files
These commands read and write data files in SPSS and other proprietary or specialized data formats.
APPLY DICTIONARY
APPLY DICTIONARY FROM={'FILE_NAME',FILE_HANDLE}.
APPLY DICTIONARY applies the variable labels, value labels, and
missing values taken from a file to corresponding variables in the
active dataset. In some cases it also updates the weighting variable.
The FROM clause is mandatory. Use it to specify a system file or
portable file's name in single quotes, or a file handle
name. The dictionary in the
file is read, but it does not replace the active dataset's dictionary.
The file's data is not read.
Only variables with names that exist in both the active dataset and the system file are considered. Variables with the same name but different types (numeric, string) cause an error message. Otherwise, the system file variables' attributes replace those in their matching active dataset variables:
- If a system file variable has a variable label, then it replaces
  the variable label of the active dataset variable. If the system
  file variable does not have a variable label, then the active
  dataset variable's variable label, if any, is retained.
- If the system file variable has variable attributes, then those
  attributes replace the active dataset variable's variable
  attributes. If the system file variable does not have variable
  attributes, then the active dataset variable's variable attributes,
  if any, are retained.
- If the active dataset variable is numeric or short string, then
  value labels and missing values, if any, are copied to the active
  dataset variable. If the system file variable does not have value
  labels or missing values, then those in the active dataset
  variable, if any, are not disturbed.
In addition to properties of variables, some properties of the active file dictionary as a whole are updated:
- If the system file has custom attributes (see DATAFILE ATTRIBUTE),
  then those attributes replace the active dataset's custom
  attributes.
- If the active dataset has a weight variable, and the system file
  does not, or if the weighting variable in the system file does not
  exist in the active dataset, then the active dataset weighting
  variable, if any, is retained. Otherwise, the weighting variable
  in the system file becomes the active dataset weighting variable.
APPLY DICTIONARY takes effect immediately. It does not read the
active dataset. The system file is not modified.
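For example, the following (the file name master.sav is illustrative) copies variable labels, value labels, and missing values from that system file onto matching variables in the active dataset:
APPLY DICTIONARY FROM='master.sav'.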
EXPORT
EXPORT
/OUTFILE='FILE_NAME'
/UNSELECTED={RETAIN,DELETE}
/DIGITS=N
/DROP=VAR_LIST
/KEEP=VAR_LIST
/RENAME=(SRC_NAMES=TARGET_NAMES)...
/TYPE={COMM,TAPE}
/MAP
The EXPORT procedure writes the active dataset's dictionary and
data to a specified portable file.
EXPORT is obsolete and retained only for compatibility. New syntax should use SAVE instead.
UNSELECTED controls whether cases excluded with
FILTER are written to the file. These can
be excluded by specifying DELETE on the UNSELECTED subcommand.
The default is RETAIN.
Portable files express real numbers in base 30. Integers are
always expressed to the maximum precision needed to make them exact.
Non-integers are, by default, expressed to the machine's maximum
natural precision (approximately 15 decimal digits on many machines).
If many numbers require this many digits, the portable file may
significantly increase in size. As an alternative, the DIGITS
subcommand may be used to specify the number of decimal digits of
precision to write. DIGITS applies only to non-integers.
The OUTFILE subcommand, which is the only required subcommand,
specifies the portable file to be written as a file name string or a
file handle.
DROP, KEEP, and RENAME have the same syntax and meaning as for
the SAVE command.
The TYPE subcommand specifies the character set for use in the
portable file. Its value is currently not used.
The MAP subcommand is currently ignored.
EXPORT is a procedure. It causes the active dataset to be read.
GET
GET
/FILE={'FILE_NAME',FILE_HANDLE}
/DROP=VAR_LIST
/KEEP=VAR_LIST
/RENAME=(SRC_NAMES=TARGET_NAMES)...
/ENCODING='ENCODING'
GET clears the current dictionary and active dataset and replaces
them with the dictionary and data from a specified file.
The FILE subcommand is the only required subcommand. Specify the
SPSS system file, SPSS/PC+ system file, or SPSS portable file to be
read as a string file name or a file
handle.
By default, all the variables in a file are read. The DROP
subcommand can be used to specify a list of variables that are not to
be read. By contrast, the KEEP subcommand can be used to specify the
variables that are to be read, with all other variables not read.
Normally variables in a file retain the names that they were saved
under. Use the RENAME subcommand to change these names. Specify,
within parentheses, a list of variable names followed by an equals sign
(=) and the names that they should be renamed to. Multiple
parenthesized groups of variable names can be included on a single
RENAME subcommand. Variables' names may be swapped using a RENAME
subcommand of the form /RENAME=(A B=B A).
Alternate syntax for the RENAME subcommand allows the parentheses
to be omitted. When this is done, only a single variable may be
renamed at once. For instance, /RENAME=A=B. This alternate syntax
is discouraged.
DROP, KEEP, and RENAME are executed in left-to-right order.
Each may be present any number of times. GET never modifies a file on
disk. Only the active dataset read from the file is affected by these
subcommands.
PSPP automatically detects the encoding of string data in the file,
when possible. The character encoding of old SPSS system files cannot
always be guessed correctly, and SPSS/PC+ system files do not include
any indication of their encoding. Specify the ENCODING subcommand
with an IANA character set name as its string argument to override the
default. Use SYSFILE INFO to analyze the encodings that might be
valid for a system file. The ENCODING subcommand is a PSPP extension.
GET does not cause the data to be read, only the dictionary. The
data is read later, when a procedure is executed.
Use of GET to read a portable file is a PSPP extension.
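For example, this sketch (file and variable names are illustrative) reads a system file, omits two scratch variables, and renames another:
GET FILE='survey.sav'
  /DROP=tmp1 tmp2
  /RENAME=(q1=income).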
GET DATA
GET DATA
/TYPE={GNM,ODS,PSQL,TXT}
...additional subcommands depending on TYPE...
The GET DATA command is used to read files and other data sources
created by other applications. When this command is executed, the
current dictionary and active dataset are replaced with variables and
data read from the specified source.
The TYPE subcommand is mandatory and must be the first subcommand
specified. It determines the type of the file or source to read.
PSPP currently supports the following TYPEs:
- GNM: Spreadsheet files created by Gnumeric (http://gnumeric.org).
- ODS: Spreadsheet files in OpenDocument format
  (http://opendocumentformat.org).
- PSQL: Relations from PostgreSQL databases (http://postgresql.org).
- TXT: Textual data files in columnar and delimited formats.
Each supported file type has additional subcommands, explained in separate sections below.
Spreadsheet Files
GET DATA /TYPE={GNM, ODS}
/FILE={'FILE_NAME'}
/SHEET={NAME 'SHEET_NAME', INDEX N}
/CELLRANGE={RANGE 'RANGE', FULL}
/READNAMES={ON, OFF}
/ASSUMEDSTRWIDTH=N.
GET DATA can read Gnumeric spreadsheets (http://gnumeric.org), and
spreadsheets in OpenDocument format
(http://libreplanet.org/wiki/Group:OpenDocument/Software). Use the
TYPE subcommand to indicate the file's format. /TYPE=GNM
indicates Gnumeric files, /TYPE=ODS indicates OpenDocument. The
FILE subcommand is mandatory. Use it to specify the name of the file
to be read. All other subcommands are optional.
The format of each variable is determined by the format of the
spreadsheet cell containing the first datum for the variable. If this
cell is of string (text) format, then the width of the variable is
determined from the length of the string it contains, unless the
ASSUMEDSTRWIDTH subcommand is given.
The SHEET subcommand specifies the sheet within the spreadsheet
file to read. There are two forms of the SHEET subcommand. In the
first form, /SHEET=name SHEET_NAME, the string SHEET_NAME is the name
of the sheet to read. In the second form, /SHEET=index IDX, IDX is an
integer giving the index of the sheet to read. The first sheet has
the index 1. If the SHEET subcommand is omitted, then the command
reads the first sheet in the file.
The CELLRANGE subcommand specifies the range of cells within the
sheet to read. If the subcommand is given as /CELLRANGE=FULL, then
the entire sheet is read. To read only part of a sheet, use the form
/CELLRANGE=range 'TOP_LEFT_CELL:BOTTOM_RIGHT_CELL'. For example,
the subcommand /CELLRANGE=range 'C3:P19' reads columns C-P and rows
3-19, inclusive. Without the CELLRANGE subcommand, the entire sheet
is read.
If /READNAMES=ON is specified, then the contents of cells of the
first row are used as the names of the variables in which to store the
data from subsequent rows. This is the default. If /READNAMES=OFF is
used, then the variables receive automatically assigned names.
The ASSUMEDSTRWIDTH subcommand specifies the maximum width of
string variables read from the file. If omitted, the default value is
determined from the length of the string in the first spreadsheet cell
for each variable.
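Putting the subcommands together, the following sketch (the file, sheet, and cell range are illustrative) reads part of one sheet of an OpenDocument spreadsheet, taking variable names from the first row of the range:
GET DATA /TYPE=ODS
  /FILE='survey.ods'
  /SHEET=NAME 'Responses'
  /CELLRANGE=RANGE 'A1:D100'
  /READNAMES=ON
  /ASSUMEDSTRWIDTH=20.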
Postgres Database Queries
GET DATA /TYPE=PSQL
/CONNECT={CONNECTION INFO}
/SQL={QUERY}
[/ASSUMEDSTRWIDTH=W]
[/UNENCRYPTED]
[/BSIZE=N].
GET DATA /TYPE=PSQL imports data from a local or remote Postgres
database server. It automatically creates variables based on the table
column names or the names specified in the SQL query. PSPP cannot
support the full precision of some Postgres data types, so data of those
types will lose some precision when PSPP imports them. PSPP does not
support all Postgres data types. If PSPP cannot support a datum, GET DATA issues a warning and substitutes the system-missing value.
The CONNECT subcommand must be a string for the parameters of the
database server from which the data should be fetched. The format of
the string is given in the Postgres
manual.
The SQL subcommand must be a valid SQL statement to retrieve data
from the database.
The ASSUMEDSTRWIDTH subcommand specifies the maximum width of
string variables read from the database. If omitted, the default value
is determined from the length of the string in the first value read for
each variable.
The UNENCRYPTED subcommand allows data to be retrieved over an
insecure connection. If the connection is not encrypted, and the
UNENCRYPTED subcommand is not given, then an error occurs. Whether or
not the connection is encrypted depends upon the underlying psql library
and the capabilities of the database server.
The BSIZE subcommand serves only to optimise the speed of data
transfer. It specifies an upper limit on number of cases to fetch from
the database at once. The default value is 4096. If your SQL statement
fetches a large number of cases but only a small number of variables,
then the data transfer may be faster if you increase this value.
Conversely, if the number of variables is large, or if the machine on
which PSPP is running has only a small amount of memory, then a smaller
value is probably better.
Example
GET DATA /TYPE=PSQL
/CONNECT='host=example.com port=5432 dbname=product user=fred passwd=xxxx'
/SQL='select * from manufacturer'.
Textual Data Files
GET DATA /TYPE=TXT
/FILE={'FILE_NAME',FILE_HANDLE}
[ENCODING='ENCODING']
[/ARRANGEMENT={DELIMITED,FIXED}]
[/FIRSTCASE={FIRST_CASE}]
[/IMPORTCASES=...]
...additional subcommands depending on ARRANGEMENT...
When TYPE=TXT is specified, GET DATA reads data in a delimited
or fixed columnar format, much like DATA LIST.
The FILE subcommand must specify the file to be read as a string
file name or (for textual data only) a file
handle.
The ENCODING subcommand specifies the character encoding of the
file to be read. See INSERT, for information on
supported encodings.
The ARRANGEMENT subcommand determines the file's basic format.
DELIMITED, the default setting, specifies that fields in the input data
are separated by spaces, tabs, or other user-specified delimiters.
FIXED specifies that fields in the input data appear at particular fixed
column positions within records of a case.
By default, cases are read from the input file starting from the
first line. To skip lines at the beginning of an input file, set
FIRSTCASE to the number of the first line to read: 2 to skip the
first line, 3 to skip the first two lines, and so on.
IMPORTCASES is ignored, for compatibility. Use N OF CASES to limit the number of cases read from a file, or
SAMPLE to obtain a random sample of cases.
The remaining subcommands apply only to one of the two file arrangements, described below.
Delimited Data
GET DATA /TYPE=TXT
/FILE={'FILE_NAME',FILE_HANDLE}
[/ARRANGEMENT={DELIMITED,FIXED}]
[/FIRSTCASE={FIRST_CASE}]
[/IMPORTCASE={ALL,FIRST MAX_CASES,PERCENT PERCENT}]
/DELIMITERS="DELIMITERS"
[/QUALIFIER="QUOTES"]
[/DELCASE={LINE,VARIABLES N_VARIABLES}]
/VARIABLES=DEL_VAR1 [DEL_VAR2]...
where each DEL_VAR takes the form:
variable format
The GET DATA command with TYPE=TXT and ARRANGEMENT=DELIMITED
reads input data from text files in delimited format, where fields are
separated by a set of user-specified delimiters. Its capabilities are
similar to those of DATA LIST FREE,
with a few enhancements.
The required FILE subcommand and optional FIRSTCASE and
IMPORTCASE subcommands are described above.
DELIMITERS, which is required, specifies the set of characters that
may separate fields. Each character in the string specified on
DELIMITERS separates one field from the next. The end of a line also
separates fields, regardless of DELIMITERS. Two consecutive
delimiters in the input yield an empty field, as does a delimiter at the
end of a line. A space character as a delimiter is an exception:
consecutive spaces do not yield an empty field and neither does any
number of spaces at the end of a line.
To use a tab as a delimiter, specify \t at the beginning of the
DELIMITERS string. To use a backslash as a delimiter, specify \\ as
the first delimiter or, if a tab should also be a delimiter, immediately
following \t. To read a data file in which each field appears on a
separate line, specify the empty string for DELIMITERS.
The optional QUALIFIER subcommand names one or more characters that
can be used to quote values within fields in the input. A field that
begins with one of the specified quote characters ends at the next
matching quote. Intervening delimiters become part of the field,
instead of terminating it. The ability to specify more than one quote
character is a PSPP extension.
The character specified on QUALIFIER can be embedded within a field
that it quotes by doubling the qualifier. For example, if ' is
specified on QUALIFIER, then 'a''b' specifies a field that contains
a'b.
The DELCASE subcommand controls how data may be broken across
lines in the data file. With LINE, the default setting, each line
must contain all the data for exactly one case. For additional
flexibility, to allow a single case to be split among lines or
multiple cases to be contained on a single line, specify VARIABLES n_variables, where n_variables is the number of variables per case.
The VARIABLES subcommand is required and must be the last
subcommand. Specify the name of each variable and its input
format, in the order they
should be read from the input file.
Example 1
On a Unix-like system, the /etc/passwd file has a format similar to
this:
root:$1$nyeSP5gD$pDq/:0:0:,,,:/root:/bin/bash
blp:$1$BrP/pFg4$g7OG:1000:1000:Ben Pfaff,,,:/home/blp:/bin/bash
john:$1$JBuq/Fioq$g4A:1001:1001:John Darrington,,,:/home/john:/bin/bash
jhs:$1$D3li4hPL$88X1:1002:1002:Jason Stover,,,:/home/jhs:/bin/csh
The following syntax reads a file in the format used by /etc/passwd:
GET DATA /TYPE=TXT /FILE='/etc/passwd' /DELIMITERS=':'
/VARIABLES=username A20
password A40
uid F10
gid F10
gecos A40
home A40
shell A40.
Example 2
Consider the following data on used cars:
model year mileage price type age
Civic 2002 29883 15900 Si 2
Civic 2003 13415 15900 EX 1
Civic 1992 107000 3800 n/a 12
Accord 2002 26613 17900 EX 1
The following syntax can be used to read the used car data:
GET DATA /TYPE=TXT /FILE='cars.data' /DELIMITERS=' ' /FIRSTCASE=2
/VARIABLES=model A8
year F4
mileage F6
price F5
type A4
age F2.
Example 3
Consider the following information on animals in a pet store:
'Pet''s Name', "Age", "Color", "Date Received", "Price", "Height", "Type"
, (Years), , , (Dollars), ,
"Rover", 4.5, Brown, "12 Feb 2004", 80, '1''4"', "Dog"
"Charlie", , Gold, "5 Apr 2007", 12.3, "3""", "Fish"
"Molly", 2, Black, "12 Dec 2006", 25, '5"', "Cat"
"Gilly", , White, "10 Apr 2007", 10, "3""", "Guinea Pig"
The following syntax can be used to read the pet store data:
GET DATA /TYPE=TXT /FILE='pets.data' /DELIMITERS=', ' /QUALIFIER='''"'
/FIRSTCASE=3
/VARIABLES=name A10
age F3.1
color A5
received EDATE10
price F5.2
height a5
type a10.
Fixed Columnar Data
GET DATA /TYPE=TXT
/FILE={'file_name',FILE_HANDLE}
[/ARRANGEMENT={DELIMITED,FIXED}]
[/FIRSTCASE={FIRST_CASE}]
[/IMPORTCASE={ALL,FIRST MAX_CASES,PERCENT PERCENT}]
[/FIXCASE=N]
/VARIABLES FIXED_VAR [FIXED_VAR]...
[/rec# FIXED_VAR [FIXED_VAR]...]...
where each FIXED_VAR takes the form:
VARIABLE START-END FORMAT
The GET DATA command with TYPE=TXT and ARRANGEMENT=FIXED
reads input data from text files in fixed format, where each field is
located in particular fixed column positions within records of a case.
Its capabilities are similar to those of DATA LIST FIXED, with a few enhancements.
The required FILE subcommand and optional FIRSTCASE and
IMPORTCASE subcommands are described above.
The optional FIXCASE subcommand may be used to specify the positive
integer number of input lines that make up each case. The default value
is 1.
The VARIABLES subcommand, which is required, specifies the
positions at which each variable can be found. For each variable,
specify its name, followed by its start and end column separated by -
(e.g. 0-9), followed by an input format type (e.g. F) or a full
format specification (e.g. DOLLAR12.2). For this command, columns are
numbered starting from 0 at the left column. Introduce the variables in
the second and later lines of a case by a slash followed by the number
of the line within the case, e.g. /2 for the second line.
Example
Consider the following data on used cars:
model year mileage price type age
Civic 2002 29883 15900 Si 2
Civic 2003 13415 15900 EX 1
Civic 1992 107000 3800 n/a 12
Accord 2002 26613 17900 EX 1
The following syntax can be used to read the used car data:
GET DATA /TYPE=TXT /FILE='cars.data' /ARRANGEMENT=FIXED /FIRSTCASE=2
/VARIABLES=model 0-7 A
year 8-15 F
mileage 16-23 F
price 24-31 F
type 32-40 A
age 40-47 F.
IMPORT
IMPORT
/FILE='FILE_NAME'
/TYPE={COMM,TAPE}
/DROP=VAR_LIST
/KEEP=VAR_LIST
/RENAME=(SRC_NAMES=TARGET_NAMES)...
The IMPORT transformation clears the active dataset dictionary and
data and replaces them with a dictionary and data from a system file or
portable file.
IMPORT is obsolete and retained only for compatibility with existing portable files. New syntax should use SAVE to write system files instead, and GET to read them.
The FILE subcommand, which is the only required subcommand,
specifies the portable file to be read as a file name string or a
file handle.
The TYPE subcommand is currently not used.
DROP, KEEP, and RENAME follow the syntax used by
GET.
IMPORT does not cause the data to be read; only the dictionary.
The data is read later, when a procedure is executed.
Use of IMPORT to read a system file is a PSPP extension.
SAVE
SAVE
/OUTFILE={'FILE_NAME',FILE_HANDLE}
/UNSELECTED={RETAIN,DELETE}
/{UNCOMPRESSED,COMPRESSED,ZCOMPRESSED}
/PERMISSIONS={WRITEABLE,READONLY}
/DROP=VAR_LIST
/KEEP=VAR_LIST
/VERSION=VERSION
/RENAME=(SRC_NAMES=TARGET_NAMES)...
/NAMES
/MAP
The SAVE procedure causes the dictionary and data in the active
dataset to be written to a system file.
OUTFILE is the only required subcommand. Specify the system file
to be written as a string file name or a file
handle.
By default, cases excluded with FILTER are written to the system
file. These can be excluded by specifying DELETE on the UNSELECTED
subcommand. Specifying RETAIN makes the default explicit.
The UNCOMPRESSED, COMPRESSED, and ZCOMPRESSED subcommands
determine the system file's compression level:
- UNCOMPRESSED: Data is not compressed. Each numeric value uses 8
  bytes of disk space. Each string value uses one byte per column
  width, rounded up to a multiple of 8 bytes.
- COMPRESSED: Data is compressed in a simple way. Each integer
  numeric value between −99 and 151, inclusive, or system-missing
  value uses one byte of disk space. Each 8-byte segment of a string
  that consists only of spaces uses 1 byte. Any other numeric value
  or 8-byte string segment uses 9 bytes of disk space.
- ZCOMPRESSED: Data is compressed with the "deflate" compression
  algorithm specified in RFC 1951 (the same algorithm used by gzip).
  Files written with this compression level cannot be read by PSPP
  0.8.1 or earlier or by SPSS 20 or earlier.
COMPRESSED is the default compression level. The SET
command can change this default.
The PERMISSIONS subcommand specifies operating system permissions
for the new system file. WRITEABLE, the default, creates the file
with read and write permission. READONLY creates the file for
read-only access.
By default, all the variables in the active dataset dictionary are
written to the system file. The DROP subcommand can be used to
specify a list of variables not to be written. In contrast, KEEP
specifies variables to be written, with all variables not specified
not written.
Normally variables are saved to a system file under the same names
they have in the active dataset. Use the RENAME subcommand to change
these names. Specify, within parentheses, a list of variable names
followed by an equals sign (=) and the names that they should be
renamed to. Multiple parenthesized groups of variable names can be
included on a single RENAME subcommand. Variables' names may be
swapped using a RENAME subcommand of the form /RENAME=(A B=B A).
Alternate syntax for the RENAME subcommand allows the parentheses to
be eliminated. When this is done, only a single variable may be
renamed at once. For instance, /RENAME=A=B. This alternate syntax
is discouraged.
DROP, KEEP, and RENAME are performed in left-to-right order.
They each may be present any number of times. SAVE never modifies
the active dataset. DROP, KEEP, and RENAME only affect the
system file written to disk.
The VERSION subcommand specifies the version of the file format.
Valid versions are 2 and 3. The default version is 3. In version 2
system files, variable names longer than 8 bytes are truncated. The
two versions are otherwise identical.
The NAMES and MAP subcommands are currently ignored.
SAVE causes the data to be read. It is a procedure.
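For example, the following sketch (file and variable names are illustrative) writes the active dataset to a zlib-compressed system file, omitting two scratch variables:
SAVE OUTFILE='results.sav'
  /ZCOMPRESSED
  /DROP=tmp1 tmp2.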
SAVE DATA COLLECTION
SAVE DATA COLLECTION
/OUTFILE={'FILE_NAME',FILE_HANDLE}
/METADATA={'FILE_NAME',FILE_HANDLE}
/{UNCOMPRESSED,COMPRESSED,ZCOMPRESSED}
/PERMISSIONS={WRITEABLE,READONLY}
/DROP=VAR_LIST
/KEEP=VAR_LIST
/VERSION=VERSION
/RENAME=(SRC_NAMES=TARGET_NAMES)...
/NAMES
/MAP
Like SAVE, SAVE DATA COLLECTION writes the dictionary and data in
the active dataset to a system file. In addition, it writes metadata to
an additional XML metadata file.
OUTFILE is required. Specify the system file to be written as a
string file name or a file
handle.
METADATA is also required. Specify the metadata file to be written
as a string file name or a file handle. Metadata files customarily use
a .mdd extension.
The current implementation of this command is experimental. It only outputs an approximation of the metadata file format. Please report bugs.
Other subcommands are optional. They have the same meanings as in
the SAVE command.
SAVE DATA COLLECTION causes the data to be read. It is a procedure.
SAVE TRANSLATE
SAVE TRANSLATE
/OUTFILE={'FILE_NAME',FILE_HANDLE}
/TYPE={CSV,TAB}
[/REPLACE]
[/MISSING={IGNORE,RECODE}]
[/DROP=VAR_LIST]
[/KEEP=VAR_LIST]
[/RENAME=(SRC_NAMES=TARGET_NAMES)...]
[/UNSELECTED={RETAIN,DELETE}]
[/MAP]
...additional subcommands depending on TYPE...
The SAVE TRANSLATE command is used to save data into various
formats understood by other applications.
The OUTFILE and TYPE subcommands are mandatory. OUTFILE
specifies the file to be written, as a string file name or a file
handle. TYPE determines the type of the file to be written. It must
be one of the following:
-
CSV
Comma-separated value format.
-
TAB
Tab-delimited format.
By default, SAVE TRANSLATE does not overwrite an existing file.
Use REPLACE to force an existing file to be overwritten.
With MISSING=IGNORE, the default, SAVE TRANSLATE treats
user-missing values as if they were not missing. Specify
MISSING=RECODE to output numeric user-missing values like
system-missing values and string user-missing values as all spaces.
By default, all the variables in the active dataset dictionary are
saved to the system file, but DROP or KEEP can select a subset of
variables to save. The RENAME subcommand can also be used to change
the names under which variables are saved; because they are used only
in the output, these names do not have to conform to the usual PSPP
variable naming rules. UNSELECTED determines whether cases filtered
out by the FILTER command are written to the output file. These
subcommands have the same syntax and meaning as on the
SAVE command.
Each supported file type has additional subcommands, explained in separate sections below.
SAVE TRANSLATE causes the data to be read. It is a procedure.
Comma- and Tab-Separated Data Files
SAVE TRANSLATE
/OUTFILE={'FILE_NAME',FILE_HANDLE}
/TYPE=CSV
[/REPLACE]
[/MISSING={IGNORE,RECODE}]
[/DROP=VAR_LIST]
[/KEEP=VAR_LIST]
[/RENAME=(SRC_NAMES=TARGET_NAMES)...]
[/UNSELECTED={RETAIN,DELETE}]
[/FIELDNAMES]
[/CELLS={VALUES,LABELS}]
[/TEXTOPTIONS DELIMITER='DELIMITER']
[/TEXTOPTIONS QUALIFIER='QUALIFIER']
[/TEXTOPTIONS DECIMAL={DOT,COMMA}]
[/TEXTOPTIONS FORMAT={PLAIN,VARIABLE}]
The SAVE TRANSLATE command with TYPE=CSV or TYPE=TAB writes data in a
comma- or tab-separated value format similar to that described by
RFC 4180. Each variable becomes one output column, and each case
becomes one line of output. If FIELDNAMES is specified, an additional
line at the top of the output file lists variable names.
The CELLS and TEXTOPTIONS FORMAT settings determine how values are
written to the output file:
-
CELLS=VALUES FORMAT=PLAIN (the default settings)
Writes variables to the output in "plain" formats that ignore the details of variable formats. Numeric values are written as plain decimal numbers with enough digits to indicate their exact values in machine representation. Numeric values include e followed by an exponent if the exponent value would be less than -4 or greater than 16. Dates are written in MM/DD/YYYY format and times in HH:MM:SS format. WKDAY and MONTH values are written as decimal numbers.
Numeric values use, by default, the decimal point character set with SET DECIMAL. Use DECIMAL=DOT or DECIMAL=COMMA to force a particular decimal point character.
-
CELLS=VALUES FORMAT=VARIABLE
Writes variables using their print formats. Leading and trailing spaces are removed from numeric values, and trailing spaces are removed from string values.
-
CELLS=LABEL FORMAT=PLAIN
CELLS=LABEL FORMAT=VARIABLE
Writes value labels where they exist, and otherwise writes the values themselves as described above.
Regardless of CELLS and TEXTOPTIONS FORMAT, numeric system-missing values are output as a single space.
For TYPE=TAB, tab characters delimit values. For TYPE=CSV, the TEXTOPTIONS DELIMITER and DECIMAL settings determine the character that separates values within a line. If DELIMITER is specified, then the specified string separates values. If DELIMITER is not specified, then the default is a comma with DECIMAL=DOT or a semicolon with DECIMAL=COMMA. If DECIMAL is not given either, it is inferred from the decimal point character set with SET DECIMAL.
The TEXTOPTIONS QUALIFIER setting specifies a character that is output before and after a value that contains the delimiter character or the qualifier character. The default is a double quote ("). A qualifier character that appears within a value is doubled.
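As an illustration, a command along these lines (the file and settings are hypothetical) writes a semicolon-separated file with a header row of variable names:
SAVE TRANSLATE /OUTFILE='data.csv' /TYPE=CSV /REPLACE
     /FIELDNAMES
     /CELLS=VALUES
     /TEXTOPTIONS DECIMAL=COMMA.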
SYSFILE INFO
SYSFILE INFO FILE='FILE_NAME' [ENCODING='ENCODING'].
SYSFILE INFO reads the dictionary in an SPSS system file, SPSS/PC+
system file, or SPSS portable file, and displays the information in
its dictionary.
Specify a file name or file handle. SYSFILE INFO reads that file
and displays information on its dictionary.
PSPP automatically detects the encoding of string data in the file,
when possible. The character encoding of old SPSS system files cannot
always be guessed correctly, and SPSS/PC+ system files do not include
any indication of their encoding. Specify the ENCODING subcommand
with an IANA character set name as its string argument to override the
default, or specify ENCODING='DETECT' to analyze and report possibly
valid encodings for the system file. The ENCODING subcommand is a
PSPP extension.
SYSFILE INFO does not affect the current active dataset.
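For instance, assuming a system file named survey.sav exists, either of the following displays its dictionary; the second also overrides the detected encoding:
SYSFILE INFO FILE='survey.sav'.
SYSFILE INFO FILE='survey.sav' ENCODING='ISO-8859-1'.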
XEXPORT
XEXPORT
/OUTFILE='FILE_NAME'
/DIGITS=N
/DROP=VAR_LIST
/KEEP=VAR_LIST
/RENAME=(SRC_NAMES=TARGET_NAMES)...
/TYPE={COMM,TAPE}
/MAP
The XEXPORT transformation writes the active dataset dictionary and
data to a specified portable file.
This transformation is a PSPP extension.
It is similar to the EXPORT procedure, with two differences:
-
XEXPORT is a transformation, not a procedure. It is executed when the data is read by a procedure or procedure-like command.
-
XEXPORT does not support the UNSELECTED subcommand.
See EXPORT for more information.
XSAVE
XSAVE
/OUTFILE='FILE_NAME'
/{UNCOMPRESSED,COMPRESSED,ZCOMPRESSED}
/PERMISSIONS={WRITEABLE,READONLY}
/DROP=VAR_LIST
/KEEP=VAR_LIST
/VERSION=VERSION
/RENAME=(SRC_NAMES=TARGET_NAMES)...
/NAMES
/MAP
The XSAVE transformation writes the active dataset's dictionary and
data to a system file. It is similar to the SAVE procedure, with
two differences:
-
XSAVE is a transformation, not a procedure. It is executed when the data is read by a procedure or procedure-like command.
-
XSAVE does not support the UNSELECTED subcommand.
See SAVE for more information.
Combining Data Files
This chapter describes commands that allow data from system files, portable files, and open datasets to be combined to form a new active dataset. These commands can combine data files in the following ways:
-
ADD FILES interleaves or appends the cases from each input file. It is used with input files that have variables in common, but distinct sets of cases.
-
MATCH FILES adds the data together in cases that match across multiple input files. It is used with input files that have cases in common, but different information about each case.
-
UPDATE updates a master data file from data in a set of transaction files. Each case in a transaction data file modifies a matching case in the master data file, or it adds a new case if no matching case can be found.
These commands share the majority of their syntax, described below. Each command's documentation explains its additional syntax.
Common Syntax
Per input file:
/FILE={*,'FILE_NAME'}
[/RENAME=(SRC_NAMES=TARGET_NAMES)...]
[/IN=VAR_NAME]
[/SORT]
Once per command:
/BY VAR_LIST[({D|A})] [VAR_LIST[({D|A})]]...
[/DROP=VAR_LIST]
[/KEEP=VAR_LIST]
[/FIRST=VAR_NAME]
[/LAST=VAR_NAME]
[/MAP]
Each of these commands reads two or more input files and combines them. The command's output becomes the new active dataset. None of the commands actually change the input files. Therefore, if you want the changes to become permanent, you must explicitly save them using an appropriate procedure or transformation.
The syntax of each command begins with a specification of the files to
be read as input. For each input file, specify FILE with a system
file or portable file's name as a string, a
dataset or file
handle name, or an asterisk (*)
to use the active dataset as input. Use of portable files on FILE
is a PSPP extension.
At least two FILE subcommands must be specified. If the active
dataset is used as an input source, then TEMPORARY must not be in
effect.
Each FILE subcommand may be followed by any number of RENAME
subcommands that specify a parenthesized group or groups of variable
names as they appear in the input file, followed by those variables'
new names, separated by an equals sign (=), e.g.
/RENAME=(OLD1=NEW1)(OLD2=NEW2). To rename a single variable, the
parentheses may be omitted: /RENAME=OLD=NEW. Within a parenthesized
group, variables are renamed simultaneously, so that /RENAME=(A B=B A) exchanges the names of variables A and B. Otherwise, renaming
occurs in left-to-right order.
Each FILE subcommand may optionally be followed by a single IN
subcommand, which creates a numeric variable with the specified name
and format F1.0. The IN variable takes value 1 in an output case
if the given input file contributed to that output case, and 0
otherwise. The DROP, KEEP, and RENAME subcommands have no
effect on IN variables.
If BY is used (see below), the SORT keyword must be specified
after a FILE if that input file is not already sorted on the BY
variables. When SORT is specified, PSPP sorts the input file's data
on the BY variables before it applies it to the command. When
SORT is used, BY is required. SORT is a PSPP extension.
PSPP merges the dictionaries of all of the input files to form the dictionary of the new active dataset, like so:
-
The variables in the new active dataset are the union of all the input files' variables, matched based on their name. When a single input file contains a variable with a given name, the output file will contain exactly that variable. When more than one input file contains a variable with a given name, those variables must all have the same type (numeric or string) and, for string variables, the same width. Variables are matched after renaming with the RENAME subcommand. Thus, RENAME can be used to resolve conflicts.
-
The variable label for each output variable is taken from the first specified input file that has a variable label for that variable, and similarly for value labels and missing values.
-
The file label of the new active dataset is that of the first specified FILE that has a file label.
-
The documents in the new active dataset are the concatenation of all the input files' documents, in the order in which the FILE subcommands are specified.
-
If all of the input files are weighted on the same variable, then the new active dataset is weighted on that variable. Otherwise, the new active dataset is not weighted.
The remaining subcommands apply to the output file as a whole, rather
than to individual input files. They must be specified at the end of
the command specification, following all of the FILE and related
subcommands. The most important of these subcommands is BY, which
specifies a set of one or more variables that may be used to find
corresponding cases in each of the input files. The variables
specified on BY must be present in all of the input files.
Furthermore, if any of the input files are not sorted on the BY
variables, then SORT must be specified for those input files.
The variables listed on BY may include (A) or (D) annotations to
specify ascending or descending sort order. See SORT CASES, for more details on this notation. Adding
(A) or (D) to the BY subcommand specification is a PSPP
extension.
The DROP subcommand can be used to specify a list of variables to
exclude from the output. By contrast, the KEEP subcommand can be
used to specify variables to include in the output; all variables not
listed are dropped. DROP and KEEP are executed in left-to-right
order and may be repeated any number of times. DROP and KEEP do
not affect variables created by the IN, FIRST, and LAST
subcommands, which are always included in the new active dataset, but
they can be used to drop BY variables.
The FIRST and LAST subcommands are optional. They may only be
specified on MATCH FILES and ADD FILES, and only when BY is
used. FIRST and LAST each add a numeric variable to the new
active dataset, with the name given as the subcommand's argument and
F1.0 print and write formats. The value of the FIRST variable is
1 in the first output case with a given set of values for the BY
variables, and 0 in other cases. Similarly, the LAST variable is 1
in the last case with a given set of BY values, and 0 in other cases.
When any of these commands creates an output case, variables that are only in files that are not present for the current case are set to the system-missing value for numeric variables or spaces for string variables.
These commands may combine any number of files, limited only by the machine's memory.
ADD FILES
ADD FILES
Per input file:
/FILE={*,'FILE_NAME'}
[/RENAME=(SRC_NAMES=TARGET_NAMES)...]
[/IN=VAR_NAME]
[/SORT]
Once per command:
[/BY VAR_LIST[({D|A})] [VAR_LIST[({D|A})]...]]
[/DROP=VAR_LIST]
[/KEEP=VAR_LIST]
[/FIRST=VAR_NAME]
[/LAST=VAR_NAME]
[/MAP]
ADD FILES adds cases from multiple input files. The output, which
replaces the active dataset, consists of all the cases in all of the
input files.
ADD FILES shares the bulk of its syntax with other PSPP commands for
combining multiple data files (see Common
Syntax for details).
When BY is not used, the output of ADD FILES consists of all the
cases from the first input file specified, followed by all the cases
from the second file specified, and so on. When BY is used, the
output is additionally sorted on the BY variables.
When ADD FILES creates an output case, variables that are not part
of the input file from which the case was drawn are set to the
system-missing value for numeric variables or spaces for string
variables.
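For example, assuming two system files with a common variable layout (the file and variable names here are hypothetical), the following interleaves their cases in order of ID:
ADD FILES /FILE='wave1.sav' /SORT
          /FILE='wave2.sav' /SORT
          /BY ID.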
MATCH FILES
MATCH FILES
Per input file:
/{FILE,TABLE}={*,'FILE_NAME'}
[/RENAME=(SRC_NAMES=TARGET_NAMES)...]
[/IN=VAR_NAME]
[/SORT]
Once per command:
/BY VAR_LIST[({D|A})] [VAR_LIST[({D|A})]]...
[/DROP=VAR_LIST]
[/KEEP=VAR_LIST]
[/FIRST=VAR_NAME]
[/LAST=VAR_NAME]
[/MAP]
MATCH FILES merges sets of corresponding cases in multiple input
files into single cases in the output, combining their data.
MATCH FILES shares the bulk of its syntax with other PSPP commands
for combining multiple data files (see Common
Syntax for details).
How MATCH FILES matches up cases from the input files depends on
whether BY is specified:
-
If BY is not used, MATCH FILES combines the first case from each input file to produce the first output case, then the second case from each input file for the second output case, and so on. If some input files have fewer cases than others, then the shorter files do not contribute to cases output after their input has been exhausted.
-
If BY is used, MATCH FILES combines cases from each input file that have identical values for the BY variables.
When BY is used, TABLE subcommands may be used to introduce "table lookup files". TABLE has the same syntax as FILE, and the RENAME, IN, and SORT subcommands may follow a TABLE in the same way as FILE. Regardless of the number of TABLEs, at least one FILE must be specified. Table lookup files are treated in the same way as other input files for most purposes and, in particular, table lookup files must be sorted on the BY variables or the SORT subcommand must be specified for that TABLE.
Cases in table lookup files are not consumed after they have been used once. This means that data in table lookup files can correspond to any number of cases in FILE input files. Table lookup files are analogous to lookup tables in traditional relational database systems.
If a table lookup file contains more than one case with a given set of BY variables, only the first case is used.
When MATCH FILES creates an output case, variables that are only in
files that are not present for the current case are set to the
system-missing value for numeric variables or spaces for string
variables.
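As a sketch, the following hypothetical command joins per-person cases with a region lookup table, keyed on a shared REGION variable:
MATCH FILES /FILE='people.sav' /SORT
            /TABLE='regions.sav' /SORT
            /BY REGION.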
UPDATE
UPDATE
Per input file:
/FILE={*,'FILE_NAME'}
[/RENAME=(SRC_NAMES=TARGET_NAMES)...]
[/IN=VAR_NAME]
[/SORT]
Once per command:
/BY VAR_LIST[({D|A})] [VAR_LIST[({D|A})]]...
[/DROP=VAR_LIST]
[/KEEP=VAR_LIST]
[/MAP]
UPDATE updates a "master file" by applying modifications from one
or more "transaction files".
UPDATE shares the bulk of its syntax with other PSPP commands for
combining multiple data files (see Common
Syntax for details).
At least two FILE subcommands must be specified. The first FILE
subcommand names the master file, and the rest name transaction files.
Every input file must either be sorted on the variables named on the
BY subcommand, or the SORT subcommand must be used just after the
FILE subcommand for that input file.
UPDATE uses the variables specified on the BY subcommand, which
is required, to attempt to match each case in a transaction file with a
case in the master file:
-
When a match is found, then the values of the variables present in the transaction file replace those variables' values in the new active file. If there are matching cases in more than one transaction file, PSPP applies the replacements from the first transaction file, then from the second transaction file, and so on. Similarly, if a single transaction file has cases with duplicate BY values, then those are applied in order to the master file.
When a variable in a transaction file has a missing value or when a string variable's value is all blanks, that value is never used to update the master file.
-
If a case in the master file has no matching case in any transaction file, then it is copied unchanged to the output.
-
If a case in a transaction file has no matching case in the master file, then it causes a new case to be added to the output, initialized from the values in the transaction file.
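For example, with hypothetical file names, the following applies corrections from two transaction files to a master file, matching cases on ID:
UPDATE /FILE='master.sav' /SORT
       /FILE='fixes1.sav' /SORT
       /FILE='fixes2.sav' /SORT
       /BY ID.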
Manipulating Variables
Every value in a dataset is associated with a variable. Variables describe what the values represent and properties of those values, such as the format in which they should be displayed, whether they are numeric or alphabetic, and how missing values should be represented. There are several utility commands for examining and adjusting variables.
ADD VALUE LABELS
ADD VALUE LABELS has the same syntax and purpose as VALUE LABELS, but it does not clear value labels from the
variables before adding the ones specified.
ADD VALUE LABELS
/VAR_LIST VALUE 'LABEL' [VALUE 'LABEL']...
DELETE VARIABLES
DELETE VARIABLES deletes the specified variables from the dictionary.
DELETE VARIABLES VAR_LIST.
DELETE VARIABLES should not be used after defining transformations
but before executing a procedure. If it is used in such a context, it
causes the data to be read. If it is used while TEMPORARY is in
effect, it causes the temporary transformations to become permanent.
DELETE VARIABLES may not be used to delete all variables from the
dictionary; use NEW FILE instead.
DISPLAY
The DISPLAY command displays information about the variables in the
active dataset. A variety of different forms of information can be
requested. By default, all variables in the active dataset are
displayed. However, you can select variables of interest using the
/VARIABLES subcommand.
DISPLAY [SORTED] NAMES [[/VARIABLES=]VAR_LIST].
DISPLAY [SORTED] INDEX [[/VARIABLES=]VAR_LIST].
DISPLAY [SORTED] LABELS [[/VARIABLES=]VAR_LIST].
DISPLAY [SORTED] VARIABLES [[/VARIABLES=]VAR_LIST].
DISPLAY [SORTED] DICTIONARY [[/VARIABLES=]VAR_LIST].
DISPLAY [SORTED] SCRATCH [[/VARIABLES=]VAR_LIST].
DISPLAY [SORTED] ATTRIBUTES [[/VARIABLES=]VAR_LIST].
DISPLAY [SORTED] @ATTRIBUTES [[/VARIABLES=]VAR_LIST].
DISPLAY [SORTED] VECTORS.
The following keywords primarily cause information about variables to
be displayed. With these keywords, by default information is
displayed about all variables in the active dataset, in the order that
variables occur in the active dataset dictionary. The SORTED
keyword causes output to be sorted alphabetically by variable name.
-
NAMES
The variables' names are displayed.
-
INDEX
The variables' names are displayed along with a value describing their position within the active dataset dictionary.
-
LABELS
Variable names, positions, and variable labels are displayed.
-
VARIABLES
Variable names, positions, print and write formats, and missing values are displayed.
-
DICTIONARY
Variable names, positions, print and write formats, missing values, variable labels, and value labels are displayed.
-
SCRATCH
Variable names are displayed, for scratch variables only.
-
ATTRIBUTES
Datafile and variable attributes are displayed, except attributes whose names begin with @ or $@.
-
@ATTRIBUTES
All datafile and variable attributes are displayed, even those whose names begin with @ or $@.
With the VECTORS keyword, DISPLAY lists all the currently declared
vectors. If the SORTED keyword is given, the vectors are listed in
alphabetical order; otherwise, they are listed in textual order of
definition within the PSPP syntax file.
For related commands, see DISPLAY DOCUMENTS
and DISPLAY FILE LABEL.
FORMATS
FORMATS VAR_LIST (FMT_SPEC) [VAR_LIST (FMT_SPEC)]....
FORMATS sets both print and write formats for the specified variables
to the specified output format.
Specify a list of variables followed by a format specification in parentheses. The print and write formats of the specified variables will be changed. All of the variables listed together must have the same type and, for string variables, the same width.
Additional lists of variables and formats may be included following the first one.
FORMATS takes effect immediately. It is not affected by conditional
and looping structures such as DO IF or LOOP.
LEAVE
LEAVE prevents the specified variables from being reinitialized
whenever a new case is processed.
LEAVE VAR_LIST.
Normally, when a data file is processed, every variable in the active
dataset is initialized to the system-missing value or spaces at the
beginning of processing for each case. When a variable has been
specified on LEAVE, this is not the case. Instead, that variable is
initialized to 0 (not system-missing) or spaces for the first case.
After that, it retains its value between cases.
This is useful for counters. For instance, in the example below
the variable SUM maintains a running total of the values in the
ITEM variable.
DATA LIST /ITEM 1-3.
COMPUTE SUM=SUM+ITEM.
PRINT /ITEM SUM.
LEAVE SUM.
BEGIN DATA.
123
404
555
999
END DATA.
Partial output from this example:
123 123.00
404 527.00
555 1082.00
999 2081.00
It is best to use the LEAVE command immediately before invoking a
procedure command, because the left status of variables is reset by
certain transformations, for instance COMPUTE and IF. Left status
is also reset by all procedure invocations.
MISSING VALUES
In many situations, the data available for analysis is incomplete, so that a placeholder must be used to indicate that the value is unknown. One way that missing values are represented, for numeric data, is the "system-missing value". Another, more flexible way is through "user-missing values", which are determined on a per-variable basis.
The MISSING VALUES command sets user-missing values for variables.
MISSING VALUES VAR_LIST (MISSING_VALUES).
where MISSING_VALUES takes one of the following forms:
NUM1
NUM1, NUM2
NUM1, NUM2, NUM3
NUM1 THRU NUM2
NUM1 THRU NUM2, NUM3
STRING1
STRING1, STRING2
STRING1, STRING2, STRING3
As part of a range, LO or LOWEST may take the place of NUM1;
HI or HIGHEST may take the place of NUM2.
MISSING VALUES sets user-missing values for numeric and string
variables. Long string variables may have missing values, but
characters after the first 8 bytes of the missing value must be
spaces.
Specify a list of variables, followed by a list of their user-missing
values in parentheses. Up to three discrete values may be given, or,
for numeric variables only, a range of values optionally accompanied
by a single discrete value. Ranges may be open-ended on one end,
indicated through the use of the keyword LO or LOWEST or HI or
HIGHEST.
The MISSING VALUES command takes effect immediately. It is not
affected by conditional and looping constructs such as DO IF or
LOOP.
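For example (with hypothetical variable names), the following marks a single discrete value as user-missing for one variable, and an open-ended range plus a discrete value for another:
MISSING VALUES OPINION (9).
MISSING VALUES AGE (LOWEST THRU 0, 999).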
MRSETS
MRSETS creates, modifies, deletes, and displays multiple response
sets. A multiple response set is a set of variables that represent
multiple responses to a survey question.
Multiple responses are represented in one of the two following ways:
-
A "multiple dichotomy set" is analogous to a survey question with a set of checkboxes. Each variable in the set is treated in a Boolean fashion: one value (the "counted value") means that the box was checked, and any other value means that it was not.
-
A "multiple category set" represents a survey question where the respondent is instructed to list up to N choices. Each variable represents one of the responses.
MRSETS
/MDGROUP NAME=NAME VARIABLES=VAR_LIST VALUE=VALUE
[CATEGORYLABELS={VARLABELS,COUNTEDVALUES}]
[{LABEL='LABEL',LABELSOURCE=VARLABEL}]
/MCGROUP NAME=NAME VARIABLES=VAR_LIST [LABEL='LABEL']
/DELETE NAME={[NAMES],ALL}
/DISPLAY NAME={[NAMES],ALL}
Any number of subcommands may be specified in any order.
The MDGROUP subcommand creates a new multiple dichotomy set or
replaces an existing multiple response set. The NAME, VARIABLES,
and VALUE specifications are required. The others are optional:
-
NAME specifies the name used in syntax for the new multiple dichotomy set. The name must begin with $; it must otherwise follow the rules for identifiers.
-
VARIABLES specifies the variables that belong to the set. At least two variables must be specified. The variables must be all string or all numeric.
-
VALUE specifies the counted value. If the variables are numeric, the value must be an integer. If the variables are strings, then the value must be a string that is no longer than the shortest of the variables in the set (ignoring trailing spaces).
-
CATEGORYLABELS optionally specifies the source of the labels for each category in the set:
-
VARLABELS, the default, uses variable labels or, for variables without variable labels, variable names. PSPP warns if two variables have the same variable label, since these categories cannot be distinguished in output.
-
COUNTEDVALUES instead uses each variable's value label for the counted value. PSPP warns if two variables have the same value label for the counted value or if one of the variables lacks a value label, since such categories cannot be distinguished in output.
-
LABEL optionally specifies a label for the multiple response set. If neither LABEL nor LABELSOURCE=VARLABEL is specified, the set is unlabeled.
-
LABELSOURCE=VARLABEL draws the multiple response set's label from the first variable label among the variables in the set; if none of the variables has a label, the name of the first variable is used. LABELSOURCE=VARLABEL must be used with CATEGORYLABELS=COUNTEDVALUES. It is mutually exclusive with LABEL.
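Putting the pieces together, a hypothetical multiple dichotomy set over three yes/no variables, counting the value 1, might be defined as:
MRSETS /MDGROUP NAME=$FRUITS VARIABLES=APPLE ORANGE PEAR VALUE=1
       LABEL='Fruits eaten this week'.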
The MCGROUP subcommand creates a new multiple category set or
replaces an existing multiple response set. The NAME and
VARIABLES specifications are required, and LABEL is optional.
Their meanings are as described above in MDGROUP. PSPP warns if two
variables in the set have different value labels for a single value,
since each of the variables in the set should have the same possible
categories.
The DELETE subcommand deletes multiple response groups. A list of
groups may be named within a set of required square brackets, or ALL
may be used to delete all groups.
The DISPLAY subcommand displays information about defined multiple
response sets. Its syntax is the same as the DELETE subcommand.
Multiple response sets are saved to and read from system files by,
e.g., the SAVE and GET commands. Otherwise, multiple response sets
are currently used only by third-party software.
NUMERIC
NUMERIC explicitly declares new numeric variables, optionally setting
their output formats.
NUMERIC VAR_LIST [(FMT_SPEC)] [/VAR_LIST [(FMT_SPEC)]]...
Specify the names of the new numeric variables as VAR_LIST. If
you wish to set the variables' output formats, follow their names by
an output format in parentheses; otherwise, the default is F8.2.
Variables created with NUMERIC are initialized to the
system-missing value.
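For instance, the following (with illustrative variable names) declares one variable with the default F8.2 format and two more with an explicit format:
NUMERIC TOTAL /X Y (F4.0).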
PRINT FORMATS
PRINT FORMATS VAR_LIST (FMT_SPEC) [VAR_LIST (FMT_SPEC)]....
PRINT FORMATS sets the print formats for the specified variables to
the specified format specification.
It has the same syntax as FORMATS, but PRINT FORMATS
sets only print formats, not write formats.
RENAME VARIABLES
RENAME VARIABLES changes the names of variables in the active dataset.
RENAME VARIABLES (OLD_NAMES=NEW_NAMES)... .
Specify lists of the old variable names and new variable names,
separated by an equals sign (=), within parentheses. There must be
the same number of old and new variable names. Each old variable is
renamed to the corresponding new variable name. Multiple
parenthesized groups of variables may be specified. When the old and
new variable names contain only a single variable name, the
parentheses are optional.
RENAME VARIABLES takes effect immediately. It does not cause the
data to be read.
RENAME VARIABLES may not be specified following
TEMPORARY.
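For example, using hypothetical variable names, the following renames one variable on its own and swaps in new names for a pair:
RENAME VARIABLES (OLDVAR=NEWVAR) (V1 V2=VAR1 VAR2).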
SORT VARIABLES
SORT VARIABLES reorders the variables in the active dataset's
dictionary according to a chosen sort key.
SORT VARIABLES [BY]
(NAME | TYPE | FORMAT | LABEL | VALUES | MISSING | MEASURE
| ROLE | COLUMNS | ALIGNMENT | ATTRIBUTE NAME)
[(D)].
The main specification is one of the following identifiers, which determines how the variables are sorted:
-
NAME
Sorts the variables according to their names, in a case-insensitive fashion. However, when variable names differ only in a number at the end, they are sorted numerically. For example, VAR5 is sorted before VAR400 even though 4 precedes 5.
-
TYPE
Sorts numeric variables before string variables, and shorter string variables before longer ones.
-
FORMAT
Groups variables by print format; within a format, sorts narrower formats before wider ones; with the same format and width, sorts fewer decimal places before more decimal places. See PRINT FORMATS.
-
LABEL
Sorts variables without a variable label before those with one. See VARIABLE LABELS.
-
VALUES
Sorts variables without value labels before those with some. See VALUE LABELS.
-
MISSING
Sorts variables without missing values before those with some. See MISSING VALUES.
-
MEASURE
Sorts nominal variables first, followed by ordinal variables, followed by scale variables. See VARIABLE LEVEL.
-
ROLE
Groups variables according to their role. See VARIABLE ROLE.
-
COLUMNS
Sorts variables in ascending display width. See VARIABLE WIDTH.
-
ALIGNMENT
Sorts variables according to their alignment, first left-aligned, then right-aligned, then centered. See VARIABLE ALIGNMENT.
-
ATTRIBUTE NAME
Sorts variables according to the first value of their NAME attribute. Variables without attributes are sorted first. See VARIABLE ATTRIBUTE.
Only one sort criterion can be specified. The sort is "stable," so to sort on multiple criteria one may perform multiple sorts. For example, the following will sort primarily based on alignment, with variables that have the same alignment ordered based on display width:
SORT VARIABLES BY COLUMNS.
SORT VARIABLES BY ALIGNMENT.
Specify (D) to reverse the sort order.
STRING
STRING creates new string variables.
STRING VAR_LIST (FMT_SPEC) [/VAR_LIST (FMT_SPEC)] [...].
Specify a list of names for the variables you want to create, followed by the desired output format in parentheses. Variable widths are implicitly derived from the specified output formats. The created variables are initialized to spaces.
If you want to create several variables with distinct output formats,
you can either use two or more separate STRING commands, or you can
specify further variable list and format specification pairs, each
separated from the previous by a slash (/).
The following example is one way to create three string variables; two
of the variables have format A24 and the other A80:
STRING firstname lastname (A24) / address (A80).
Here is another way to achieve the same result:
STRING firstname lastname (A24).
STRING address (A80).
... and here is yet another way:
STRING firstname (A24).
STRING lastname (A24).
STRING address (A80).
VALUE LABELS
The values of a variable can be associated with explanatory text strings. In this way, a short value can stand for a longer, more descriptive label.
Both numeric and string variables can be given labels. For string variables, the values are case-sensitive, so that, for example, a capitalized value and its lowercase variant would have to be labeled separately if both are present in the data.
VALUE LABELS
/VAR_LIST VALUE 'LABEL' [VALUE 'LABEL']...
VALUE LABELS allows values of variables to be associated with
labels.
To set up value labels for one or more variables, specify the variable
names after a slash (/), followed by a list of values and their
associated labels, separated by spaces.
Value labels in output are normally broken into lines automatically.
Put \n in a label string to force a line break at that point. The
label may still be broken into lines at additional points.
Before VALUE LABELS is executed, any existing value labels are
cleared from the variables specified. Use ADD VALUE LABELS to add value labels without clearing
those already present.
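As a short sketch (the variable names and labels here are illustrative, not part of any supplied dataset), the following labels the values of two variables and forces a line break in one label:

```
VALUE LABELS
  /sex 0 'Male' 1 'Female'
  /grade 1 'Low' 2 'Medium' 3 'Very\nhigh'.
```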
VARIABLE ALIGNMENT
VARIABLE ALIGNMENT sets the alignment of variables for display
editing purposes. It does not affect the display of variables in PSPP
output.
VARIABLE ALIGNMENT
VAR_LIST ( LEFT | RIGHT | CENTER )
[ /VAR_LIST ( LEFT | RIGHT | CENTER ) ]
.
.
.
[ /VAR_LIST ( LEFT | RIGHT | CENTER ) ]
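For example, assuming hypothetical variables firstname, lastname, and salary, the following left-aligns the two string variables and right-aligns the numeric one:

```
VARIABLE ALIGNMENT
  firstname lastname (LEFT)
  /salary (RIGHT).
```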
VARIABLE ATTRIBUTE
VARIABLE ATTRIBUTE adds, modifies, or removes user-defined attributes
associated with variables in the active dataset. Custom variable
attributes are not interpreted by PSPP, but they are saved as part of
system files and may be used by other software that reads them.
VARIABLE ATTRIBUTE
VARIABLES=VAR_LIST
ATTRIBUTE=NAME('VALUE') [NAME('VALUE')]...
ATTRIBUTE=NAME[INDEX]('VALUE') [NAME[INDEX]('VALUE')]...
DELETE=NAME [NAME]...
DELETE=NAME[INDEX] [NAME[INDEX]]...
The required VARIABLES subcommand must come first. Specify the
variables to which the following ATTRIBUTE or DELETE subcommand
should apply.
Use the ATTRIBUTE subcommand to add or modify custom variable
attributes. Specify the name of the attribute as an
identifier, followed by the desired
value, in parentheses, as a quoted string. The specified attributes
are then added or modified in the variables specified on VARIABLES.
Attribute names that begin with $ are reserved for PSPP's internal
use, and attribute names that begin with @ or $@ are not displayed
by most PSPP commands that display other attributes. Other attribute
names are not treated specially.
Attributes may also be organized into arrays. To assign to an array
element, add an integer array index enclosed in square brackets ([
and ]) between the attribute name and value. Array indexes start at
1, not 0. An attribute array that has a single element (number 1) is
not distinguished from a non-array attribute.
Use the DELETE subcommand to delete an attribute from the variable
specified on VARIABLES. Specify an attribute name by itself to
delete an entire attribute, including all array elements for attribute
arrays. Specify an attribute name followed by an array index in
square brackets to delete a single element of an attribute array. In
the latter case, all the array elements numbered higher than the
deleted element are shifted down, filling the vacated position.
To associate custom attributes with the entire active dataset, instead
of with particular variables, use DATAFILE ATTRIBUTE instead.
VARIABLE ATTRIBUTE takes effect immediately. It is not affected by
conditional and looping structures such as DO IF or LOOP.
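As an illustrative sketch (the variable name salary and the attribute names are invented for this example), the following adds a plain attribute and one array element to a variable, then deletes the array element again:

```
VARIABLE ATTRIBUTE
  VARIABLES=salary
  ATTRIBUTE=Source('Payroll database')
            Note[1]('Gross annual salary').
VARIABLE ATTRIBUTE
  VARIABLES=salary
  DELETE=Note[1].
```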
VARIABLE LABELS
Each variable can have a "label" to supplement its name. Whereas a variable name is a concise, easy-to-type mnemonic for the variable, a label may be longer and more descriptive.
VARIABLE LABELS
VARIABLE 'LABEL'
[VARIABLE 'LABEL']...
VARIABLE LABELS associates explanatory names with variables. This
name, called a "variable label", is displayed by statistical
procedures.
Specify each variable followed by its label as a quoted string.
Variable-label pairs may be separated by an optional slash /.
If a listed variable already has a label, the new one replaces it.
Specifying an empty string as the label, e.g. '', removes a label.
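For example (the variable names are illustrative), the following labels two variables and removes the label from a third:

```
VARIABLE LABELS
  bmi 'Body Mass Index'
  /dob 'Date of birth'
  /tmp ''.
```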
VARIABLE LEVEL
VARIABLE LEVEL variables ({SCALE | NOMINAL | ORDINAL})...
VARIABLE LEVEL sets the measurement
level of the listed variables.
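For example, assuming variables named income, region, and rating exist in the active dataset, the following assigns one measurement level to each:

```
VARIABLE LEVEL income (SCALE) region (NOMINAL) rating (ORDINAL).
```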
VARIABLE ROLE
VARIABLE ROLE
/ROLE VAR_LIST
[/ROLE VAR_LIST]...
VARIABLE ROLE sets the intended role of a variable for use in dialog
boxes in graphical user interfaces. Each ROLE specifies one of the
following roles for the variables that follow it:
-
INPUT
An input variable, such as an independent variable. -
TARGET
An output variable, such as a dependent variable. -
BOTH
A variable used for input and output. -
NONE
No role assigned. (This is a variable's default role.) -
PARTITION
Used to break the data into groups for testing. -
SPLIT
No meaning except for certain third party software. (This role's meaning is unrelated to SPLIT FILE.)
The PSPPIRE GUI does not yet use variable roles.
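As a sketch with hypothetical variable names, the following marks two variables as inputs and one as a target:

```
VARIABLE ROLE
  /INPUT age sex
  /TARGET salary.
```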
VARIABLE WIDTH
VARIABLE WIDTH
VAR_LIST (width)
[ /VAR_LIST (width) ]
.
.
.
[ /VAR_LIST (width) ]
VARIABLE WIDTH sets the column width of variables for display
editing purposes. It does not affect the display of variables in the
PSPP output.
VECTOR
Two possible syntaxes:
VECTOR VEC_NAME=VAR_LIST.
VECTOR VEC_NAME_LIST(COUNT [FORMAT]).
VECTOR allows a group of variables to be accessed as if they were
consecutive members of an array with a vector(index) notation.
To make a vector out of a set of existing variables, specify a name
for the vector followed by an equals sign (=) and the variables to
put in the vector. The variables must be all numeric or all string,
and string variables must have the same width.
To make a vector and create variables at the same time, specify one or
more vector names followed by a count in parentheses. This will
create variables named VEC1 through VEC<count>. By default, the
new variables are numeric with format F8.2, but an alternate format
may be specified inside the parentheses before or after the count and
separated from it by white space or a comma. With a string format
such as A8, the variables will be string variables; with a numeric
format, they will be numeric. Variable names including the suffixes
may not exceed 64 characters in length, and none of the variables may
exist prior to VECTOR.
Vectors created with VECTOR disappear after any procedure or
procedure-like command is executed. The variables contained in the
vectors remain, unless they are scratch
variables.
Variables within a vector may be referenced in expressions using
vector(index) syntax.
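Both syntaxes are illustrated below with hypothetical variable names. The first command groups three existing numeric variables into a vector; the second creates five new numeric variables q1 through q5 with format F2.0; the third shows the vector(index) notation in an expression:

```
VECTOR scores = score1 score2 score3.
VECTOR q(5, F2.0).
COMPUTE total = scores(1) + scores(2) + scores(3).
```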
WRITE FORMATS
WRITE FORMATS VAR_LIST (FMT_SPEC) [VAR_LIST (FMT_SPEC)]....
WRITE FORMATS sets the write formats for the specified variables to
the specified format specification. It has the same syntax as
FORMATS, but WRITE FORMATS sets only write formats,
not print formats.
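For example, assuming numeric variables income and age, the following sets their write formats without affecting their print formats:

```
WRITE FORMATS income (DOLLAR10.2) age (F3.0).
```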
Transforming Data
The PSPP procedures in this chapter manipulate data and prepare the active dataset for later analyses. They do not produce output.
AGGREGATE
AGGREGATE
[OUTFILE={*,'FILE_NAME',FILE_HANDLE} [MODE={REPLACE,ADDVARIABLES}]]
[/MISSING=COLUMNWISE]
[/PRESORTED]
[/DOCUMENT]
[/BREAK=VAR_LIST]
/DEST_VAR['LABEL']...=AGR_FUNC(SRC_VARS[, ARGS]...)...
AGGREGATE summarizes groups of cases into single cases. It divides
cases into groups that have the same values for one or more variables
called "break variables". Several functions are available for
summarizing case contents.
The AGGREGATE syntax consists of subcommands to control its
behavior, all of which are optional, followed by one or more
destination variable assignments, each of which uses an aggregation
function to define how it is calculated.
The OUTFILE subcommand, which must be first, names the destination
for AGGREGATE output. It may name a system file by file name or
file handle, a
dataset by its name, or * to
replace the active dataset. AGGREGATE writes its output to this
file.
With OUTFILE=* only, MODE may be specified immediately afterward
with the value ADDVARIABLES or REPLACE:
-
With
REPLACE, the default, the active dataset is replaced by a new dataset which contains just the break variables and the destination variables. The new file contains as many cases as there are unique combinations of the break variables. -
With
ADDVARIABLES, the destination variables are added to those in the existing active dataset. Cases that have the same combination of values in their break variables receive identical values for the destination variables. The number of cases in the active dataset remains unchanged. The data must be sorted on the break variables; that is, ADDVARIABLES implies PRESORTED.
Without OUTFILE, AGGREGATE acts as if OUTFILE=* MODE=ADDVARIABLES were specified.
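For example, assuming the cases are already sorted by occupation (as ADDVARIABLES requires), the following sketch adds a per-occupation mean salary to every existing case instead of replacing the cases:

```
AGGREGATE OUTFILE=* MODE=ADDVARIABLES
  /BREAK=occupation
  /mean_salary=MEAN(salary).
```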
By default, AGGREGATE first sorts the data on the break variables.
If the active dataset is already sorted or grouped by the break
variables, specify PRESORTED to save time. With
MODE=ADDVARIABLES, the data must be pre-sorted.
Specify DOCUMENT to copy the documents from the
active dataset into the aggregate file. Otherwise, the aggregate file
does not contain any documents, even if the aggregate file replaces
the active dataset.
Normally, AGGREGATE produces a non-missing value whenever there is
enough non-missing data for the aggregation function in use, that is,
just one non-missing value or, for the SD and SD. aggregation
functions, two non-missing values. Specify /MISSING=COLUMNWISE to
make AGGREGATE output a missing value when one or more of the input
values are missing.
The BREAK subcommand is optional but usually present. On BREAK,
list the variables used to divide the active dataset into groups to be
summarized.
AGGREGATE is particular about the order of subcommands. OUTFILE
must be first, followed by MISSING. PRESORTED and DOCUMENT
follow MISSING, in either order, followed by BREAK, then followed
by aggregation variable specifications.
At least one set of aggregation variables is required. Each set
comprises a list of aggregation variables, an equals sign (=), the
name of an aggregation function (see the list below), and a list of
source variables in parentheses. A few aggregation functions do not
accept source variables, and some aggregation functions expect
additional arguments after the source variable names.
AGGREGATE typically creates aggregation variables with no variable
label, value labels, or missing values. Their default print and write
formats depend on the aggregation function used, with details given in
the table below. A variable label for an aggregation variable may be
specified just after the variable's name in the aggregation variable
list.
Each set must have exactly as many source variables as aggregation variables. Each aggregation variable receives the results of applying the specified aggregation function to the corresponding source variable.
The following aggregation functions may be applied only to numeric variables:
-
MEAN(VAR_NAME...)
Arithmetic mean. Limited to numeric values. The default format is F8.2. -
MEDIAN(VAR_NAME...)
The median value. Limited to numeric values. The default format is F8.2. -
SD(VAR_NAME...)
Standard deviation. Limited to numeric values. The default format is F8.2. -
SUM(VAR_NAME...)
Sum. Limited to numeric values. The default format is F8.2.
These aggregation functions may be applied to numeric and string variables:
-
CGT(VAR_NAME..., VALUE)
CLT(VAR_NAME..., VALUE)
CIN(VAR_NAME..., LOW, HIGH)
COUT(VAR_NAME..., LOW, HIGH)
Total weight of cases greater than or less than VALUE or inside or outside the closed range [LOW,HIGH], respectively. The default format is F5.3. -
FGT(VAR_NAME..., VALUE)
FLT(VAR_NAME..., VALUE)
FIN(VAR_NAME..., LOW, HIGH)
FOUT(VAR_NAME..., LOW, HIGH)
Fraction of values greater than or less than VALUE or inside or outside the closed range [LOW,HIGH], respectively. The default format is F5.3. -
FIRST(VAR_NAME...)
LAST(VAR_NAME...)
First or last non-missing value, respectively, in break group. The aggregation variable receives the complete dictionary information from the source variable. The sort performed by AGGREGATE (and by SORT CASES) is stable. This means that the first (or last) case with particular values for the break variables before sorting is also the first (or last) case in that break group after sorting. -
MIN(VAR_NAME...)
MAX(VAR_NAME...)
Minimum or maximum value, respectively. The aggregation variable receives the complete dictionary information from the source variable. -
N(VAR_NAME...)
NMISS(VAR_NAME...)
Total weight of non-missing or missing values, respectively. The default format is F7.0 if weighting is not enabled, F8.2 if it is (see WEIGHT). -
NU(VAR_NAME...)
NUMISS(VAR_NAME...)
Count of non-missing or missing values, respectively, ignoring case weights. The default format is F7.0. -
PGT(VAR_NAME..., VALUE)
PLT(VAR_NAME..., VALUE)
PIN(VAR_NAME..., LOW, HIGH)
POUT(VAR_NAME..., LOW, HIGH)
Percentage between 0 and 100 of values greater than or less than VALUE or inside or outside the closed range [LOW,HIGH], respectively. The default format is F5.1.
These aggregation functions do not accept source variables:
-
N
Total weight of cases aggregated to form this group. The default format is F7.0 if weighting is not enabled, F8.2 if it is (see WEIGHT). -
NU
Count of cases aggregated to form this group, ignoring case weights. The default format is F7.0.
Aggregation functions compare string values in terms of Unicode character codes.
The aggregation functions listed above exclude all user-missing values
from calculations. To include user-missing values, insert a period
(.) at the end of the function name (e.g. SUM.). (Be aware that
specifying such a function as the last token on a line causes the
period to be interpreted as the end of the command.)
AGGREGATE both ignores and cancels the current SPLIT FILE settings.
Example
The personnel.sav dataset provides the occupations and salaries of
many individuals. For many purposes, however, such detailed information
is less interesting than the aggregated statistics for each occupation.
Here, the AGGREGATE command is used to calculate the mean, the median,
and the standard deviation of the salary within each occupation.
GET FILE="personnel.sav".
AGGREGATE OUTFILE=* MODE=REPLACE
/BREAK=occupation
/occ_mean_salary=MEAN(salary)
/occ_median_salary=MEDIAN(salary)
/occ_std_dev_salary=SD(salary).
LIST.
Since we chose the MODE=REPLACE option, cases for the individual
persons are no longer present. They have each been replaced by a
single case per aggregated value.
Data List
┌──────────────────┬───────────────┬─────────────────┬──────────────────┐
│ occupation │occ_mean_salary│occ_median_salary│occ_std_dev_salary│
├──────────────────┼───────────────┼─────────────────┼──────────────────┤
│Artist │ 37836.18│ 34712.50│ 7631.48│
│Baker │ 45075.20│ 45075.20│ 4411.21│
│Barrister │ 39504.00│ 39504.00│ .│
│Carpenter │ 39349.11│ 36190.04│ 7453.40│
│Cleaner │ 41142.50│ 39647.49│ 14378.98│
│Cook │ 40357.79│ 43194.00│ 11064.51│
│Manager │ 46452.14│ 45657.56│ 6901.69│
│Mathematician │ 34531.06│ 34763.06│ 5267.68│
│Painter │ 45063.55│ 45063.55│ 15159.67│
│Payload Specialist│ 34355.72│ 34355.72│ .│
│Plumber │ 40413.91│ 40410.00│ 4726.05│
│Scientist │ 36687.07│ 36803.83│ 10873.54│
│Scrientist │ 42530.65│ 42530.65│ .│
│Tailor │ 34586.79│ 34586.79│ 3728.98│
└──────────────────┴───────────────┴─────────────────┴──────────────────┘
Some values for the standard deviation are blank because there is only one case with the respective occupation.
AUTORECODE
AUTORECODE VARIABLES=SRC_VARS INTO DEST_VARS
[ /DESCENDING ]
[ /PRINT ]
[ /GROUP ]
[ /BLANK = {VALID, MISSING} ]
The AUTORECODE procedure considers the N values that a variable
takes on and maps them onto values 1...N on a new numeric variable.
Subcommand VARIABLES is the only required subcommand and must come
first. Specify VARIABLES, an equals sign (=), a list of source
variables, INTO, and a list of target variables. There must be the
same number of source and target variables. The target variables must
not already exist.
AUTORECODE ordinarily assigns each increasing non-missing value of a
source variable (for a string, this is based on character code
comparisons) to consecutive values of its target variable. For
example, the smallest non-missing value of the source variable is
recoded to value 1, the next smallest to 2, and so on. If the source
variable has user-missing values, they are recoded to consecutive
values just above the non-missing values. For example, if a source
variable has seven distinct non-missing values, then the smallest
missing value would be recoded to 8, the next smallest to 9, and so
on.
Use DESCENDING to reverse the sort order for non-missing values, so
that the largest non-missing value is recoded to 1, the second-largest
to 2, and so on. Even with DESCENDING, user-missing values are
still recoded in ascending order just above the non-missing values.
The system-missing value is always recoded to the system-missing value in the target variables.
If a source value has a value label, then that value label is retained for the new value in the target variable. Otherwise, the source value itself becomes each new value's label.
Variable labels are copied from the source to target variables.
PRINT is currently ignored.
The GROUP subcommand is relevant only if more than one variable is
to be recoded. It causes a single mapping between source and target
values to be used, instead of one map per variable. With GROUP,
user-missing values are taken from the first source variable that has
any user-missing values.
If /BLANK=MISSING is given, then string variables which contain
only whitespace are recoded as SYSMIS. If /BLANK=VALID is specified
then they are allocated a value like any other. /BLANK is not
relevant to numeric values. /BLANK=VALID is the default.
AUTORECODE is a procedure. It causes the data to be read.
Example
In the file personnel.sav, the variable occupation is a string
variable. Except for data of a purely commentary nature, string
variables are generally a bad idea. One reason is that data entry
errors are easily overlooked. This has happened in personnel.sav;
one entry which should read "Scientist" has been mistyped as
"Scrientist". The syntax below shows how to correct this error in the
DO IF clause1, which then uses AUTORECODE to create a new numeric
variable which takes recoded values of occupation. Finally, we remove
the old variable and rename the new variable to the name of the old
variable:
get file='personnel.sav'.
* Correct a typing error in the original file.
do if occupation = "Scrientist".
compute occupation = "Scientist".
end if.
autorecode
variables = occupation into occ
/blank = missing.
* Delete the old variable.
delete variables occupation.
* Rename the new variable to the old variable's name.
rename variables (occ = occupation).
* Inspect the new variable.
display dictionary /variables=occupation.
Notice, in the output below, how the new variable has been automatically allocated value labels which correspond to the strings of the old variable. This means that in future analyses the descriptive strings are reported instead of the numeric values.
Variables
+----------+--------+--------------+-----+-----+---------+----------+---------+
| | | Measurement | | | | Print | Write |
|Name |Position| Level | Role|Width|Alignment| Format | Format |
+----------+--------+--------------+-----+-----+---------+----------+---------+
|occupation| 6|Unknown |Input| 8|Right |F2.0 |F2.0 |
+----------+--------+--------------+-----+-----+---------+----------+---------+
Value Labels
+---------------+------------------+
|Variable Value | Label |
+---------------+------------------+
|occupation 1 |Artist |
| 2 |Baker |
| 3 |Barrister |
| 4 |Carpenter |
| 5 |Cleaner |
| 6 |Cook |
| 7 |Manager |
| 8 |Mathematician |
| 9 |Painter |
| 10 |Payload Specialist|
| 11 |Plumber |
| 12 |Scientist |
| 13 |Tailor |
+---------------+------------------+
-
One must use care when correcting such data input errors rather than simply marking them as missing. For example, if an occupation has been entered "Barister", did the person mean "Barrister" or "Barista"? ↩
COMPUTE
COMPUTE VARIABLE = EXPRESSION.
or
COMPUTE vector(INDEX) = EXPRESSION.
COMPUTE assigns the value of an expression to a target variable.
For each case, the expression is evaluated and its value assigned to
the target variable. Numeric and string variables may be assigned.
When a string expression's width differs from the target variable's
width, the string result of the expression is truncated or padded with
spaces on the right as necessary. The expression and variable types
must match.
For numeric variables only, the target variable need not already
exist. Numeric variables created by COMPUTE are assigned an F8.2
output format. String variables must be declared before they can be
used as targets for COMPUTE.
The target variable may be specified as an element of a
vector. In this case, an expression INDEX must be
specified in parentheses following the vector name. The expression
INDEX must evaluate to a numeric value that, after rounding down to
the nearest integer, is a valid index for the named vector.
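As a sketch, assuming a vector declared with VECTOR and a numeric variable i (hypothetical) holding the number of the element to set:

```
VECTOR response(10).
COMPUTE response(i) = 1.
```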
Using COMPUTE to assign to a variable specified on
LEAVE resets the variable's left state. Therefore,
LEAVE should be specified following COMPUTE, not before.
COMPUTE is a transformation. It does not cause the active dataset
to be read.
When COMPUTE is specified following TEMPORARY, the
LAG
function may not be used.
Example
The dataset physiology.sav contains the height and weight of
persons. For some purposes, neither height nor weight alone is of
interest. Epidemiologists are often more interested in the "body mass
index" which can sometimes be used as a predictor for clinical
conditions. The body mass index is defined as the weight of the
person in kilograms divided by the square of the person's height in
metres.1
get file='physiology.sav'.
* height is in mm so we must divide by 1000 to get metres.
compute bmi = weight / (height/1000)**2.
variable label bmi "Body Mass Index".
descriptives /weight height bmi.
This syntax shows how you can use COMPUTE to generate a new variable
called bmi and have every case's value calculated from the existing
values of weight and height. It also shows how you can add a
label to this new variable, so that a more
descriptive label appears in subsequent analyses, and this can be seen
in the output from the DESCRIPTIVES command, below.
The expression which follows the = sign can be as complicated as
necessary. See Expressions for
a full description of the language accepted.
Descriptive Statistics
┌─────────────────────┬──┬───────┬───────┬───────┬───────┐
│ │ N│ Mean │Std Dev│Minimum│Maximum│
├─────────────────────┼──┼───────┼───────┼───────┼───────┤
│Weight in kilograms │40│ 72.12│ 26.70│ ─55.6│ 92.1│
│Height in millimeters│40│1677.12│ 262.87│ 179│ 1903│
│Body Mass Index │40│ 67.46│ 274.08│ ─21.62│1756.82│
│Valid N (listwise) │40│ │ │ │ │
│Missing N (listwise) │ 0│ │ │ │ │
└─────────────────────┴──┴───────┴───────┴───────┴───────┘
-
Since BMI is a quantity with a ratio scale and has units, the term "index" is a misnomer, but that is what it is called. ↩
FLIP
FLIP /VARIABLES=VAR_LIST /NEWNAMES=VAR_NAME.
FLIP transposes rows and columns in the active dataset. It causes
cases to be swapped with variables, and vice versa.
All variables in the transposed active dataset are numeric. String variables take on the system-missing value in the transposed file.
No subcommands are required. If specified, the VARIABLES
subcommand selects variables to be transformed into cases, and variables
not specified are discarded. If the VARIABLES subcommand is omitted,
all variables are selected for transposition.
The variable specified by NEWNAMES, which must be a string
variable, is used to give names to the variables created by FLIP.
Only the first 8 characters of the variable are used. If NEWNAMES
is not specified then the default is a variable named CASE_LBL, if it
exists. If it does not then the variables created by FLIP are named
VAR000 through VAR999, then VAR1000, VAR1001, and so on.
When a NEWNAMES variable is available, the names must be
canonicalized before becoming variable names. Invalid characters are
replaced by the letter V in the first position, or by _ in subsequent
positions. If the name thus generated is not unique, then numeric
extensions are added, starting with 1, until a unique name is found or
there are no remaining possibilities. If the latter occurs then the
FLIP operation aborts.
The resultant dictionary contains a CASE_LBL variable, a string
variable of width 8, which stores the names of the variables in the
dictionary before the transposition. Variable names longer than 8
characters are truncated. If FLIP is called again on this dataset,
the CASE_LBL variable can be passed to the NEWNAMES subcommand to
recreate the original variable names.
FLIP honors N OF CASES. It ignores
TEMPORARY, so that "temporary"
transformations become permanent.
Example
In the syntax below, data has been entered using DATA LIST such that the first
variable in the dataset is a string variable containing a description
of the other data for the case. Clearly this is not a convenient
arrangement for performing statistical analyses, so it would have been
better to think a little more carefully about how the data should have
been arranged. However, the data is often provided by a third-party
source over whose form you have no control. Fortunately, we can
use FLIP to exchange the variables and cases in the active dataset.
data list notable list /heading (a16) v1 v2 v3 v4 v5 v6.
begin data.
date-of-birth 1970 1989 2001 1966 1976 1982
sex 1 0 0 1 0 1
score 10 10 9 3 8 9
end data.
echo 'Before FLIP:'.
display variables.
list.
flip /variables = all /newnames = heading.
echo 'After FLIP:'.
display variables.
list.
As you can see in the results below, before the FLIP command has run
there are seven variables (six containing data and one for the
heading) and three cases. Afterwards there are four variables (one
per case, plus the CASE_LBL variable) and six cases. You can delete
the CASE_LBL variable (see DELETE VARIABLES) if
you don't need it.
Before FLIP:
Variables
┌───────┬────────┬────────────┬────────────┐
│Name │Position│Print Format│Write Format│
├───────┼────────┼────────────┼────────────┤
│heading│ 1│A16 │A16 │
│v1 │ 2│F8.2 │F8.2 │
│v2 │ 3│F8.2 │F8.2 │
│v3 │ 4│F8.2 │F8.2 │
│v4 │ 5│F8.2 │F8.2 │
│v5 │ 6│F8.2 │F8.2 │
│v6 │ 7│F8.2 │F8.2 │
└───────┴────────┴────────────┴────────────┘
Data List
┌─────────────┬───────┬───────┬───────┬───────┬───────┬───────┐
│ heading │ v1 │ v2 │ v3 │ v4 │ v5 │ v6 │
├─────────────┼───────┼───────┼───────┼───────┼───────┼───────┤
│date─of─birth│1970.00│1989.00│2001.00│1966.00│1976.00│1982.00│
│sex │ 1.00│ .00│ .00│ 1.00│ .00│ 1.00│
│score │ 10.00│ 10.00│ 9.00│ 3.00│ 8.00│ 9.00│
└─────────────┴───────┴───────┴───────┴───────┴───────┴───────┘
After FLIP:
Variables
┌─────────────┬────────┬────────────┬────────────┐
│Name │Position│Print Format│Write Format│
├─────────────┼────────┼────────────┼────────────┤
│CASE_LBL │ 1│A8 │A8 │
│date_of_birth│ 2│F8.2 │F8.2 │
│sex │ 3│F8.2 │F8.2 │
│score │ 4│F8.2 │F8.2 │
└─────────────┴────────┴────────────┴────────────┘
Data List
┌────────┬─────────────┬────┬─────┐
│CASE_LBL│date_of_birth│ sex│score│
├────────┼─────────────┼────┼─────┤
│v1 │ 1970.00│1.00│10.00│
│v2 │ 1989.00│ .00│10.00│
│v3 │ 2001.00│ .00│ 9.00│
│v4 │ 1966.00│1.00│ 3.00│
│v5 │ 1976.00│ .00│ 8.00│
│v6 │ 1982.00│1.00│ 9.00│
└────────┴─────────────┴────┴─────┘
IF
IF CONDITION VARIABLE=EXPRESSION.
or
IF CONDITION vector(INDEX)=EXPRESSION.
The IF transformation evaluates a test expression and, if it is
true, assigns the value of a target expression to a target variable.
Specify a boolean-valued test
expression to be tested following the
IF keyword. The test expression is evaluated for each case:
-
If it is true, then the target expression is evaluated and assigned to the specified variable.
-
If it is false or missing, nothing is done.
Numeric and string variables may be assigned. When a string expression's width differs from the target variable's width, the string result is truncated or padded with spaces on the right as necessary. The expression and variable types must match.
The target variable may be specified as an element of a vector. In this case, a vector index expression must be specified in parentheses following the vector name. The index expression must evaluate to a numeric value that, after rounding down to the nearest integer, is a valid index for the named vector.
Using IF to assign to a variable specified on LEAVE
resets the variable's left state. Therefore, use LEAVE after IF,
not before.
When IF follows TEMPORARY, the
LAG function
may not be used.
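A short sketch, assuming a numeric variable score already exists. The target variable is first initialized with COMPUTE, then reassigned only for cases where a condition holds (a case with score above 80 satisfies both conditions, so the second IF leaves it at 2):

```
COMPUTE category = 0.
IF (score > 80) category = 2.
IF (score > 50 AND score <= 80) category = 1.
```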
RECODE
The RECODE command is used to transform existing values into other,
user specified values. The general form is:
RECODE SRC_VARS
(SRC_VALUE SRC_VALUE ... = DEST_VALUE)
(SRC_VALUE SRC_VALUE ... = DEST_VALUE)
(SRC_VALUE SRC_VALUE ... = DEST_VALUE) ...
[INTO DEST_VARS].
Following the RECODE keyword itself comes SRC_VARS, a list of
variables whose values are to be transformed. These variables must
be all string or all numeric variables.
After the list of source variables, there should be one or more
"mappings". Each mapping is enclosed in parentheses, and contains the
source values and a destination value separated by a single =. The
source values are used to specify the values in the dataset which need
to change, and the destination value specifies the new value to which
they should be changed. Each SRC_VALUE may take one of the following
forms:
-
NUMBER(numeric source variables only)
Matches a number. -
STRING(string source variables only)
Matches a string enclosed in single or double quotes. -
NUM1 THRU NUM2(numeric source variables only)
Matches all values in the range between NUM1 and NUM2, including both endpoints of the range. NUM1 should be less than NUM2. Open-ended ranges may be specified using LO or LOWEST for NUM1 or HI or HIGHEST for NUM2. -
MISSING
Matches system missing and user missing values. -
SYSMIS(numeric source variables only)
Matches system-missing values. -
ELSE
Matches any values that are not matched by any other SRC_VALUE. This should appear only as the last mapping in the command.
After the source variables comes an = and then the DEST_VALUE,
which may take any of the following forms:
-
NUMBER(numeric destination variables only)
A literal numeric value to which the source values should be changed. -
STRING(string destination variables only)
A literal string value (enclosed in quotation marks) to which the source values should be changed. This implies the destination variable must be a string variable. -
SYSMIS(numeric destination variables only)
The keyword SYSMIS changes the value to the system-missing value. This implies the destination variable must be numeric. -
COPY
The special keywordCOPYmeans that the source value should not be modified, but copied directly to the destination value. This is meaningful only ifINTO DEST_VARSis specified.
Mappings are considered from left to right. Therefore, if a value is
matched by a SRC_VALUE from more than one mapping, the first
(leftmost) mapping which matches is considered. Any subsequent
matches are ignored.
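As an illustrative sketch of this rule (using a hypothetical numeric
variable x), the value 5 below matches both mappings, but only the
first applies, so 5 is recoded to 1 and the second mapping never
fires:

RECODE x (1 THRU 10 = 1) (5 = 2).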
The clause INTO DEST_VARS is optional. The behaviour of the command
is slightly different depending on whether it appears or not:
- Without INTO DEST_VARS, values are recoded "in place". This means
  that the recoded values are written back to the source variables
  from which the original values came. In this case, the DEST_VALUE
  for every mapping must imply a value which has the same type as the
  SRC_VALUE. For example, if the source value is a string value, it
  is not permissible for DEST_VALUE to be SYSMIS or another form
  which implies a numeric result. It is also not permissible for
  DEST_VALUE to be longer than the width of the source variable.

  The following example recodes two numeric variables x and y in
  place. 0 becomes 99, the values 1 to 10 inclusive are unchanged,
  values 1000 and higher are recoded to the system-missing value, and
  all other values are changed to 999:

  RECODE x y
    (0 = 99)
    (1 THRU 10 = COPY)
    (1000 THRU HIGHEST = SYSMIS)
    (ELSE = 999).

- With INTO DEST_VARS, recoded values are written into the variables
  specified in DEST_VARS, which must therefore contain a list of
  valid variable names. The number of variables in DEST_VARS must be
  the same as the number of variables in SRC_VARS, and the respective
  order of the variables in DEST_VARS corresponds to the order of
  SRC_VARS. That is to say, the recoded value whose original value
  came from the Nth variable in SRC_VARS is placed into the Nth
  variable in DEST_VARS. The source variables are unchanged. If any
  mapping implies a string as its destination value, then the
  respective destination variable must already exist, or have been
  declared using STRING or another transformation. Numeric variables
  however are automatically created if they don't already exist.

  The following example deals with two source variables, a and b,
  which contain string values. Hence there are two destination
  variables, v1 and v2. Any cases where a or b contain the values
  apple, pear or pomegranate result in v1 or v2 being filled with the
  string fruit, whilst cases with tomato, lettuce or carrot result in
  vegetable. Other values produce the result unknown:

  STRING v1 (A20).
  STRING v2 (A20).

  RECODE a b
    ("apple" "pear" "pomegranate" = "fruit")
    ("tomato" "lettuce" "carrot" = "vegetable")
    (ELSE = "unknown")
    INTO v1 v2.
There is one special mapping, not mentioned above. If the source
variable is a string variable then a mapping may be specified as
(CONVERT). This mapping, if it appears, must be the last mapping
given, and the INTO DEST_VARS clause must also be given and must not
refer to a string variable. CONVERT causes a number specified as a
string to be converted to a numeric value. For example it converts
the string "3" into the numeric value 3 (note that it does not
convert three into 3). If the string cannot be parsed as a number,
then the system-missing value is assigned instead. In the following
example, cases where the value of x (a string variable) is the empty
string are recoded to 999, and all others are converted to the numeric
equivalent of the input value. The results are placed into the
numeric variable y:
RECODE x ("" = 999) (CONVERT) INTO y.
It is possible to specify multiple recodings on a single command.
Introduce additional recodings with a slash (/) to separate them from
the previous recodings:
RECODE
a (2 = 22) (ELSE = 99)
/b (1 = 3) INTO z.
Here we have two recodings. The first affects the source variable a
and recodes in-place the value 2 into 22 and all other values to 99.
The second recoding copies the values of b into the variable z,
changing any instances of 1 into 3.
SORT CASES
SORT CASES BY VAR_LIST [({D|A})] [ VAR_LIST [({D|A})] ] ...
SORT CASES sorts the active dataset by the values of one or more
variables.
Specify BY and a list of variables to sort by. By default,
variables are sorted in ascending order. To override sort order,
specify (D) or (DOWN) after a list of variables to get descending
order, or (A) or (UP) for ascending order. These apply to all the
listed variables up until the preceding (A), (D), (UP) or
(DOWN).
SORT CASES performs a stable sort, meaning that records with equal
values of the sort variables have the same relative order before and
after sorting. Thus, re-sorting an already sorted file does not
affect the ordering of cases.
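Because of this stability, a multi-key ordering can also be built up
in stages. The following sketch (using hypothetical variables major
and minor) sorts by the secondary key first and then by the primary
key, so that ties on major remain ordered by minor:

sort cases by minor.
sort cases by major.

The result is equivalent to a single "sort cases by major minor."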
SORT CASES is a procedure. It causes the data to be read.
SORT CASES attempts to sort the entire active dataset in main
memory. If workspace is exhausted, it falls back to a merge sort
algorithm which creates numerous temporary files.
SORT CASES may not be specified following TEMPORARY.
Example
In the syntax below, the data from the file physiology.sav is sorted
by two variables, viz sex in descending order and temperature in
ascending order.
get file='physiology.sav'.
sort cases by sex (D) temperature(A).
list.
In the output below, you can see that all the cases with a sex of
1 (female) appear before those with a sex of 0 (male). This is
because they have been sorted in descending order. Within each sex,
the data is sorted on the temperature variable, this time in ascending
order.
Data List
┌───┬──────┬──────┬───────────┐
│sex│height│weight│temperature│
├───┼──────┼──────┼───────────┤
│ 1│ 1606│ 56.1│ 34.56│
│ 1│ 179│ 56.3│ 35.15│
│ 1│ 1609│ 55.4│ 35.46│
│ 1│ 1606│ 56.0│ 36.06│
│ 1│ 1607│ 56.3│ 36.26│
│ 1│ 1604│ 56.0│ 36.57│
│ 1│ 1604│ 56.6│ 36.81│
│ 1│ 1606│ 56.3│ 36.88│
│ 1│ 1604│ 57.8│ 37.32│
│ 1│ 1598│ 55.6│ 37.37│
│ 1│ 1607│ 55.9│ 37.84│
│ 1│ 1605│ 54.5│ 37.86│
│ 1│ 1603│ 56.1│ 38.80│
│ 1│ 1604│ 58.1│ 38.85│
│ 1│ 1605│ 57.7│ 38.98│
│ 1│ 1709│ 55.6│ 39.45│
│ 1│ 1604│ -55.6│ 39.72│
│ 1│ 1601│ 55.9│ 39.90│
│ 0│ 1799│ 90.3│ 32.59│
│ 0│ 1799│ 89.0│ 33.61│
│ 0│ 1799│ 90.6│ 34.04│
│ 0│ 1801│ 90.5│ 34.42│
│ 0│ 1802│ 87.7│ 35.03│
│ 0│ 1793│ 90.1│ 35.11│
│ 0│ 1801│ 92.1│ 35.98│
│ 0│ 1800│ 89.5│ 36.10│
│ 0│ 1645│ 92.1│ 36.68│
│ 0│ 1698│ 90.2│ 36.94│
│ 0│ 1800│ 89.6│ 37.02│
│ 0│ 1800│ 88.9│ 37.03│
│ 0│ 1801│ 88.9│ 37.12│
│ 0│ 1799│ 90.4│ 37.33│
│ 0│ 1903│ 91.5│ 37.52│
│ 0│ 1799│ 90.9│ 37.53│
│ 0│ 1800│ 91.0│ 37.60│
│ 0│ 1799│ 90.4│ 37.68│
│ 0│ 1801│ 91.7│ 38.98│
│ 0│ 1801│ 90.9│ 39.03│
│ 0│ 1799│ 89.3│ 39.77│
│ 0│ 1884│ 88.6│ 39.97│
└───┴──────┴──────┴───────────┘
SORT CASES affects only the active file. It does not have any
effect upon the physiology.sav file itself. For that, you would
have to rewrite the file using the SAVE command.
Selecting Data
This chapter documents PSPP commands that temporarily or permanently select data records from the active dataset for analysis.
FILTER
FILTER BY VAR_NAME.
FILTER OFF.
FILTER allows a boolean-valued variable to be used to select cases
from the data stream for processing.
To set up filtering, specify BY and a variable name. Keyword BY is
optional but recommended. Cases which have a zero or system- or
user-missing value are excluded from analysis, but not deleted from
the data stream. Cases with other values are analyzed. To filter
based on a different condition, use transformations such as COMPUTE
or RECODE to compute a filter variable of the required form, then
specify that variable on FILTER.
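As a sketch of that approach (assuming a hypothetical numeric variable
age), one might compute an indicator variable and filter on it:

compute adult = (age >= 18).
filter by adult.

Cases where age is below 18, or where the expression evaluates to
missing, are then excluded from subsequent analyses until FILTER OFF.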
FILTER OFF turns off case filtering.
Filtering takes place immediately before cases pass to a procedure for
analysis. Only one filter variable may be active at a time.
Normally, case filtering continues until it is explicitly turned off
with FILTER OFF. However, if FILTER is placed after TEMPORARY,
it filters only the next procedure or procedure-like command.
N OF CASES
N [OF CASES] NUM_OF_CASES [ESTIMATED].
N OF CASES limits the number of cases processed by any procedures
that follow it in the command stream. N OF CASES 100, for example,
tells PSPP to disregard all cases after the first 100.
When N OF CASES is specified after TEMPORARY, it
affects only the next procedure. Otherwise, cases beyond the limit
specified are not processed by any later procedure.
If the limit specified on N OF CASES is greater than the number of
cases in the active dataset, it has no effect.
When N OF CASES is used along with SAMPLE or SELECT IF, the
case limit is applied to the cases obtained after sampling or case
selection, regardless of how N OF CASES is placed relative to SAMPLE
or SELECT IF in the command file. Thus, the commands N OF CASES 100
and SAMPLE .5 both randomly sample approximately half of the active
dataset's cases, then select the first 100 of those sampled, regardless
of their order in the command file.
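The interaction described above can be sketched as follows; with
either ordering of these two commands, PSPP first draws the sample
and then applies the case limit to the sampled cases:

sample .5.
n of cases 100.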
N OF CASES with the ESTIMATED keyword gives an estimated number of
cases before DATA LIST or another command to read in data.
ESTIMATED never limits the number of cases processed by procedures.
PSPP currently does not use case count estimates.
SAMPLE
SAMPLE NUM1 [FROM NUM2].
SAMPLE randomly samples a proportion of the cases in the active
file. Unless it follows TEMPORARY, it permanently removes cases
from the active dataset.
The proportion to sample may be expressed as a single number between 0
and 1. If N is the number of currently-selected cases in the active
dataset, then SAMPLE K. will select approximately K×N cases.
The proportion to sample can also be specified in the style SAMPLE M
FROM N. With this style, cases are selected as follows:

- If N is equal to the number of currently-selected cases in the
  active dataset, exactly M cases are selected.

- If N is greater than the number of currently-selected cases in the
  active dataset, an equivalent proportion of cases is selected.

- If N is less than the number of currently-selected cases in the
  active dataset, exactly M cases are selected from the first N cases
  in the active dataset.
SAMPLE and SELECT IF are performed in the order specified by the
syntax file.
SAMPLE is always performed before N OF CASES, regardless
of ordering in the syntax file.
The same values for SAMPLE may result in different samples. To
obtain the same sample, use the SET command to set the random number
seed to the same value before each SAMPLE. Different samples may
still result when the file is processed on systems with different
machine types or PSPP versions. By default, the random number seed is
based on the system time.
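Setting the seed first, as sketched below with an arbitrary value,
makes the sample reproducible on the same system and PSPP version:

set seed=54321.
sample .5.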
SELECT IF
SELECT IF EXPRESSION.
SELECT IF selects cases for analysis based on the value of
EXPRESSION. Cases not selected are permanently eliminated from the
active dataset, unless TEMPORARY is in effect.
Specify a boolean expression. If the expression is true for a
particular case, the case is analyzed. If the expression is false or
missing, then the case is deleted from the data stream.
Place SELECT IF early in the command file. Cases that are deleted
early can be processed more efficiently in time and space. Once cases
have been deleted from the active dataset using SELECT IF they
cannot be re-instated. If you want to be able to re-instate cases,
then use FILTER instead.
When SELECT IF is specified following TEMPORARY, the LAG
function may not be used.
Example
A shop steward is interested in the salaries of younger personnel in a
firm. The file personnel.sav provides the salaries of all the
workers and their dates of birth. The syntax below shows how SELECT IF can be used to limit analysis only to those persons born after
December 31, 1999.
get file = 'personnel.sav'.
echo 'Salaries of all personnel'.
descriptives salary.
echo 'Salaries of personnel born after December 31 1999'.
select if dob > date.dmy (31,12,1999).
descriptives salary.
From the output shown below, one can see that there are 56 persons listed in the dataset, and 17 of them were born after December 31, 1999.
Salaries of all personnel
Descriptive Statistics
┌────────────────────────┬──┬────────┬───────┬───────┬───────┐
│ │ N│ Mean │Std Dev│Minimum│Maximum│
├────────────────────────┼──┼────────┼───────┼───────┼───────┤
│Annual salary before tax│56│40028.97│8721.17│$23,451│$57,044│
│Valid N (listwise) │56│ │ │ │ │
│Missing N (listwise) │ 0│ │ │ │ │
└────────────────────────┴──┴────────┴───────┴───────┴───────┘
Salaries of personnel born after December 31 1999
Descriptive Statistics
┌────────────────────────┬──┬────────┬───────┬───────┬───────┐
│ │ N│ Mean │Std Dev│Minimum│Maximum│
├────────────────────────┼──┼────────┼───────┼───────┼───────┤
│Annual salary before tax│17│31828.59│4454.80│$23,451│$39,504│
│Valid N (listwise) │17│ │ │ │ │
│Missing N (listwise) │ 0│ │ │ │ │
└────────────────────────┴──┴────────┴───────┴───────┴───────┘
Note that the personnel.sav file from which the data were read is
unaffected. The transformation affects only the active file.
SPLIT FILE
SPLIT FILE [{LAYERED, SEPARATE}] BY VAR_LIST.
SPLIT FILE OFF.
SPLIT FILE allows multiple sets of data present in one data file to
be analyzed separately using single statistical procedure commands.
Specify a list of variable names to analyze multiple sets of data separately. Groups of adjacent cases having the same values for these variables are analyzed by statistical procedure commands as one group. An independent analysis is carried out for each group of cases, and the variable values for the group are printed along with the analysis.
When a list of variable names is specified, one of the keywords
LAYERED or SEPARATE may also be specified. With LAYERED, which
is the default, the separate analyses for each group are presented
together in a single table. With SEPARATE, each analysis is
presented in a separate table. Not all procedures honor the
distinction.
Groups are formed only by adjacent cases. To create a split using a variable where like values are not adjacent in the working file, first sort the data by that variable.
Specify OFF to disable SPLIT FILE and resume analysis of the
entire active dataset as a single group of data.
When SPLIT FILE is specified after TEMPORARY, it
affects only the next procedure.
Example
The file horticulture.sav contains data describing the yield of a
number of horticultural specimens which have been subjected to various
treatments. If we wanted to investigate linear statistics of the
yield, one way to do this is using DESCRIPTIVES.
However, it is reasonable to expect the mean to be different depending
on the treatment. So we might want to perform three separate
procedures -- one for each treatment.¹ The following syntax shows
how this can be done automatically using the SPLIT FILE command.
get file='horticulture.sav'.
* Ensure cases are sorted before splitting.
sort cases by treatment.
split file by treatment.
* Run descriptives on the yield variable
descriptives /variable = yield.
In the following output, you can see that the table of descriptive
statistics appears three times: once for each value of treatment. In
this example the number of observations, N, is identical in all
splits, because the experiment was deliberately designed that way.
In general, however, one can expect a different N for each split.
Split Values
┌─────────┬───────┐
│Variable │ Value │
├─────────┼───────┤
│treatment│control│
└─────────┴───────┘
Descriptive Statistics
┌────────────────────┬──┬─────┬───────┬───────┬───────┐
│ │ N│ Mean│Std Dev│Minimum│Maximum│
├────────────────────┼──┼─────┼───────┼───────┼───────┤
│yield │30│51.23│ 8.28│ 37.86│ 68.59│
│Valid N (listwise) │30│ │ │ │ │
│Missing N (listwise)│ 0│ │ │ │ │
└────────────────────┴──┴─────┴───────┴───────┴───────┘
Split Values
┌─────────┬────────────┐
│Variable │ Value │
├─────────┼────────────┤
│treatment│conventional│
└─────────┴────────────┘
Descriptive Statistics
┌────────────────────┬──┬─────┬───────┬───────┬───────┐
│ │ N│ Mean│Std Dev│Minimum│Maximum│
├────────────────────┼──┼─────┼───────┼───────┼───────┤
│yield │30│53.57│ 8.92│ 36.30│ 70.66│
│Valid N (listwise) │30│ │ │ │ │
│Missing N (listwise)│ 0│ │ │ │ │
└────────────────────┴──┴─────┴───────┴───────┴───────┘
Split Values
┌─────────┬───────────┐
│Variable │ Value │
├─────────┼───────────┤
│treatment│traditional│
└─────────┴───────────┘
Descriptive Statistics
┌────────────────────┬──┬─────┬───────┬───────┬───────┐
│ │ N│ Mean│Std Dev│Minimum│Maximum│
├────────────────────┼──┼─────┼───────┼───────┼───────┤
│yield │30│56.87│ 8.88│ 39.08│ 75.93│
│Valid N (listwise) │30│ │ │ │ │
│Missing N (listwise)│ 0│ │ │ │ │
└────────────────────┴──┴─────┴───────┴───────┴───────┘
Example 13.3: The results of running DESCRIPTIVES with an active split
Unless TEMPORARY was used, after a split has been defined for a
dataset it remains active until explicitly disabled.
¹ There are other, possibly better, ways to achieve a similar result
using the MEANS or EXAMINE commands.
TEMPORARY
TEMPORARY.
TEMPORARY is used to make the effects of transformations following
its execution temporary. These transformations affect only the
execution of the next procedure or procedure-like command. Their
effects are not saved to the active dataset.
The only specification on TEMPORARY is the command name.
TEMPORARY may not appear within a DO IF or LOOP construct. It
may appear only once between procedures and procedure-like commands.
Scratch variables cannot be used following TEMPORARY.
Example
In the syntax below, there are two COMPUTE transformations. One of
them immediately follows a TEMPORARY command, and therefore affects
only the next procedure, which in this case is the first
DESCRIPTIVES command.
data list notable /x 1-2.
begin data.
2
4
10
15
20
24
end data.
compute x=x/2.
temporary.
compute x=x+3.
descriptives x.
descriptives x.
The data read by the first DESCRIPTIVES procedure are 4, 5, 8, 10.5,
13, 15. The data read by the second DESCRIPTIVES procedure are 1,
2, 5, 7.5, 10, 12. This is because the second COMPUTE
transformation has no effect on the second DESCRIPTIVES procedure.
You can check these figures in the following output.
Descriptive Statistics
┌────────────────────┬─┬────┬───────┬───────┬───────┐
│ │N│Mean│Std Dev│Minimum│Maximum│
├────────────────────┼─┼────┼───────┼───────┼───────┤
│x │6│9.25│ 4.38│ 4│ 15│
│Valid N (listwise) │6│ │ │ │ │
│Missing N (listwise)│0│ │ │ │ │
└────────────────────┴─┴────┴───────┴───────┴───────┘
Descriptive Statistics
┌────────────────────┬─┬────┬───────┬───────┬───────┐
│ │N│Mean│Std Dev│Minimum│Maximum│
├────────────────────┼─┼────┼───────┼───────┼───────┤
│x │6│6.25│ 4.38│ 1│ 12│
│Valid N (listwise) │6│ │ │ │ │
│Missing N (listwise)│0│ │ │ │ │
└────────────────────┴─┴────┴───────┴───────┴───────┘
WEIGHT
WEIGHT BY VAR_NAME.
WEIGHT OFF.
WEIGHT assigns cases varying weights, changing the frequency
distribution of the active dataset. Execution of WEIGHT is delayed
until data have been read.
If a variable name is specified, WEIGHT causes the values of that
variable to be used as weighting factors for subsequent statistical
procedures. Use of keyword BY is optional but recommended.
Weighting variables must be numeric. Scratch variables may not be
used for weighting.
When OFF is specified, subsequent statistical procedures weight all
cases equally.
A positive integer weighting factor W on a case yields the same
statistical output as would replicating the case W times. A
weighting factor of 0 is treated for statistical purposes as if the
case did not exist in the input. Weighting values need not be
integers, but negative and system-missing values for the weighting
variable are interpreted as weighting factors of 0. User-missing
values are not treated specially.
When WEIGHT is specified after TEMPORARY, it
affects only the next procedure.
WEIGHT does not cause cases in the active dataset to be replicated
in memory.
Example
One could define a dataset containing an inventory of stock items. It would be reasonable to use a string variable for a description of the item, and a numeric variable for the number in stock, like in the syntax below.
data list notable list /item (a16) quantity (f8.0).
begin data
nuts 345
screws 10034
washers 32012
bolts 876
end data.
echo 'Unweighted frequency table'.
frequencies /variables = item /format=dfreq.
weight by quantity.
echo 'Weighted frequency table'.
frequencies /variables = item /format=dfreq.
One analysis which most surely would be of interest is the relative
amount of each item in stock. However, without setting a weight
variable, FREQUENCIES does not tell
us what we want to know, since there is only one case for each stock
item. The output below shows the difference between the weighted and
unweighted frequency tables.
Unweighted frequency table
item
┌─────────────┬─────────┬───────┬─────────────┬──────────────────┐
│ │Frequency│Percent│Valid Percent│Cumulative Percent│
├─────────────┼─────────┼───────┼─────────────┼──────────────────┤
│Valid bolts │ 1│ 25.0%│ 25.0%│ 25.0%│
│ nuts │ 1│ 25.0%│ 25.0%│ 50.0%│
│ screws │ 1│ 25.0%│ 25.0%│ 75.0%│
│ washers│ 1│ 25.0%│ 25.0%│ 100.0%│
├─────────────┼─────────┼───────┼─────────────┼──────────────────┤
│Total │ 4│ 100.0%│ │ │
└─────────────┴─────────┴───────┴─────────────┴──────────────────┘
Weighted frequency table
item
┌─────────────┬─────────┬───────┬─────────────┬──────────────────┐
│ │Frequency│Percent│Valid Percent│Cumulative Percent│
├─────────────┼─────────┼───────┼─────────────┼──────────────────┤
│Valid washers│ 32012│ 74.0%│ 74.0%│ 74.0%│
│ screws │ 10034│ 23.2%│ 23.2%│ 97.2%│
│ bolts │ 876│ 2.0%│ 2.0%│ 99.2%│
│ nuts │ 345│ .8%│ .8%│ 100.0%│
├─────────────┼─────────┼───────┼─────────────┼──────────────────┤
│Total │ 43267│ 100.0%│ │ │
└─────────────┴─────────┴───────┴─────────────┴──────────────────┘
Conditionals and Loops
This chapter documents PSPP commands used for conditional execution, looping, and flow of control.
BREAK
BREAK.
BREAK terminates execution of the innermost currently executing
LOOP construct.
BREAK is allowed only inside LOOP...END LOOP.
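A minimal sketch (using a hypothetical variable total and loop index
#i) shows BREAK leaving a loop early; here the loop body stops
executing once #i exceeds 5, so only 1 through 5 are accumulated:

compute total = 0.
loop #i = 1 to 10.
  do if #i > 5.
    break.
  end if.
  compute total = total + #i.
end loop.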
DEFINE…!ENDDEFINE
- Overview
- Introduction
- Macro Bodies
- Macro Arguments
- Controlling Macro Expansion
- Macro Functions
- Macro Expressions
- Macro Conditional Expansion
- Macro Loops
- Macro Variable Assignment
- Macro Settings
- Additional Notes
Overview
DEFINE macro_name([argument[/argument]...])
...body...
!ENDDEFINE.
Each argument takes the following form:
{!arg_name= | !POSITIONAL}
[!DEFAULT(default)]
[!NOEXPAND]
{!TOKENS(count) | !CHAREND('token') | !ENCLOSE('start','end') | !CMDEND}
The following directives may be used within body:
!OFFEXPAND
!ONEXPAND
The following functions may be used within the body:
!BLANKS(count)
!CONCAT(arg...)
!EVAL(arg)
!HEAD(arg)
!INDEX(haystack, needle)
!LENGTH(arg)
!NULL
!QUOTE(arg)
!SUBSTR(arg, start[, count])
!TAIL(arg)
!UNQUOTE(arg)
!UPCASE(arg)
The body may also include the following constructs:
!IF (condition) !THEN true-expansion !ENDIF
!IF (condition) !THEN true-expansion !ELSE false-expansion !ENDIF
!DO !var = start !TO end [!BY step]
body
!DOEND
!DO !var !IN (expression)
body
!DOEND
!LET !var = expression
Introduction
The DEFINE command creates a "macro", which is a name for a fragment of PSPP syntax called the macro's "body". Following the DEFINE command, syntax may "call" the macro by name any number of times. Each call substitutes, or "expands", the macro's body in place of the call, as if the body had been written in its place.
The following syntax defines a macro named !vars that expands to
the variable names v1 v2 v3. The macro's name begins with !, which
is optional for macro names. The () following the macro name are
required:
DEFINE !vars()
v1 v2 v3
!ENDDEFINE.
Here are two ways that !vars might be called given the preceding
definition:
DESCRIPTIVES !vars.
FREQUENCIES /VARIABLES=!vars.
With macro expansion, the above calls are equivalent to the following:
DESCRIPTIVES v1 v2 v3.
FREQUENCIES /VARIABLES=v1 v2 v3.
The !vars macro expands to a fixed body. Macros may have more
sophisticated contents:
- Macro "arguments" that are substituted into the body whenever they
  are named. The values of a macro's arguments are specified each
  time it is called.

- Macro "functions", expanded when the macro is called.

- !IF constructs, for conditional expansion.

- Two forms of !DO construct, for looping over a numerical range or a
  collection of tokens.

- !LET constructs, for assigning to macro variables.
Many identifiers associated with macros begin with !, a character
not normally allowed in identifiers. These identifiers are reserved
only for use with macros, which helps keep them from being confused with
other kinds of identifiers.
The following sections provide more details on macro syntax and semantics.
Macro Bodies
As previously shown, a macro body may contain a fragment of a PSPP command (such as a variable name). A macro body may also contain full PSPP commands. In the latter case, the macro body should also contain the command terminators.
Most PSPP commands may occur within a macro. The DEFINE command
itself is one exception, because the inner !ENDDEFINE ends the outer
macro definition. For compatibility, BEGIN DATA...END DATA.
should not be used within a macro.
The body of a macro may call another macro. The following shows one way that could work:
DEFINE !commands()
DESCRIPTIVES !vars.
FREQUENCIES /VARIABLES=!vars.
!ENDDEFINE.
* Initially define the 'vars' macro to analyze v1...v3.
DEFINE !vars() v1 v2 v3 !ENDDEFINE.
!commands
* Redefine 'vars' macro to analyze different variables.
DEFINE !vars() v4 v5 !ENDDEFINE.
!commands
The !commands macro would be easier to use if it took the variables
to analyze as an argument rather than through another macro. The
following section shows how to do that.
Macro Arguments
This section explains how to use macro arguments. As an initial
example, the following syntax defines a macro named !analyze that
takes all the syntax up to the first command terminator as an argument:
DEFINE !analyze(!POSITIONAL !CMDEND)
DESCRIPTIVES !1.
FREQUENCIES /VARIABLES=!1.
!ENDDEFINE.
When !analyze is called, it expands to a pair of analysis commands
with each !1 in the body replaced by the argument. That is, these
calls:
!analyze v1 v2 v3.
!analyze v4 v5.
act like the following:
DESCRIPTIVES v1 v2 v3.
FREQUENCIES /VARIABLES=v1 v2 v3.
DESCRIPTIVES v4 v5.
FREQUENCIES /VARIABLES=v4 v5.
Macros may take any number of arguments, described within the parentheses in the DEFINE command. Arguments come in two varieties based on how their values are specified when the macro is called:
- A "positional" argument has a required value that follows the
  macro's name. Use the !POSITIONAL keyword to declare a positional
  argument.

  When a macro is called, the positional argument values appear in
  the same order as their definitions, before any keyword argument
  values.

  References to a positional argument in a macro body are numbered:
  !1 is the first positional argument, !2 the second, and so on. In
  addition, !* expands to all of the positional arguments' values,
  separated by spaces.

  The following example uses a positional argument:

  DEFINE !analyze(!POSITIONAL !CMDEND)
    DESCRIPTIVES !1.
    FREQUENCIES /VARIABLES=!1.
  !ENDDEFINE.

  !analyze v1 v2 v3.
  !analyze v4 v5.

- A "keyword" argument has a name. In the macro call, its value is
  specified with the syntax name=value. The names allow keyword
  argument values to take any order in the call.

  In declarations and calls, a keyword argument's name may not begin
  with !, but references to it in the macro body do start with a
  leading !.

  The following example uses a keyword argument that defaults to ALL
  if the argument is not assigned a value:

  DEFINE !analyze_kw(vars=!DEFAULT(ALL) !CMDEND)
    DESCRIPTIVES !vars.
    FREQUENCIES /VARIABLES=!vars.
  !ENDDEFINE.

  !analyze_kw vars=v1 v2 v3.  /* Analyze specified variables.
  !analyze_kw.  /* Analyze all variables.
If a macro has both positional and keyword arguments, then the
positional arguments must come first in the DEFINE command, and their
values also come first in macro calls. A keyword argument may be
omitted by leaving its keyword out of the call, and a positional
argument may be omitted by putting a command terminator where it would
appear. (The latter case also omits any following positional
arguments and all keyword arguments, if there are any.) When an
argument is omitted, a default value is used: either the value
specified in !DEFAULT(value), or an empty value otherwise.
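For instance, given the earlier !analyze_kw definition, whose vars
argument declares !DEFAULT(ALL), the two calls below are equivalent
because omitting the keyword falls back to the default:

!analyze_kw vars=ALL.
!analyze_kw.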
Each argument declaration specifies the form of its value:
- !TOKENS(count)
  Exactly count tokens, e.g. !TOKENS(1) for a single token. Each
  identifier, number, quoted string, operator, or punctuator is a
  token (see Tokens for details).

  The following variant of !analyze_kw accepts only a single variable
  name (or ALL) as its argument:

  DEFINE !analyze_one_var(!POSITIONAL !TOKENS(1))
    DESCRIPTIVES !1.
    FREQUENCIES /VARIABLES=!1.
  !ENDDEFINE.

  !analyze_one_var v1.

- !CHAREND('TOKEN')
  Any number of tokens up to TOKEN, which should be an operator or
  punctuator token such as / or +. The TOKEN does not become part of
  the value.

  With the following variant of !analyze_kw, the variables must be
  followed by /:

  DEFINE !analyze_parens(vars=!CHAREND('/'))
    DESCRIPTIVES !vars.
    FREQUENCIES /VARIABLES=!vars.
  !ENDDEFINE.

  !analyze_parens vars=v1 v2 v3/.

- !ENCLOSE('START','END')
  Any number of tokens enclosed between START and END, which should
  each be operator or punctuator tokens. For example, use
  !ENCLOSE('(',')') for a value enclosed within parentheses. (Such a
  value could never have right parentheses inside it, even paired
  with left parentheses.) The start and end tokens are not part of
  the value.

  With the following variant of !analyze_kw, the variables must be
  specified within parentheses:

  DEFINE !analyze_parens(vars=!ENCLOSE('(',')'))
    DESCRIPTIVES !vars.
    FREQUENCIES /VARIABLES=!vars.
  !ENDDEFINE.

  !analyze_parens vars=(v1 v2 v3).

- !CMDEND
  Any number of tokens up to the end of the command. This should be
  used only for the last positional parameter, since it consumes all
  of the tokens in the command calling the macro.

  The following variant of !analyze_kw takes all the variable names
  up to the end of the command as its argument:

  DEFINE !analyze_kw(vars=!CMDEND)
    DESCRIPTIVES !vars.
    FREQUENCIES /VARIABLES=!vars.
  !ENDDEFINE.

  !analyze_kw vars=v1 v2 v3.
By default, when an argument's value contains a macro call, the call
is expanded each time the argument appears in the macro's body. The
!NOEXPAND keyword in an argument
declaration suppresses this expansion.
Controlling Macro Expansion
Multiple factors control whether macro calls are expanded in different
situations. At the highest level, SET MEXPAND controls whether
macro calls are expanded. By default, it is enabled. See SET
MEXPAND for details.
A macro body may contain macro calls. By default, these are expanded.
If a macro body contains !OFFEXPAND or !ONEXPAND directives, then
!OFFEXPAND disables expansion of macro calls until the following
!ONEXPAND.
A macro argument's value may contain a macro call. These macro calls
are expanded, unless the argument was declared with the !NOEXPAND
keyword.
The argument to a macro function is a special context that does not
expand macro calls. For example, if !vars is the name of a macro,
then !LENGTH(!vars) expands to 5, as does !LENGTH(!1) if
positional argument 1 has value !vars. To expand macros in these
cases, use the !EVAL macro function,
e.g. !LENGTH(!EVAL(!vars)) or !LENGTH(!EVAL(!1)).
These rules apply to macro calls, not to uses within a macro body of
macro functions, macro arguments, and macro variables created by !DO
or !LET, which are always expanded.
SET MEXPAND may appear within the body of a macro, but it will not
affect expansion of the macro that it appears in. Use !OFFEXPAND
and !ONEXPAND instead.
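As a minimal sketch (assuming some macro !foo has already been
defined), the body below yields !foo's expansion in the first and
last lines, but emits the literal token !foo in between, where
expansion is switched off:

DEFINE !demo()
  !foo
  !OFFEXPAND !foo !ONEXPAND
  !foo
!ENDDEFINE.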
Macro Functions
Macro bodies may manipulate syntax using macro functions. Macro functions accept tokens as arguments and expand to sequences of characters.
The arguments to macro functions have a restricted form. They may only be a single token (such as an identifier or a string), a macro argument, or a call to a macro function. Thus, the following are valid macro arguments:
x   5.0   x   !1   "5 + 6"   !CONCAT(x,y)
and the following are not (because they are each multiple tokens):
x y   5+6
Macro functions expand to sequences of characters. When these
character strings are processed further as character strings,
e.g. with !LENGTH, any character string is valid. When they are
interpreted as PSPP syntax, e.g. when the expansion becomes part of a
command, they need to be valid for that purpose. For example,
!UNQUOTE("It's") will yield an error if the expansion It's becomes
part of a PSPP command, because it contains unbalanced single quotes,
but !LENGTH(!UNQUOTE("It's")) expands to 4.
The following macro functions are available.

- `!BLANKS(count)`

  Expands to COUNT unquoted spaces, where COUNT is a nonnegative integer. Outside quotes, any positive number of spaces are equivalent; for a quoted string of spaces, use `!QUOTE(!BLANKS(COUNT))`. In the examples below, `_` stands in for a space to make the results visible.

  |Call|Expansion|
  |:-----|:--------|
  |`!BLANKS(0)`|(empty)|
  |`!BLANKS(1)`|`_`|
  |`!BLANKS(2)`|`__`|
  |`!QUOTE(!BLANKS(5))`|`'_____'`|

- `!CONCAT(arg...)`

  Expands to the concatenation of all of the arguments. Before concatenation, each quoted string argument is unquoted, as if `!UNQUOTE` were applied. This allows for "token pasting", combining two (or more) tokens into a single one:

  |Call|Expansion|
  |:-----|:--------|
  |`!CONCAT(x, y)`|`xy`|
  |`!CONCAT('x', 'y')`|`xy`|
  |`!CONCAT(12, 34)`|`1234`|
  |`!CONCAT(!NULL, 123)`|`123`|

  `!CONCAT` is often used for constructing a series of similar variable names from a prefix followed by a number and perhaps a suffix. For example:

  |Call|Expansion|
  |:-----|:--------|
  |`!CONCAT(x, 0)`|`x0`|
  |`!CONCAT(x, 0, y)`|`x0y`|

  An identifier token must begin with a letter (or `#` or `@`), which means that attempting to use a number as the first part of an identifier will produce a pair of distinct tokens rather than a single one. For example:

  |Call|Expansion|
  |:-----|:--------|
  |`!CONCAT(0, x)`|`0 x`|
  |`!CONCAT(0, x, y)`|`0 xy`|

- `!EVAL(arg)`

  Expands macro calls in ARG. This is especially useful if ARG is the name of a macro or a macro argument that expands to one, because arguments to macro functions are not expanded by default (see Controlling Macro Expansion).

  The following examples assume that `!vars` is a macro that expands to `a b c`:

  |Call|Expansion|
  |:-----|:--------|
  |`!vars`|`a b c`|
  |`!QUOTE(!vars)`|`'!vars'`|
  |`!EVAL(!vars)`|`a b c`|
  |`!QUOTE(!EVAL(!vars))`|`'a b c'`|

  These examples additionally assume that argument `!1` has value `!vars`:

  |Call|Expansion|
  |:-----|:--------|
  |`!1`|`a b c`|
  |`!QUOTE(!1)`|`'!vars'`|
  |`!EVAL(!1)`|`a b c`|
  |`!QUOTE(!EVAL(!1))`|`'a b c'`|

- `!HEAD(arg)`
- `!TAIL(arg)`

  `!HEAD` expands to just the first token in an unquoted version of ARG, and `!TAIL` to all the tokens after the first.

  |Call|Expansion|
  |:-----|:--------|
  |`!HEAD('a b c')`|`a`|
  |`!HEAD('a')`|`a`|
  |`!HEAD(!NULL)`|(empty)|
  |`!HEAD('')`|(empty)|
  |`!TAIL('a b c')`|`b c`|
  |`!TAIL('a')`|(empty)|
  |`!TAIL(!NULL)`|(empty)|
  |`!TAIL('')`|(empty)|

- `!INDEX(haystack, needle)`

  Looks for NEEDLE in HAYSTACK. If it is present, expands to the 1-based index of its first occurrence; if not, expands to 0.

  |Call|Expansion|
  |:-----|:--------|
  |`!INDEX(banana, an)`|`2`|
  |`!INDEX(banana, nan)`|`3`|
  |`!INDEX(banana, apple)`|`0`|
  |`!INDEX("banana", nan)`|`4`|
  |`!INDEX("banana", "nan")`|`0`|
  |`!INDEX(!UNQUOTE("banana"), !UNQUOTE("nan"))`|`3`|

- `!LENGTH(arg)`

  Expands to a number token representing the number of characters in ARG.

  |Call|Expansion|
  |:-----|:--------|
  |`!LENGTH(123)`|`3`|
  |`!LENGTH(123.00)`|`6`|
  |`!LENGTH( 123 )`|`3`|
  |`!LENGTH("123")`|`5`|
  |`!LENGTH(xyzzy)`|`5`|
  |`!LENGTH("xyzzy")`|`7`|
  |`!LENGTH("xy""zzy")`|`9`|
  |`!LENGTH(!UNQUOTE("xyzzy"))`|`5`|
  |`!LENGTH(!UNQUOTE("xy""zzy"))`|`6`|
  |`!LENGTH(!1)`|`5` (if `!1` is `a b c`)|
  |`!LENGTH(!1)`|`0` (if `!1` is empty)|
  |`!LENGTH(!NULL)`|`0`|

- `!NULL`

  Expands to an empty character sequence.

  |Call|Expansion|
  |:-----|:--------|
  |`!NULL`|(empty)|
  |`!QUOTE(!NULL)`|`''`|

- `!QUOTE(arg)`
- `!UNQUOTE(arg)`

  The `!QUOTE` function expands to its argument surrounded by apostrophes, doubling any apostrophes inside the argument to make sure that it is valid PSPP syntax for a string. If the argument was already a quoted string, `!QUOTE` expands to it unchanged.

  Given a quoted string argument, the `!UNQUOTE` function expands to the string's contents, with the quotes removed and any doubled quote marks reduced to singletons. If the argument was not a quoted string, `!UNQUOTE` expands to the argument unchanged.

  |Call|Expansion|
  |:-----|:--------|
  |`!QUOTE(123.0)`|`'123.0'`|
  |`!QUOTE( 123 )`|`'123'`|
  |`!QUOTE('a b c')`|`'a b c'`|
  |`!QUOTE("a b c")`|`"a b c"`|
  |`!QUOTE(!1)`|`'a ''b'' c'` (if `!1` is `a 'b' c`)|
  |`!UNQUOTE(123.0)`|`123.0`|
  |`!UNQUOTE( 123 )`|`123`|
  |`!UNQUOTE('a b c')`|`a b c`|
  |`!UNQUOTE("a b c")`|`a b c`|
  |`!UNQUOTE(!1)`|`a 'b' c` (if `!1` is `a 'b' c`)|
  |`!QUOTE(!UNQUOTE(123.0))`|`'123.0'`|
  |`!QUOTE(!UNQUOTE( 123 ))`|`'123'`|
  |`!QUOTE(!UNQUOTE('a b c'))`|`'a b c'`|
  |`!QUOTE(!UNQUOTE("a b c"))`|`'a b c'`|
  |`!QUOTE(!UNQUOTE(!1))`|`'a ''b'' c'` (if `!1` is `a 'b' c`)|

- `!SUBSTR(arg, start[, count])`

  Expands to a substring of ARG starting from 1-based position START. If COUNT is given, it limits the number of characters in the expansion; if it is omitted, then the expansion extends to the end of ARG.

  |Call|Expansion|
  |:-----|:--------|
  |`!SUBSTR(banana, 3)`|`nana`|
  |`!SUBSTR(banana, 3, 3)`|`nan`|
  |`!SUBSTR("banana", 1, 3)`|error (`"ba` is not a valid token)|
  |`!SUBSTR(!UNQUOTE("banana"), 3)`|`nana`|
  |`!SUBSTR("banana", 3, 3)`|`ana`|
  |`!SUBSTR(banana, 3, 0)`|(empty)|
  |`!SUBSTR(banana, 3, 10)`|`nana`|
  |`!SUBSTR(banana, 10, 3)`|(empty)|

- `!UPCASE(arg)`

  Expands to an unquoted version of ARG with all letters converted to uppercase.

  |Call|Expansion|
  |:-----|:--------|
  |`!UPCASE(freckle)`|`FRECKLE`|
  |`!UPCASE('freckle')`|`FRECKLE`|
  |`!UPCASE('a b c')`|`A B C`|
  |`!UPCASE('A B C')`|`A B C`|
Macro Expressions
Macro expressions are used in conditional expansion and loops, which are described in the following sections. A macro expression may use the following operators, listed in descending order of operator precedence:
- `()`

  Parentheses override the default operator precedence.

- `!EQ !NE !GT !LT !GE !LE = ~= <> > < >= <=`

  Relational operators compare their operands and yield a Boolean result, either `0` for false or `1` for true.

  These operators always compare their operands as strings. This can be surprising when the strings are numbers because, e.g., `1 < 1.0` and `10 < 2` both evaluate to `1` (true).

  Comparisons are case sensitive, so that `a = A` evaluates to `0` (false).

- `!NOT ~`

- `!AND &`

- `!OR |`

  Logical operators interpret their operands as Boolean values, where quoted or unquoted `0` is false and anything else is true, and yield a Boolean result, either `0` for false or `1` for true.
Macro expressions do not include any arithmetic operators.
An operand in an expression may be a single token (including a macro
argument name) or a macro function invocation. Either way, the
expression evaluator unquotes the operand, so that 1 = '1' is true.
Macro Conditional Expansion
The !IF construct may be used inside a macro body to allow for
conditional expansion. It takes the following forms:
!IF (EXPRESSION) !THEN TRUE-EXPANSION !IFEND
!IF (EXPRESSION) !THEN TRUE-EXPANSION !ELSE FALSE-EXPANSION !IFEND
When EXPRESSION evaluates to true, the macro processor expands
TRUE-EXPANSION; otherwise, it expands FALSE-EXPANSION, if it is
present. The macro processor considers quoted or unquoted 0 to be
false, and anything else to be true.
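As an illustration, the following sketch defines a hypothetical macro whose expansion depends on its argument (the variable name v1 is illustrative):

```
DEFINE !stats(!POSITIONAL !TOKENS(1))
!IF (!1 = full) !THEN
DESCRIPTIVES v1 /STATISTICS=ALL.
!ELSE
DESCRIPTIVES v1.
!IFEND
!ENDDEFINE.

!stats full.    /* Expands to DESCRIPTIVES v1 /STATISTICS=ALL. */
```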
Macro Loops
The body of a macro may include two forms of loops: loops over numerical ranges and loops over tokens. Both forms expand a "loop body" multiple times, each time setting a named "loop variable" to a different value. The loop body typically expands the loop variable at least once.
The MITERATE setting limits the number of
iterations in a loop. This is a safety measure to ensure that macro
expansion terminates. PSPP issues a warning when the MITERATE limit is
exceeded.
Loops Over Ranges
!DO !VAR = START !TO END [!BY STEP]
BODY
!DOEND
A loop over a numerical range has the form shown above. START,
END, and STEP (if included) must be expressions with numeric
values. The macro processor accepts both integers and real numbers.
The macro processor expands BODY for each numeric value from START
to END, inclusive.
The default value for STEP is 1. If STEP is positive and START > END, or if STEP is negative and START < END, then the macro
processor doesn't expand the body at all. STEP may not be zero.
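For example, this sketch uses a range loop together with !CONCAT to generate a series of COMPUTE commands (the variable names are illustrative):

```
DEFINE !squares()
!DO !i = 1 !TO 3
COMPUTE !CONCAT(sq, !i) = !i * !i.
!DOEND
!ENDDEFINE.

!squares.    /* Expands to COMPUTE sq1 = 1 * 1. and so on through sq3. */
```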
Loops Over Tokens
!DO !VAR !IN (EXPRESSION)
BODY
!DOEND
A loop over tokens takes the form shown above. The macro processor
evaluates EXPRESSION and expands BODY once per token in the
result, substituting the token for !VAR each time it appears.
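For example, this sketch expands a FREQUENCIES command once per token in its argument (the names are illustrative):

```
DEFINE !freq_each(vars=!CMDEND)
!DO !v !IN (!vars)
FREQUENCIES /VARIABLES=!v.
!DOEND
!ENDDEFINE.

!freq_each vars=a b c.    /* One FREQUENCIES command each for a, b, and c. */
```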
Macro Variable Assignment
The !LET construct evaluates an expression and assigns the result to a
macro variable. It may create a new macro variable or change the value
of one created by a previous !LET or !DO, but it may not change the
value of a macro argument. !LET has the following form:
!LET !VAR = EXPRESSION
If EXPRESSION is more than one token, it must be enclosed in
parentheses.
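For example (a sketch; the names are illustrative):

```
DEFINE !demo()
!LET !prefix = sc
!LET !name = !CONCAT(!prefix, 01)    /* !name is now sc01. */
!LET !ok = (!LENGTH(!name) = 4)      /* Multi-token expression in parentheses; !ok is 1. */
COMPUTE !name = 0.
!ENDDEFINE.
```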
Macro Settings
The SET command controls some macro behavior. This
section describes these settings.
Any SET command that changes these settings within a macro body only
takes effect following the macro. This is because PSPP expands a
macro's entire body at once, so that SET inside the body only
executes afterwards.
The MEXPAND setting controls whether
macros will be expanded at all. By default, macro expansion is on.
To avoid expansion of macros called within a macro body, use
!OFFEXPAND and !ONEXPAND.
When MPRINT is turned on, PSPP outputs
an expansion of each macro called. This feature can be useful for
debugging macro definitions. For reading the expanded version, keep
in mind that macro expansion removes comments and standardizes white
space.
MNEST limits the depth of expansion of
macro calls, that is, the nesting level of macro expansion. The
default is 50. This is mainly useful to avoid infinite expansion in
the case of a macro that calls itself.
MITERATE limits the number of
iterations in a !DO construct. The default is 1000.
Additional Notes
Calling Macros from Macros
If the body of macro A includes a call to macro B, the call can use
macro arguments (including !*) and macro variables as part of
arguments to B. For !TOKENS arguments, the argument or variable name
counts as one token regardless of the number that it expands into; for
!CHAREND and !ENCLOSE arguments, the delimiters come only from the
call, not the expansions; and !CMDEND ends at the calling command, not
any end of command within an argument or variable.
Macro functions are not supported as part of the arguments in a macro
call. To get the same effect, use !LET to define a macro variable,
then pass the macro variable to the macro.
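A sketch of this workaround, with hypothetical macros !inner and !outer:

```
DEFINE !inner(!POSITIONAL !TOKENS(1))
DESCRIPTIVES !1.
!ENDDEFINE.

DEFINE !outer(!POSITIONAL !TOKENS(1))
!LET !v = !UPCASE(!1)    /* Store the function result in a macro variable... */
!inner !v                /* ...then pass the variable in the macro call. */
!ENDDEFINE.
```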
When macro A calls macro B, the order of their DEFINE commands
doesn't matter, as long as macro B has been defined when A is called.
Command Terminators
Macros and command terminators require care. Macros honor the syntax
differences between interactive and batch
syntax, which means that the
interpretation of a macro can vary depending on the syntax mode in
use. We assume here that interactive mode is in use, in which . at
the end of a line is the primary way to end a command.
The DEFINE command needs to end with . following the !ENDDEFINE.
The macro body may contain . if it is intended to expand to whole
commands, but using . within a macro body that expands to just
syntax fragments (such as a list of variables) will cause syntax
errors.
Macro directives such as !IF and !DO do not end with ..
Expansion Contexts
PSPP does not expand macros within comments, whether introduced within
a line by /* or as a separate COMMENT or
* command. (SPSS does expand macros in
COMMENT and *.)
Macros do not expand within quoted strings.
Macros are expanded in the TITLE and
SUBTITLE commands as long as their
arguments are not quoted strings.
PRESERVE and RESTORE
Some macro bodies might use the SET command to change
certain settings. When this is the case, consider using the
PRESERVE and RESTORE commands to save and then
restore these settings.
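For example (a sketch; the setting and variable name are illustrative):

```
DEFINE !setup()
PRESERVE.
SET DECIMAL=COMMA.    /* Change a setting for the commands that follow... */
DESCRIPTIVES v1.
RESTORE.              /* ...then restore the previous settings. */
!ENDDEFINE.
```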
DO IF…END IF
DO IF condition.
...
[ELSE IF condition.
...
]...
[ELSE.
...]
END IF.
DO IF allows one of several sets of transformations to be executed,
depending on user-specified conditions.
If the specified boolean expression evaluates as true, then the block
of code following DO IF is executed. If it evaluates as missing,
then none of the code blocks is executed. If it is false, then the
boolean expression on the first ELSE IF, if present, is tested in
turn, with the same rules applied. If all expressions evaluate to
false, then the ELSE code block is executed, if it is present.
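For example, this sketch assigns a category based on a hypothetical variable age:

```
DO IF age >= 18.
COMPUTE group = 1.
ELSE IF age >= 13.
COMPUTE group = 2.
ELSE.
COMPUTE group = 3.
END IF.
```

If age is missing, none of the COMPUTE commands is executed.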
When DO IF or ELSE IF is specified following
TEMPORARY, the
LAG function
may not be used.
DO REPEAT…END REPEAT
DO REPEAT dummy_name=expansion....
...
END REPEAT [PRINT].
expansion takes one of the following forms:
var_list
num_or_range...
'string'...
ALL
num_or_range takes one of the following forms:
number
num1 TO num2
DO REPEAT repeats a block of code, textually substituting different
variables, numbers, or strings into the block with each repetition.
Specify a dummy variable name followed by an equals sign (=) and
the list of replacements. Replacements can be a list of existing or new
variables, numbers, strings, or ALL to specify all existing variables.
When numbers are specified, runs of increasing integers may be indicated
as NUM1 TO NUM2, so that 1 TO 5 is short for 1 2 3 4 5.
Multiple dummy variables can be specified. Each variable must have the same number of replacements.
The code within DO REPEAT is repeated as many times as there are
replacements for each variable. The first time, the first value for
each dummy variable is substituted; the second time, the second value
for each dummy variable is substituted; and so on.
Dummy variable substitutions work like macros. They take place
anywhere in a line that the dummy variable name occurs. This includes
command and subcommand names, so command and subcommand names that
appear in the code block should not be used as dummy variable
identifiers. Dummy variable substitutions do not occur inside quoted
strings, comments, unquoted strings (such as the text on the TITLE
or DOCUMENT command), or inside BEGIN DATA...END DATA.
Substitution occurs only on whole words, so that, for example, a dummy
variable PRINT would not be substituted into the word PRINTOUT.
New variable names used as replacements are not automatically created
as variables, but only if used in the code block in a context that
would create them, e.g. on a NUMERIC or STRING command or on the
left side of a COMPUTE assignment.
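For example, this sketch (with illustrative names) creates and initializes three new variables:

```
DO REPEAT v = v1 v2 v3 / n = 1 2 3.
COMPUTE v = n.
END REPEAT.
```

This is equivalent to writing COMPUTE v1 = 1., COMPUTE v2 = 2., and COMPUTE v3 = 3.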
Any command may appear within DO REPEAT, including nested DO REPEAT commands. If INCLUDE or INSERT appears within DO REPEAT, the substitutions do not apply to the included file.
If PRINT is specified on END REPEAT, the commands after
substitution should be printed to the listing file, prefixed
by a plus sign (+). This feature is not yet implemented.
LOOP…END LOOP
LOOP [INDEX_VAR=START TO END [BY INCR]] [IF CONDITION].
...
END LOOP [IF CONDITION].
LOOP iterates a group of commands. A number of termination options
are offered.
Specify INDEX_VAR to make that variable count from one value to
another by a particular increment. INDEX_VAR must be a pre-existing
numeric variable. START, END, and INCR are numeric
expressions.
During the first iteration, INDEX_VAR is set to the value of
START. During each successive iteration, INDEX_VAR is increased
by the value of INCR. If END > START, then the loop terminates
when INDEX_VAR > END; otherwise it terminates when INDEX_VAR < END. If INCR is not specified then it defaults to +1 or -1 as
appropriate.
If END > START and INCR < 0, or if END < START and INCR > 0,
then the loop is never executed. INDEX_VAR is nevertheless set to
the value of START.
Modifying INDEX_VAR within the loop is allowed, but it has no effect
on the value of INDEX_VAR in the next iteration.
Specify a boolean expression for the condition on LOOP to cause the
loop to be executed only if the condition is true. If the condition
is false or missing before the loop contents are executed the first
time, the loop contents are not executed at all.
If index and condition clauses are both present on LOOP, the index
variable is always set before the condition is evaluated. Thus, a
condition that makes use of the index variable will always see the index
value to be used in the next execution of the body.
Specify a boolean expression for the condition on END LOOP to cause
the loop to terminate if the condition is true after the enclosed code
block is executed. The condition is evaluated at the end of the loop,
not at the beginning, so that the body of a loop with only a condition
on END LOOP will always execute at least once.
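For example, in this sketch the body executes ten times, because the condition is tested only after each pass (x is an illustrative variable):

```
COMPUTE x = 0.
LOOP.
COMPUTE x = x + 1.
END LOOP IF x >= 10.    /* Tested after the body, so the body runs at least once. */
```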
If the index clause is not present, then the global
MXLOOPS setting, which defaults to
40, limits the number of iterations.
BREAK also terminates LOOP execution.
By default, loop index variables are reset to system-missing from one
case to the next, rather than retained, unless a scratch variable is used as the index.
When loops are nested, this is usually undesired behavior, which can
be corrected with LEAVE or by using a scratch
variable as the loop
index.
When LOOP or END LOOP is specified following
TEMPORARY, the
LAG function
may not be used.
Statistics
This chapter documents the statistical procedures that PSPP supports.
DESCRIPTIVES
DESCRIPTIVES
/VARIABLES=VAR_LIST
/MISSING={VARIABLE,LISTWISE} {INCLUDE,NOINCLUDE}
/FORMAT={LABELS,NOLABELS} {NOINDEX,INDEX} {LINE,SERIAL}
/SAVE
/STATISTICS={ALL,MEAN,SEMEAN,STDDEV,VARIANCE,KURTOSIS,
SKEWNESS,RANGE,MINIMUM,MAXIMUM,SUM,DEFAULT,
SESKEWNESS,SEKURTOSIS}
/SORT={NONE,MEAN,SEMEAN,STDDEV,VARIANCE,KURTOSIS,SKEWNESS,
RANGE,MINIMUM,MAXIMUM,SUM,SESKEWNESS,SEKURTOSIS,NAME}
{A,D}
The DESCRIPTIVES procedure reads the active dataset and outputs
linear descriptive statistics requested by the user. It can also
compute Z-scores.
The VARIABLES subcommand, which is required, specifies the list of
variables to be analyzed. Keyword VARIABLES is optional.
All other subcommands are optional:
The MISSING subcommand determines the handling of missing values.
If INCLUDE is set, then user-missing values are included in the
calculations. If NOINCLUDE is set, which is the default,
user-missing values are excluded. If VARIABLE is set, then missing
values are excluded on a variable by variable basis; if LISTWISE is
set, then the entire case is excluded whenever any value in that case
has a system-missing or, if INCLUDE is set, user-missing value.
The FORMAT subcommand has no effect. It is accepted for backward
compatibility.
The SAVE subcommand causes DESCRIPTIVES to calculate Z scores for
all the specified variables. The Z scores are saved to new variables.
Variable names are generated by trying first the original variable
name with Z prepended and truncated to a maximum of 8 characters, then
the names ZSC000 through ZSC999, STDZ00 through STDZ09,
ZZZZ00 through ZZZZ09, ZQZQ00 through ZQZQ09, in that order.
Z-score variable names may also be specified explicitly on VARIABLES
in the variable list by enclosing them in parentheses after each
variable. When Z scores are calculated, PSPP ignores
TEMPORARY, treating temporary transformations as
permanent.
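For example, explicit Z-score names can be supplied in parentheses after each variable (a sketch; the variable names are illustrative):

```
DESCRIPTIVES /VARIABLES=height (zhgt) weight (zwgt) /SAVE.
```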
The STATISTICS subcommand specifies the statistics to be displayed:
- ALL: All of the statistics below.
- MEAN: Arithmetic mean.
- SEMEAN: Standard error of the mean.
- STDDEV: Standard deviation.
- VARIANCE: Variance.
- KURTOSIS: Kurtosis and standard error of the kurtosis.
- SKEWNESS: Skewness and standard error of the skewness.
- RANGE: Range.
- MINIMUM: Minimum value.
- MAXIMUM: Maximum value.
- SUM: Sum.
- DEFAULT: Mean, standard deviation of the mean, minimum, maximum.
- SEKURTOSIS: Standard error of the kurtosis.
- SESKEWNESS: Standard error of the skewness.
The SORT subcommand specifies how the statistics should be sorted.
Most of the possible values should be self-explanatory. NAME causes
the statistics to be sorted by name. By default, the statistics are
listed in the order that they are specified on the VARIABLES
subcommand. The A and D settings request an ascending or
descending sort order, respectively.
Example
The physiology.sav file contains various physiological data for a
sample of persons. Running the DESCRIPTIVES command on the
variables height and temperature with the default options allows one
to see simple linear statistics for these two variables. In the
example below, these variables are specified on the VARIABLES
subcommand and the SAVE option has been used to request that Z
scores be calculated.
After the command completes, this example runs DESCRIPTIVES again,
this time on the zheight and ztemperature variables, which are the two
normalized (Z-score) variables generated by the first DESCRIPTIVES
command.
get file='physiology.sav'.
descriptives
/variables = height temperature
/save.
descriptives
/variables = zheight ztemperature.
In the output below, we can see that there are 40 valid observations for each of the variables and no missing values. The mean height is 1677.12 and the mean temperature is 37.02. The descriptive statistics for temperature seem reasonable. However, there is a very high standard deviation for height and a suspiciously low minimum. This is due to a data entry error.
In the second Descriptive Statistics output, one can see that the mean and standard deviation of both Z-score variables are 0 and 1 respectively. All Z-score statistics should have these properties, since they are normalized versions of the original scores.
Mapping of Variables to Z-scores
┌────────────────────────────────────────────┬────────────┐
│ Source │ Target │
├────────────────────────────────────────────┼────────────┤
│Height in millimeters │Zheight │
│Internal body temperature in degrees Celcius│Ztemperature│
└────────────────────────────────────────────┴────────────┘
Descriptive Statistics
┌──────────────────────────────────────────┬──┬───────┬───────┬───────┬───────┐
│ │ N│ Mean │Std Dev│Minimum│Maximum│
├──────────────────────────────────────────┼──┼───────┼───────┼───────┼───────┤
│Height in millimeters │40│1677.12│ 262.87│ 179│ 1903│
│Internal body temperature in degrees │40│ 37.02│ 1.82│ 32.59│ 39.97│
│Celcius │ │ │ │ │ │
│Valid N (listwise) │40│ │ │ │ │
│Missing N (listwise) │ 0│ │ │ │ │
└──────────────────────────────────────────┴──┴───────┴───────┴───────┴───────┘
Descriptive Statistics
┌─────────────────────────────────────────┬──┬─────────┬──────┬───────┬───────┐
│ │ │ │ Std │ │ │
│ │ N│ Mean │ Dev │Minimum│Maximum│
├─────────────────────────────────────────┼──┼─────────┼──────┼───────┼───────┤
│Z─score of Height in millimeters │40│1.93E─015│ 1.00│ ─5.70│ .86│
│Z─score of Internal body temperature in │40│1.37E─015│ 1.00│ ─2.44│ 1.62│
│degrees Celcius │ │ │ │ │ │
│Valid N (listwise) │40│ │ │ │ │
│Missing N (listwise) │ 0│ │ │ │ │
└─────────────────────────────────────────┴──┴─────────┴──────┴───────┴───────┘
FREQUENCIES
FREQUENCIES
/VARIABLES=VAR_LIST
/FORMAT={TABLE,NOTABLE,LIMIT(LIMIT)}
{AVALUE,DVALUE,AFREQ,DFREQ}
/MISSING={EXCLUDE,INCLUDE}
/STATISTICS={DEFAULT,MEAN,SEMEAN,MEDIAN,MODE,STDDEV,VARIANCE,
KURTOSIS,SKEWNESS,RANGE,MINIMUM,MAXIMUM,SUM,
SESKEWNESS,SEKURTOSIS,ALL,NONE}
/NTILES=NTILES
/PERCENTILES=percent...
/HISTOGRAM=[MINIMUM(X_MIN)] [MAXIMUM(X_MAX)]
[{FREQ[(Y_MAX)],PERCENT[(Y_MAX)]}] [{NONORMAL,NORMAL}]
/PIECHART=[MINIMUM(X_MIN)] [MAXIMUM(X_MAX)]
[{FREQ,PERCENT}] [{NOMISSING,MISSING}]
/BARCHART=[MINIMUM(X_MIN)] [MAXIMUM(X_MAX)]
[{FREQ,PERCENT}]
/ORDER={ANALYSIS,VARIABLE}
(These options are not currently implemented.)
/HBAR=...
/GROUPED=...
The FREQUENCIES procedure outputs frequency tables for specified
variables. FREQUENCIES can also calculate and display descriptive
statistics (including median and mode) and percentiles, and various
graphical representations of the frequency distribution.
The VARIABLES subcommand is the only required subcommand. Specify
the variables to be analyzed.
The FORMAT subcommand controls the output format. It has several
possible settings:
- `TABLE`, the default, causes a frequency table to be output for every variable specified. `NOTABLE` prevents them from being output. `LIMIT` with a numeric argument causes them to be output except when there are more than the specified number of values in the table.

- Normally frequency tables are sorted in ascending order by value. This is `AVALUE`. `DVALUE` tables are sorted in descending order by value. `AFREQ` and `DFREQ` tables are sorted in ascending and descending order, respectively, by frequency count.
The MISSING subcommand controls the handling of user-missing values.
When EXCLUDE, the default, is set, user-missing values are not
included in frequency tables or statistics. When INCLUDE is set,
user-missing values are included. System-missing values are never included
in statistics, but are listed in frequency tables.
The available STATISTICS are the same as available in
DESCRIPTIVES, with the addition of MEDIAN, the
data's median value, and MODE, the mode. (If there are multiple
modes, the smallest value is reported.) By default, the mean,
standard deviation of the mean, minimum, and maximum are reported for
each variable.
PERCENTILES causes the specified percentiles to be reported. The
percentiles should be specified as a list of numbers between 0 and 100
inclusive. The NTILES subcommand causes the percentiles to be
reported at the boundaries of the data set divided into the specified
number of ranges. For instance, /NTILES=4 would cause quartiles to
be reported.
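For example, either of the following requests quartiles (score is an illustrative variable):

```
FREQUENCIES /VARIABLES=score /PERCENTILES=25 50 75.
FREQUENCIES /VARIABLES=score /NTILES=4.
```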
The HISTOGRAM subcommand causes the output to include a histogram
for each specified numeric variable. The X axis by default ranges
from the minimum to the maximum value observed in the data, but the
MINIMUM and MAXIMUM keywords can set an explicit range.1
Histograms are not created for string variables.
Specify NORMAL to superimpose a normal curve on the histogram.
The PIECHART subcommand adds a pie chart for each variable to the
output. Each slice represents one value, with the size of the slice
proportional to the value's frequency. By default, all non-missing
values are given slices. The MINIMUM and MAXIMUM keywords can be
used to limit the displayed slices to a given range of values. The
keyword NOMISSING causes missing values to be omitted from the
pie chart. This is the default. If instead MISSING is specified,
then the pie chart includes a single slice representing all system
missing and user-missing cases.
The BARCHART subcommand produces a bar chart for each variable.
The MINIMUM and MAXIMUM keywords can be used to omit categories
whose counts lie outside the specified limits. The FREQ option
(default) causes the ordinate to display the frequency of each category,
whereas the PERCENT option displays relative percentages.
The FREQ and PERCENT options on HISTOGRAM and PIECHART are
accepted but not currently honoured.
The ORDER subcommand is accepted but ignored.
Example
The syntax below runs a frequency analysis on the sex and occupation
variables from the personnel.sav file. This is useful to get a
general idea of the way in which these nominal variables are
distributed.
get file='personnel.sav'.
frequencies /variables = sex occupation
/statistics = none.
If you are using the graphical user interface, the dialog box is set up such that, by default, several statistics are calculated. Some are not particularly useful for categorical variables, so you may want to disable those.
From the output, shown below, it is evident that there are 33 males, 21 females and 2 persons for whom their sex has not been entered.
One can also see how many of each occupation there are in the data. When dealing with string variables used as nominal values, running a frequency analysis is useful for detecting data entry errors. Notice that one occupation value has been mistyped as "Scrientist". This entry should be corrected, or marked as missing, before using the data.
sex
┌──────────────┬─────────┬───────┬─────────────┬──────────────────┐
│ │Frequency│Percent│Valid Percent│Cumulative Percent│
├──────────────┼─────────┼───────┼─────────────┼──────────────────┤
│Valid Male │ 33│ 58.9%│ 61.1%│ 61.1%│
│ Female│ 21│ 37.5%│ 38.9%│ 100.0%│
├──────────────┼─────────┼───────┼─────────────┼──────────────────┤
│Missing . │ 2│ 3.6%│ │ │
├──────────────┼─────────┼───────┼─────────────┼──────────────────┤
│Total │ 56│ 100.0%│ │ │
└──────────────┴─────────┴───────┴─────────────┴──────────────────┘
occupation
┌────────────────────────┬─────────┬───────┬─────────────┬──────────────────┐
│ │Frequency│Percent│Valid Percent│Cumulative Percent│
├────────────────────────┼─────────┼───────┼─────────────┼──────────────────┤
│Valid Artist │ 8│ 14.3%│ 14.3%│ 14.3%│
│ Baker │ 2│ 3.6%│ 3.6%│ 17.9%│
│ Barrister │ 1│ 1.8%│ 1.8%│ 19.6%│
│ Carpenter │ 4│ 7.1%│ 7.1%│ 26.8%│
│ Cleaner │ 4│ 7.1%│ 7.1%│ 33.9%│
│ Cook │ 7│ 12.5%│ 12.5%│ 46.4%│
│ Manager │ 8│ 14.3%│ 14.3%│ 60.7%│
│ Mathematician │ 4│ 7.1%│ 7.1%│ 67.9%│
│ Painter │ 2│ 3.6%│ 3.6%│ 71.4%│
│ Payload Specialist│ 1│ 1.8%│ 1.8%│ 73.2%│
│ Plumber │ 5│ 8.9%│ 8.9%│ 82.1%│
│ Scientist │ 7│ 12.5%│ 12.5%│ 94.6%│
│ Scrientist │ 1│ 1.8%│ 1.8%│ 96.4%│
│ Tailor │ 2│ 3.6%│ 3.6%│ 100.0%│
├────────────────────────┼─────────┼───────┼─────────────┼──────────────────┤
│Total │ 56│ 100.0%│ │ │
└────────────────────────┴─────────┴───────┴─────────────┴──────────────────┘
-
The number of bins is chosen according to the Freedman-Diaconis rule: $$2 \times IQR(x) \times n^{-1/3}$$ where \(IQR(x)\) is the interquartile range of \(x\) and \(n\) is the number of samples. (`EXAMINE` uses a different algorithm to determine bin sizes.) ↩
EXAMINE
EXAMINE
VARIABLES= VAR1 [VAR2] ... [VARN]
[BY FACTOR1 [BY SUBFACTOR1]
[ FACTOR2 [BY SUBFACTOR2]]
...
[ FACTOR3 [BY SUBFACTOR3]]
]
/STATISTICS={DESCRIPTIVES, EXTREME[(N)], ALL, NONE}
/PLOT={BOXPLOT, NPPLOT, HISTOGRAM, SPREADLEVEL[(T)], ALL, NONE}
/CINTERVAL P
/COMPARE={GROUPS,VARIABLES}
/ID=IDENTITY_VARIABLE
/{TOTAL,NOTOTAL}
/PERCENTILE=[PERCENTILES]={HAVERAGE, WAVERAGE, ROUND, AEMPIRICAL, EMPIRICAL }
/MISSING={LISTWISE, PAIRWISE} [{EXCLUDE, INCLUDE}]
[{NOREPORT,REPORT}]
EXAMINE is used to perform exploratory data analysis. In
particular, it is useful for testing how closely a distribution
follows a normal distribution, and for finding outliers and extreme
values.
The VARIABLES subcommand is mandatory. It specifies the dependent
variables and optionally variables to use as factors for the analysis.
Variables listed before the first BY keyword (if any) are the
dependent variables. The dependent variables may optionally be followed
by a list of factors which tell PSPP how to break down the analysis for
each dependent variable.
Following the dependent variables, factors may be specified. The
factors (if desired) should be preceded by a single BY keyword. The
format for each factor is FACTORVAR [BY SUBFACTORVAR]. Each unique
combination of the values of FACTORVAR and SUBFACTORVAR divide the
dataset into "cells". Statistics are calculated for each cell and for
the entire dataset (unless NOTOTAL is given).
The STATISTICS subcommand specifies which statistics to show.
DESCRIPTIVES produces a table showing some parametric and
non-parametric statistics. EXTREME produces a table showing the
extremities of each cell. A number in parentheses determines how many
upper and lower extremities to show. The default number is 5.
The subcommands TOTAL and NOTOTAL are mutually exclusive. If
TOTAL appears, then statistics for the entire dataset as well as for
each cell are produced. If NOTOTAL appears, then statistics are
produced only for the cells (unless no factor variables have been
given). These subcommands have no effect if there have been no factor
variables specified.
The PLOT subcommand specifies which plots are to be produced if
any. Available plots are HISTOGRAM, NPPLOT, BOXPLOT and
SPREADLEVEL. The first three can be used to visualise how closely
each cell conforms to a normal distribution, whilst the spread vs. level
plot can be useful to visualise how the variance differs between
factors. Boxplots show you the outliers and extreme values.1
The SPREADLEVEL plot displays the interquartile range versus the
median. It takes an optional parameter T, which specifies how the
data should be transformed prior to plotting. The given value T is
a power to which the data are raised. For example, if T is given as
2, then the square of the data is used. Zero, however, is a special
value. If T is 0 or is omitted, then the data are transformed by taking
their natural logarithm instead of being raised to the power of T.
When one or more plots are requested, EXAMINE also performs the
Shapiro-Wilk test for each category. There are, however, a number of
provisos:
- All weight values must be integers.
- The cumulative weight value must be in the range [3, 5000].
The COMPARE subcommand is only relevant when producing boxplots, and
it is only useful if there is more than one dependent variable and at
least one factor. If /COMPARE=GROUPS is specified, then one plot per
dependent variable is produced, each of which contains boxplots for all
the cells. If /COMPARE=VARIABLES is specified, then one plot per cell
is produced, each containing one boxplot per dependent variable. If the
/COMPARE subcommand is omitted, then PSPP behaves as if
/COMPARE=GROUPS were given.
The ID subcommand is relevant only if /PLOT=BOXPLOT or
/STATISTICS=EXTREME has been given. If given, it should provide the
name of a variable which is used to label extreme values and
outliers. Numeric or string variables are permissible. If the ID
subcommand is not given, then the case number is used for labelling.
The CINTERVAL subcommand specifies the confidence interval to use in
the calculation of the descriptive statistics. The default is 95%.
The PERCENTILES subcommand specifies which percentiles are to be
calculated, and which algorithm to use for calculating them. The
default is to calculate the 5th, 10th, 25th, 50th, 75th, 90th, and 95th
percentiles using the HAVERAGE algorithm.
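As a sketch (the variable name is hypothetical, and the percentile list and algorithm keyword follow the command's syntax diagram), one might request only the quartiles:

EXAMINE score
/PERCENTILES = 25 50 75 HAVERAGE.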
The TOTAL and NOTOTAL subcommands are mutually exclusive. If
TOTAL is given and factors have been specified in the VARIABLES
subcommand, then statistics for the unfactored dependent variables are
produced in addition to the factored variables. If there are no factors
specified then TOTAL and NOTOTAL have no effect.
The following example generates descriptive statistics and histograms
for two variables score1 and score2. Two factors are given: gender
and gender BY culture. Therefore, the descriptives and histograms are
generated for each distinct value of gender and for each distinct
combination of the values of gender and culture. Since the NOTOTAL
keyword is given, statistics and histograms for score1 and score2
covering the whole dataset are not produced.
EXAMINE score1 score2 BY
gender
gender BY culture
/STATISTICS = DESCRIPTIVES
/PLOT = HISTOGRAM
/NOTOTAL.
Here is a second example showing how EXAMINE may be used to find
extremities.
EXAMINE height weight BY
gender
/STATISTICS = EXTREME (3)
/PLOT = BOXPLOT
/COMPARE = GROUPS
/ID = name.
In this example, we look at the height and weight of a sample of
individuals and how they differ between male and female. A table
showing the 3 largest and the 3 smallest values of height and weight for
each gender, and for the whole dataset, is shown. In addition, the
/PLOT subcommand requests boxplots. Because /COMPARE = GROUPS was
specified, the boxplots for males and females are juxtaposed in the
same graphic, allowing us to easily see the difference between the
genders. Since the variable name was specified on the ID subcommand,
values of the name variable are used to label the extreme values.
⚠️ If you specify many dependent variables or factor variables for which there are many distinct values, then
EXAMINE will produce a very large quantity of output.
1. HISTOGRAM uses Sturges' rule to determine the number of bins, as approximately \(1 + \log_2(n)\), where \(n\) is the number of samples. (FREQUENCIES uses a different algorithm to find the bin size.) ↩
GRAPH
GRAPH
/HISTOGRAM [(NORMAL)]= VAR
/SCATTERPLOT [(BIVARIATE)] = VAR1 WITH VAR2 [BY VAR3]
/BAR = {SUMMARY-FUNCTION(VAR1) | COUNT-FUNCTION} BY VAR2 [BY VAR3]
[ /MISSING={LISTWISE, VARIABLE} [{EXCLUDE, INCLUDE}] ]
[{NOREPORT,REPORT}]
GRAPH produces graphical plots of data. Only one of the
subcommands HISTOGRAM, BAR or SCATTERPLOT can be specified, i.e.
only one plot can be produced per call of GRAPH. The MISSING
subcommand is optional.
Scatterplot
The subcommand SCATTERPLOT produces an xy plot of the data. GRAPH
uses VAR3, if specified, to determine the colours and/or
markers for the plot. The following is an example for producing a
scatterplot.
GRAPH
/SCATTERPLOT = height WITH weight BY gender.
This example produces a scatterplot where height is plotted versus
weight. Depending on the value of gender, the colour of the
datapoint differs. With this plot it is possible to analyze how the
relation between height and weight differs between the genders.
Histogram
The subcommand HISTOGRAM produces a histogram. Only one variable is
allowed for the histogram plot. The keyword NORMAL may be specified
in parentheses, to indicate that the ideal normal curve should be
superimposed over the histogram. For an alternative method to produce
histograms, see EXAMINE. The following example produces
a histogram plot for the variable weight.
GRAPH
/HISTOGRAM = weight.
Bar Chart
The subcommand BAR produces a bar chart. This subcommand requires
that a COUNT-FUNCTION be specified (with no arguments) or a
SUMMARY-FUNCTION with a variable VAR1 in parentheses. Following the
summary or count function, the keyword BY should be specified and
then a categorical variable, VAR2. The values of VAR2 determine
the labels of the bars to be plotted. A second categorical variable
VAR3 may be specified, in which case a clustered (grouped) bar chart
is produced.
Valid count functions are:
COUNT
The weighted counts of the cases in each category.
PCT
The weighted counts of the cases in each category expressed as a percentage of the total weights of the cases.
CUFREQ
The cumulative weighted counts of the cases in each category.
CUPCT
The cumulative weighted counts of the cases in each category expressed as a percentage of the total weights of the cases.
The summary function is applied to VAR1 across all cases in each
category. The recognised summary functions are:
SUM
The sum.
MEAN
The arithmetic mean.
MAXIMUM
The maximum value.
MINIMUM
The minimum value.
The following examples assume a dataset containing the results of a survey. Each respondent has indicated their annual income, their sex and their city of residence. One could create a bar chart showing how the mean income varies between residents of different cities, thus:
GRAPH /BAR = MEAN(INCOME) BY CITY.
This can be extended to also indicate how income in each city differs between the sexes.
GRAPH /BAR = MEAN(INCOME) BY CITY BY SEX.
One might also want to see how many respondents there are from each city. This can be achieved as follows:
GRAPH /BAR = COUNT BY CITY.
The FREQUENCIES and CROSSTABS commands can also produce bar charts.
CORRELATIONS
CORRELATIONS
/VARIABLES = VAR_LIST [ WITH VAR_LIST ]
[
.
.
.
/VARIABLES = VAR_LIST [ WITH VAR_LIST ]
/VARIABLES = VAR_LIST [ WITH VAR_LIST ]
]
[ /PRINT={TWOTAIL, ONETAIL} {SIG, NOSIG} ]
[ /STATISTICS=DESCRIPTIVES XPROD ALL]
[ /MISSING={PAIRWISE, LISTWISE} {INCLUDE, EXCLUDE} ]
The CORRELATIONS procedure produces tables of the Pearson
correlation coefficient for a set of variables. The significance of the
coefficients is also given.
At least one VARIABLES subcommand is required. If you specify the
WITH keyword, then a non-square correlation table is produced. The
variables preceding WITH are used as the rows of the table, and the
variables following WITH are used as the columns of the table. If the
WITH keyword is omitted, then CORRELATIONS produces a square,
symmetrical table using all variables.
The MISSING subcommand determines the handling of missing
values. If INCLUDE is set, then user-missing values are included
in the calculations, but system-missing values are not. If EXCLUDE is
set, which is the default, user-missing values are excluded as well as
system-missing values.
If LISTWISE is set, then the entire case is excluded from analysis
whenever any variable specified in any /VARIABLES subcommand contains
a missing value. If PAIRWISE is set, then a case is considered
missing only if either of the values for the particular coefficient are
missing. The default is PAIRWISE.
The PRINT subcommand is used to control how the reported
significance values are printed. If the TWOTAIL option is used, then
a two-tailed test of significance is printed. If the ONETAIL option
is given, then a one-tailed test is used. The default is TWOTAIL.
If the NOSIG option is specified, then correlation coefficients
with significance less than 0.05 are highlighted. If SIG is
specified, then no highlighting is performed. This is the default.
The STATISTICS subcommand requests additional statistics to be
displayed. The keyword DESCRIPTIVES requests that the mean, number of
non-missing cases, and the non-biased estimator of the standard
deviation are displayed. These statistics are displayed in a separated
table, for all the variables listed in any /VARIABLES subcommand. The
XPROD keyword requests cross-product deviations and covariance
estimators to be displayed for each pair of variables. The keyword
ALL is the union of DESCRIPTIVES and XPROD.
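As an illustration (the variable names are hypothetical), the following sketch produces a square correlation table for three variables, with two-tailed significance, highlighting of coefficients significant below 0.05, and listwise deletion of missing values:

CORRELATIONS
/VARIABLES = height weight age
/PRINT = TWOTAIL NOSIG
/MISSING = LISTWISE.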
CROSSTABS
CROSSTABS
/TABLES=VAR_LIST BY VAR_LIST [BY VAR_LIST]...
/MISSING={TABLE,INCLUDE,REPORT}
/FORMAT={TABLES,NOTABLES}
{AVALUE,DVALUE}
/CELLS={COUNT,ROW,COLUMN,TOTAL,EXPECTED,RESIDUAL,SRESIDUAL,
ASRESIDUAL,ALL,NONE}
/COUNT={ASIS,CASE,CELL}
{ROUND,TRUNCATE}
/STATISTICS={CHISQ,PHI,CC,LAMBDA,UC,BTAU,CTAU,RISK,GAMMA,D,
KAPPA,ETA,CORR,ALL,NONE}
/BARCHART
(Integer mode.)
/VARIABLES=VAR_LIST (LOW,HIGH)...
The CROSSTABS procedure displays crosstabulation tables requested
by the user. It can calculate several statistics for each cell in the
crosstabulation tables. In addition, a number of statistics can be
calculated for each table itself.
The TABLES subcommand is used to specify the tables to be reported.
Any number of dimensions is permitted, and any number of variables per
dimension is allowed. The TABLES subcommand may be repeated as many
times as needed. This is the only required subcommand in "general
mode".
Occasionally, one may want to invoke a special mode called "integer
mode". Normally, in general mode, PSPP automatically determines what
values occur in the data. In integer mode, the user specifies the range
of values that the data assumes. To invoke this mode, specify the
VARIABLES subcommand, giving a range of data values in parentheses for
each variable to be used on the TABLES subcommand. Data values inside
the range are truncated to the nearest integer, then assigned to that
value. If values occur outside this range, they are discarded. When it
is present, the VARIABLES subcommand must precede the TABLES
subcommand.
In general mode, numeric and string variables may be specified on
TABLES. In integer mode, only numeric variables are allowed.
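For example, the following sketch (with hypothetical variables) invokes integer mode, telling PSPP that the values of rating lie in the range 1 to 5 and those of group in the range 1 to 3; note that VARIABLES precedes TABLES:

CROSSTABS
/VARIABLES = rating (1,5) group (1,3)
/TABLES = rating BY group.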
The MISSING subcommand determines the handling of user-missing
values. When set to TABLE, the default, missing values are dropped on
a table by table basis. When set to INCLUDE, user-missing values are
included in tables and statistics. When set to REPORT, which is
allowed only in integer mode, user-missing values are included in tables
but marked with a footnote and excluded from statistical calculations.
The FORMAT subcommand controls the characteristics of the
crosstabulation tables to be displayed. It has a number of possible
settings:
- TABLES, the default, causes crosstabulation tables to be output.
- NOTABLES, which is equivalent to CELLS=NONE, suppresses them.
- AVALUE, the default, causes values to be sorted in ascending order. DVALUE asserts a descending sort order.
The CELLS subcommand controls the contents of each cell in the
displayed crosstabulation table. The possible settings are:
COUNT
Frequency count.
ROW
Row percent.
COLUMN
Column percent.
TOTAL
Table percent.
EXPECTED
Expected value.
RESIDUAL
Residual.
SRESIDUAL
Standardized residual.
ASRESIDUAL
Adjusted standardized residual.
ALL
All of the above.
NONE
Suppress cells entirely.
/CELLS without any settings specified requests COUNT, ROW,
COLUMN, and TOTAL. If CELLS is not specified at all then only
COUNT is selected.
By default, crosstabulation and statistics use raw case weights,
without rounding. Use the /COUNT subcommand to perform rounding:
CASE rounds the weights of individual cases as they are read,
CELL rounds the weights of cells within each crosstabulation table
after it has been constructed, and ASIS explicitly specifies the
default non-rounding behavior. When rounding is requested, ROUND,
the default, rounds to the nearest integer and TRUNCATE rounds
toward zero.
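For instance, when a non-integer case weight is in effect, the following sketch (using the hypothetical variables occupation and sex) rounds each cell's total weight to the nearest integer before any statistics are computed:

CROSSTABS
/TABLES = occupation BY sex
/COUNT = CELL ROUND.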
The STATISTICS subcommand selects statistics for computation:
CHISQ
Pearson chi-square, likelihood ratio, Fisher's exact test, continuity correction, linear-by-linear association.
PHI
Phi.
CC
Contingency coefficient.
LAMBDA
Lambda.
UC
Uncertainty coefficient.
BTAU
Tau-b.
CTAU
Tau-c.
RISK
Risk estimate.
GAMMA
Gamma.
D
Somers' D.
KAPPA
Cohen's Kappa.
ETA
Eta.
CORR
Spearman correlation, Pearson's r.
ALL
All of the above.
NONE
No statistics.
Selected statistics are calculated only when appropriate: certain statistics require tables of a particular size, and some statistics are calculated only in integer mode.
/STATISTICS without any settings selects CHISQ. If the STATISTICS
subcommand is not given, no statistics are calculated.
The /BARCHART subcommand produces a clustered bar chart for the
first two variables on each table. If a table has more than two
variables, the counts for the third and subsequent levels are aggregated
and the chart is produced as if there were only two variables.
Currently the implementation of CROSSTABS has the following limitations:
- Significance of some symmetric and directional measures is not calculated.
- Asymptotic standard error is not calculated for Goodman and Kruskal's tau or symmetric Somers' d.
- Approximate T is not calculated for symmetric uncertainty coefficient.
Fixes for any of these deficiencies would be welcomed.
Example
A researcher wishes to know if, in an industry, a person's sex is
related to the person's occupation. To investigate this, she has
determined that personnel.sav is a representative, randomly
selected sample of persons. The researcher's null hypothesis is that a
person's sex has no relation to a person's occupation. She uses a
chi-squared test of independence to investigate the hypothesis.
get file="personnel.sav".
crosstabs
/tables= occupation by sex
/cells = count expected
/statistics=chisq.
The syntax above conducts a chi-squared test of independence. The
line /tables = occupation by sex indicates that occupation and sex
are the variables to be tabulated.
As shown in the output below, CROSSTABS generates a contingency
table containing the observed count and the expected count of each sex
and each occupation. The expected count is the count which would be
observed if the null hypothesis were true.
The significance of the Pearson Chi-Square value is very much larger than the normally accepted value of 0.05 and so one cannot reject the null hypothesis. Thus the researcher must conclude that a person's sex has no relation to the person's occupation.
Summary
┌────────────────┬───────────────────────────────┐
│ │ Cases │
│ ├──────────┬─────────┬──────────┤
│ │ Valid │ Missing │ Total │
│ ├──┬───────┼─┬───────┼──┬───────┤
│ │ N│Percent│N│Percent│ N│Percent│
├────────────────┼──┼───────┼─┼───────┼──┼───────┤
│occupation × sex│54│ 96.4%│2│ 3.6%│56│ 100.0%│
└────────────────┴──┴───────┴─┴───────┴──┴───────┘
occupation × sex
┌──────────────────────────────────────┬───────────┬─────┐
│ │ sex │ │
│ ├────┬──────┤ │
│ │Male│Female│Total│
├──────────────────────────────────────┼────┼──────┼─────┤
│occupation Artist Count │ 2│ 6│ 8│
│ Expected│4.89│ 3.11│ .15│
│ ────────────────────────────┼────┼──────┼─────┤
│ Baker Count │ 1│ 1│ 2│
│ Expected│1.22│ .78│ .04│
│ ────────────────────────────┼────┼──────┼─────┤
│ Barrister Count │ 0│ 1│ 1│
│ Expected│ .61│ .39│ .02│
│ ────────────────────────────┼────┼──────┼─────┤
│ Carpenter Count │ 3│ 1│ 4│
│ Expected│2.44│ 1.56│ .07│
│ ────────────────────────────┼────┼──────┼─────┤
│ Cleaner Count │ 4│ 0│ 4│
│ Expected│2.44│ 1.56│ .07│
│ ────────────────────────────┼────┼──────┼─────┤
│ Cook Count │ 3│ 2│ 5│
│ Expected│3.06│ 1.94│ .09│
│ ────────────────────────────┼────┼──────┼─────┤
│ Manager Count │ 4│ 4│ 8│
│ Expected│4.89│ 3.11│ .15│
│ ────────────────────────────┼────┼──────┼─────┤
│ Mathematician Count │ 3│ 1│ 4│
│ Expected│2.44│ 1.56│ .07│
│ ────────────────────────────┼────┼──────┼─────┤
│ Painter Count │ 1│ 1│ 2│
│ Expected│1.22│ .78│ .04│
│ ────────────────────────────┼────┼──────┼─────┤
│ Payload Specialist Count │ 1│ 0│ 1│
│ Expected│ .61│ .39│ .02│
│ ────────────────────────────┼────┼──────┼─────┤
│ Plumber Count │ 5│ 0│ 5│
│ Expected│3.06│ 1.94│ .09│
│ ────────────────────────────┼────┼──────┼─────┤
│ Scientist Count │ 5│ 2│ 7│
│ Expected│4.28│ 2.72│ .13│
│ ────────────────────────────┼────┼──────┼─────┤
│ Scrientist Count │ 0│ 1│ 1│
│ Expected│ .61│ .39│ .02│
│ ────────────────────────────┼────┼──────┼─────┤
│ Tailor Count │ 1│ 1│ 2│
│ Expected│1.22│ .78│ .04│
├──────────────────────────────────────┼────┼──────┼─────┤
│Total Count │ 33│ 21│ 54│
│ Expected│ .61│ .39│ 1.00│
└──────────────────────────────────────┴────┴──────┴─────┘
Chi─Square Tests
┌──────────────────┬─────┬──┬──────────────────────────┐
│ │Value│df│Asymptotic Sig. (2─tailed)│
├──────────────────┼─────┼──┼──────────────────────────┤
│Pearson Chi─Square│15.59│13│ .272│
│Likelihood Ratio │19.66│13│ .104│
│N of Valid Cases │ 54│ │ │
└──────────────────┴─────┴──┴──────────────────────────┘
CTABLES
CTABLES has the following overall syntax. At least one TABLE
subcommand is required:
CTABLES
...global subcommands...
[/TABLE axis [BY axis [BY axis]]
...per-table subcommands...]...
where each axis may be empty or take one of the following forms:
variable
variable [{C | S}]
axis + axis
axis > axis
(axis)
axis [summary [string] [format]]
The following subcommands precede the first TABLE subcommand and
apply to all of the output tables. All of these subcommands are
optional:
/FORMAT
[MINCOLWIDTH={DEFAULT | width}]
[MAXCOLWIDTH={DEFAULT | width}]
[UNITS={POINTS | INCHES | CM}]
[EMPTY={ZERO | BLANK | string}]
[MISSING=string]
/VLABELS
VARIABLES=variables
DISPLAY={DEFAULT | NAME | LABEL | BOTH | NONE}
/SMISSING {VARIABLE | LISTWISE}
/PCOMPUTE &postcompute=EXPR(expression)
/PPROPERTIES &postcompute...
[LABEL=string]
[FORMAT=[summary format]...]
[HIDESOURCECATS={NO | YES}
/WEIGHT VARIABLE=variable
/HIDESMALLCOUNTS COUNT=count
The following subcommands follow TABLE and apply only to the
previous TABLE. All of these subcommands are optional:
/SLABELS
[POSITION={COLUMN | ROW | LAYER}]
[VISIBLE={YES | NO}]
/CLABELS {AUTO | {ROWLABELS|COLLABELS}={OPPOSITE|LAYER}}
/CATEGORIES VARIABLES=variables
{[value, value...]
| [ORDER={A | D}]
[KEY={VALUE | LABEL | summary(variable)}]
[MISSING={EXCLUDE | INCLUDE}]}
[TOTAL={NO | YES} [LABEL=string] [POSITION={AFTER | BEFORE}]]
[EMPTY={INCLUDE | EXCLUDE}]
/TITLES
[TITLE=string...]
[CAPTION=string...]
[CORNER=string...]
The CTABLES (aka "custom tables") command produces
multi-dimensional tables from categorical and scale data. It offers
many options for data summarization and formatting.
This section's examples use data from the 2008 (USA) National Survey
of Drinking and Driving Attitudes and Behaviors, a public domain data
set from the (USA) National Highway Traffic Safety Administration and available
at https://data.transportation.gov. PSPP includes this data set, with
a modified dictionary, as examples/nhtsa.sav.
- Basics
- Categorical Variables
- Scalar Variables
- Overriding Measurement Level
- Data Summarization
- Statistics Positions and Labels
- Category Label Positions
- Per-Variable Category Options
- Titles
- Table Formatting
- Display of Variable Labels
- Missing Value Treatment
- Computed Categories
- Effective Weight
- Hiding Small Counts
Basics
The only required subcommand is TABLE, which specifies the variables
to include along each axis:
/TABLE rows [BY columns [BY layers]]
In TABLE, each of ROWS, COLUMNS, and LAYERS is either empty or
an axis expression that specifies one or more variables. At least one
must specify an axis expression.
Categorical Variables
An axis expression that names a categorical variable divides the data
into cells according to the values of that variable. When all the
variables named on TABLE are categorical, by default each cell
displays the number of cases that it contains, so specifying a single
variable yields a frequency table, much like the output of the
FREQUENCIES command:
CTABLES /TABLE=ageGroup.
Custom Tables
┌───────────────────────┬─────┐
│ │Count│
├───────────────────────┼─────┤
│Age group 15 or younger│ 0│
│ 16 to 25 │ 1099│
│ 26 to 35 │ 967│
│ 36 to 45 │ 1037│
│ 46 to 55 │ 1175│
│ 56 to 65 │ 1247│
│ 66 or older │ 1474│
└───────────────────────┴─────┘
Specifying a row and a column categorical variable yields a
crosstabulation, much like the output of the
CROSSTABS command:
CTABLES /TABLE=ageGroup BY gender.
Custom Tables
┌───────────────────────┬────────────┐
│ │S3a. GENDER:│
│ ├─────┬──────┤
│ │ Male│Female│
│ ├─────┼──────┤
│ │Count│ Count│
├───────────────────────┼─────┼──────┤
│Age group 15 or younger│ 0│ 0│
│ 16 to 25 │ 594│ 505│
│ 26 to 35 │ 476│ 491│
│ 36 to 45 │ 489│ 548│
│ 46 to 55 │ 526│ 649│
│ 56 to 65 │ 516│ 731│
│ 66 or older │ 531│ 943│
└───────────────────────┴─────┴──────┘
The > "nesting" operator nests multiple variables on a single axis,
e.g.:
CTABLES /TABLE likelihoodOfBeingStoppedByPolice BY ageGroup > gender.
Custom Tables
┌─────────────────────────────────┬───────────────────────────────────────────┐
│ │ 86. In the past year, have you hosted a │
│ │ social event or party where alcohol was │
│ │ served to adults? │
│ ├─────────────────────┬─────────────────────┤
│ │ Yes │ No │
│ ├─────────────────────┼─────────────────────┤
│ │ Count │ Count │
├─────────────────────────────────┼─────────────────────┼─────────────────────┤
│Age 15 or S3a. Male │ 0│ 0│
│group younger GENDER: Female│ 0│ 0│
│ ───────────────────────────┼─────────────────────┼─────────────────────┤
│ 16 to 25 S3a. Male │ 208│ 386│
│ GENDER: Female│ 202│ 303│
│ ───────────────────────────┼─────────────────────┼─────────────────────┤
│ 26 to 35 S3a. Male │ 225│ 251│
│ GENDER: Female│ 242│ 249│
│ ───────────────────────────┼─────────────────────┼─────────────────────┤
│ 36 to 45 S3a. Male │ 223│ 266│
│ GENDER: Female│ 240│ 307│
│ ───────────────────────────┼─────────────────────┼─────────────────────┤
│ 46 to 55 S3a. Male │ 201│ 325│
│ GENDER: Female│ 282│ 366│
│ ───────────────────────────┼─────────────────────┼─────────────────────┤
│ 56 to 65 S3a. Male │ 196│ 320│
│ GENDER: Female│ 279│ 452│
│ ───────────────────────────┼─────────────────────┼─────────────────────┤
│ 66 or S3a. Male │ 162│ 367│
│ older GENDER: Female│ 243│ 700│
└─────────────────────────────────┴─────────────────────┴─────────────────────┘
The + "stacking" operator allows a single output table to include
multiple data analyses. With +, CTABLES divides the output table
into multiple "sections", each of which includes an analysis of the full
data set. For example, the following command separately tabulates age
group and driving frequency by gender:
CTABLES /TABLE ageGroup + freqOfDriving BY gender.
Custom Tables
┌────────────────────────────────────────────────────────────────┬────────────┐
│ │S3a. GENDER:│
│ ├─────┬──────┤
│ │ Male│Female│
│ ├─────┼──────┤
│ │Count│ Count│
├────────────────────────────────────────────────────────────────┼─────┼──────┤
│Age group 15 or younger │ 0│ 0│
│ 16 to 25 │ 594│ 505│
│ 26 to 35 │ 476│ 491│
│ 36 to 45 │ 489│ 548│
│ 46 to 55 │ 526│ 649│
│ 56 to 65 │ 516│ 731│
│ 66 or older │ 531│ 943│
├────────────────────────────────────────────────────────────────┼─────┼──────┤
│ 1. How often do you usually drive a car or Every day │ 2305│ 2362│
│other motor vehicle? Several days a week│ 440│ 834│
│ Once a week or less│ 125│ 236│
│ Only certain times │ 58│ 72│
│ a year │ │ │
│ Never │ 192│ 348│
└────────────────────────────────────────────────────────────────┴─────┴──────┘
When + and > are used together, > binds more tightly. Use
parentheses to override operator precedence. Thus:
CTABLES /TABLE hasConsideredReduction + hasBeenCriticized > gender.
CTABLES /TABLE (hasConsideredReduction + hasBeenCriticized) > gender.
Custom Tables
┌───────────────────────────────────────────────────────────────────────┬─────┐
│ │Count│
├───────────────────────────────────────────────────────────────────────┼─────┤
│26. During the last 12 months, has there been a Yes │ 513│
│time when you felt you should cut down on your ─────────────────────┼─────┤
│drinking? No │ 3710│
├───────────────────────────────────────────────────────────────────────┼─────┤
│27. During the last 12 months, has there been a Yes S3a. Male │ 135│
│time when people criticized your drinking? GENDER: Female│ 49│
│ ─────────────────────┼─────┤
│ No S3a. Male │ 1916│
│ GENDER: Female│ 2126│
└───────────────────────────────────────────────────────────────────────┴─────┘
Custom Tables
┌───────────────────────────────────────────────────────────────────────┬─────┐
│ │Count│
├───────────────────────────────────────────────────────────────────────┼─────┤
│26. During the last 12 months, has there been a Yes S3a. Male │ 333│
│time when you felt you should cut down on your GENDER: Female│ 180│
│drinking? ─────────────────────┼─────┤
│ No S3a. Male │ 1719│
│ GENDER: Female│ 1991│
├───────────────────────────────────────────────────────────────────────┼─────┤
│27. During the last 12 months, has there been a Yes S3a. Male │ 135│
│time when people criticized your drinking? GENDER: Female│ 49│
│ ─────────────────────┼─────┤
│ No S3a. Male │ 1916│
│ GENDER: Female│ 2126│
└───────────────────────────────────────────────────────────────────────┴─────┘
Scalar Variables
For a categorical variable, CTABLES divides the table into a cell per
category. For a scalar variable, CTABLES instead calculates a summary
measure, by default the mean, of the values that fall into a cell. For
example, if the only variable specified is a scalar variable, then the
output is a single cell that holds the mean of all of the data:
CTABLES /TABLE age.
Custom Tables
┌──────────────────────────┬────┐
│ │Mean│
├──────────────────────────┼────┤
│D1. AGE: What is your age?│ 48│
└──────────────────────────┴────┘
A scalar variable may nest with categorical variables. The following example shows the mean age of survey respondents across gender and language groups:
CTABLES /TABLE gender > age BY region.
Custom Tables
┌─────────────────────────────────────┬───────────────────────────────────────┐
│ │Was this interview conducted in English│
│ │ or Spanish? │
│ ├───────────────────┬───────────────────┤
│ │ English │ Spanish │
│ ├───────────────────┼───────────────────┤
│ │ Mean │ Mean │
├─────────────────────────────────────┼───────────────────┼───────────────────┤
│D1. AGE: What is S3a. Male │ 46│ 37│
│your age? GENDER: Female│ 51│ 39│
└─────────────────────────────────────┴───────────────────┴───────────────────┘
The order of nesting of scalar and categorical variables affects table labeling, but it does not affect the data displayed in the table. The following example shows how the output changes when the nesting order of the scalar and categorical variable are interchanged:
CTABLES /TABLE age > gender BY region.
Custom Tables
┌─────────────────────────────────────┬───────────────────────────────────────┐
│ │Was this interview conducted in English│
│ │ or Spanish? │
│ ├───────────────────┬───────────────────┤
│ │ English │ Spanish │
│ ├───────────────────┼───────────────────┤
│ │ Mean │ Mean │
├─────────────────────────────────────┼───────────────────┼───────────────────┤
│S3a. Male D1. AGE: What is │ 46│ 37│
│GENDER: your age? │ │ │
│ ───────────────────────────┼───────────────────┼───────────────────┤
│ Female D1. AGE: What is │ 51│ 39│
│ your age? │ │ │
└─────────────────────────────────────┴───────────────────┴───────────────────┘
Only a single scalar variable may appear in each section; that is, a
scalar variable may not nest inside a scalar variable directly or
indirectly. Scalar variables may only appear on one axis within
TABLE.
Overriding Measurement Level
By default, CTABLES uses a variable's measurement level to decide
whether to treat it as categorical or scalar. Variables assigned the
nominal or ordinal measurement level are treated as categorical, and
variables assigned the scale measurement level are treated as scalar.
When PSPP reads data from a file in an external format, such as a text
file, variables' measurement levels are often unknown. If CTABLES
runs when a variable has an unknown measurement level, it makes an
initial pass through the data to guess measurement
levels. Use the VARIABLE LEVEL command to set or change a variable's
measurement level.
To treat a variable as categorical or scalar only for one use on
CTABLES, add [C] or [S], respectively, after the variable name.
The following example shows the output when variable
monthDaysMin1drink is analyzed as scalar (the default for its
measurement level) and as categorical:
CTABLES
/TABLE monthDaysMin1drink BY gender
/TABLE monthDaysMin1drink [C] BY gender.
Custom Tables
┌────────────────────────────────────────────────────────────────┬────────────┐
│ │S3a. GENDER:│
│ ├────┬───────┤
│ │Male│ Female│
│ ├────┼───────┤
│ │Mean│ Mean │
├────────────────────────────────────────────────────────────────┼────┼───────┤
│20. On how many of the thirty days in this typical month did you│ 7│ 5│
│have one or more alcoholic beverages to drink? │ │ │
└────────────────────────────────────────────────────────────────┴────┴───────┘
Custom Tables
┌────────────────────────────────────────────────────────────────┬────────────┐
│ │S3a. GENDER:│
│ ├─────┬──────┤
│ │ Male│Female│
│ ├─────┼──────┤
│ │Count│ Count│
├────────────────────────────────────────────────────────────────┼─────┼──────┤
│20. On how many of the thirty days in this typical month None │ 152│ 258│
│did you have one or more alcoholic beverages to drink? 1 │ 403│ 653│
│ 2 │ 284│ 324│
│ 3 │ 169│ 215│
│ 4 │ 178│ 143│
│ 5 │ 107│ 106│
│ 6 │ 67│ 59│
│ 7 │ 31│ 11│
│ 8 │ 101│ 74│
│ 9 │ 6│ 4│
│ 10 │ 95│ 75│
│ 11 │ 4│ 0│
│ 12 │ 58│ 33│
│ 13 │ 3│ 2│
│ 14 │ 13│ 3│
│ 15 │ 79│ 58│
│ 16 │ 10│ 6│
│ 17 │ 4│ 2│
│ 18 │ 5│ 4│
│ 19 │ 2│ 0│
│ 20 │ 105│ 47│
│ 21 │ 2│ 0│
│ 22 │ 3│ 3│
│ 23 │ 0│ 3│
│ 24 │ 3│ 0│
│ 25 │ 35│ 25│
│ 26 │ 1│ 1│
│ 27 │ 3│ 3│
│ 28 │ 13│ 8│
│ 29 │ 3│ 3│
│ Every │ 104│ 43│
│ day │ │ │
└────────────────────────────────────────────────────────────────┴─────┴──────┘
Data Summarization
The CTABLES command allows the user to control how the data are
summarized with "summary specifications": syntax that lists one or more
summary function names, optionally separated by commas, enclosed in
square brackets following a variable name on the TABLE
subcommand. When all the variables are categorical, summary
specifications can be given for the innermost nested variables on any
one axis. When a scalar variable is present, only the scalar variable
may have summary specifications.
The following example includes a summary specification for column and row percentages for categorical variables, and mean and median for a scalar variable:
CTABLES
/TABLE=age [MEAN, MEDIAN] BY gender
/TABLE=ageGroup [COLPCT, ROWPCT] BY gender.
Custom Tables
┌──────────────────────────┬───────────────────────┐
│ │ S3a. GENDER: │
│ ├───────────┬───────────┤
│ │ Male │ Female │
│ ├────┬──────┼────┬──────┤
│ │Mean│Median│Mean│Median│
├──────────────────────────┼────┼──────┼────┼──────┤
│D1. AGE: What is your age?│ 46│ 45│ 50│ 52│
└──────────────────────────┴────┴──────┴────┴──────┘
Custom Tables
┌───────────────────────┬─────────────────────────────┐
│ │ S3a. GENDER: │
│ ├──────────────┬──────────────┤
│ │ Male │ Female │
│ ├────────┬─────┼────────┬─────┤
│ │Column %│Row %│Column %│Row %│
├───────────────────────┼────────┼─────┼────────┼─────┤
│Age group 15 or younger│ .0%│ .│ .0%│ .│
│ 16 to 25 │ 19.0%│54.0%│ 13.1%│46.0%│
│ 26 to 35 │ 15.2%│49.2%│ 12.7%│50.8%│
│ 36 to 45 │ 15.6%│47.2%│ 14.2%│52.8%│
│ 46 to 55 │ 16.8%│44.8%│ 16.8%│55.2%│
│ 56 to 65 │ 16.5%│41.4%│ 18.9%│58.6%│
│ 66 or older │ 17.0%│36.0%│ 24.4%│64.0%│
└───────────────────────┴────────┴─────┴────────┴─────┘
A summary specification may override the default label and format by appending a string or format specification or both (in that order) to the summary function name. For example:
CTABLES /TABLE=ageGroup [COLPCT 'Gender %' PCT5.0,
ROWPCT 'Age Group %' PCT5.0]
BY gender.
Custom Tables
┌───────────────────────┬─────────────────────────────────────────┐
│ │ S3a. GENDER: │
│ ├────────────────────┬────────────────────┤
│ │ Male │ Female │
│ ├────────┬───────────┼────────┬───────────┤
│ │Gender %│Age Group %│Gender %│Age Group %│
├───────────────────────┼────────┼───────────┼────────┼───────────┤
│Age group 15 or younger│ 0%│ .│ 0%│ .│
│ 16 to 25 │ 19%│ 54%│ 13%│ 46%│
│ 26 to 35 │ 15%│ 49%│ 13%│ 51%│
│ 36 to 45 │ 16%│ 47%│ 14%│ 53%│
│ 46 to 55 │ 17%│ 45%│ 17%│ 55%│
│ 56 to 65 │ 16%│ 41%│ 19%│ 59%│
│ 66 or older │ 17%│ 36%│ 24%│ 64%│
└───────────────────────┴────────┴───────────┴────────┴───────────┘
In addition to the standard formats, CTABLES allows the user to
specify the following special formats:
| Format | Description | Positive Example | Negative Example |
|---|---|---|---|
| NEGPARENw.d | Encloses negative numbers in parentheses. | 42.96 | (42.96) |
| NEQUALw.d | Adds an N= prefix. | N=42.96 | N=-42.96 |
| PARENw.d | Encloses all numbers in parentheses. | (42.96) | (-42.96) |
| PCTPARENw.d | Encloses all numbers in parentheses with a % suffix. | (42.96%) | (-42.96%) |
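These special formats attach to a summary function in the same way as standard formats. The following sketch assumes a hypothetical scale variable balance whose mean may be negative:

CTABLES /TABLE=balance [MEAN NEGPAREN10.2].

With this specification, a negative mean such as -42.96 would be displayed as (42.96).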
Parentheses provide a shorthand to apply summary specifications to multiple variables. For example, both of these commands:
CTABLES /TABLE=ageGroup[COLPCT] + membersOver16[COLPCT] BY gender.
CTABLES /TABLE=(ageGroup + membersOver16)[COLPCT] BY gender.
produce the same output shown below:
Custom Tables
┌─────────────────────────────────────────────────────────────┬───────────────┐
│ │ S3a. GENDER: │
│ ├───────┬───────┤
│ │ Male │ Female│
│ ├───────┼───────┤
│ │ Column│ Column│
│ │ % │ % │
├─────────────────────────────────────────────────────────────┼───────┼───────┤
│Age group 15 or │ .0%│ .0%│
│ younger │ │ │
│ 16 to 25 │ 19.0%│ 13.1%│
│ 26 to 35 │ 15.2%│ 12.7%│
│ 36 to 45 │ 15.6%│ 14.2%│
│ 46 to 55 │ 16.8%│ 16.8%│
│ 56 to 65 │ 16.5%│ 18.9%│
│ 66 or older│ 17.0%│ 24.4%│
├─────────────────────────────────────────────────────────────┼───────┼───────┤
│S1. Including yourself, how many members of this None │ .0%│ .0%│
│household are age 16 or older? 1 │ 21.4%│ 35.0%│
│ 2 │ 61.9%│ 52.3%│
│ 3 │ 11.0%│ 8.2%│
│ 4 │ 4.2%│ 3.2%│
│ 5 │ 1.1%│ .9%│
│ 6 or more │ .4%│ .4%│
└─────────────────────────────────────────────────────────────┴───────┴───────┘
The following sections list the available summary functions. After each function's name is given its default label and format. If no format is listed, then the default format is the print format for the variable being summarized.
Summary Functions for Individual Cells
This section lists the summary functions that consider only an
individual cell in CTABLES. Only one such summary function, COUNT,
may be applied to both categorical and scale variables:
- COUNT ("Count", F40.0)
  The sum of weights in a cell.

  If CATEGORIES for one or more of the variables in a table include
  missing values (see Per-Variable Category Options), then some or all
  of the categories for a cell might be missing values. COUNT counts
  data included in a cell regardless of whether its categories are
  missing.
The following summary functions apply only to scale variables, or to totals and subtotals for categorical variables. Be cautious about interpreting the summary value in the latter case, because it is not necessarily meaningful; however, the mean of a Likert scale, etc., may have a straightforward interpretation.
- MAXIMUM ("Maximum")
  The largest value.
- MEAN ("Mean")
  The mean.
- MEDIAN ("Median")
  The median value.
- MINIMUM ("Minimum")
  The smallest value.
- MISSING ("Missing")
  The sum of weights of user- and system-missing values.
- MODE ("Mode")
  The highest-frequency value. Ties are broken by taking the smallest
  mode.
- PTILEn ("Percentile n")
  The Nth percentile, where 0 ≤ N ≤ 100.
- RANGE ("Range")
  The maximum minus the minimum.
- SEMEAN ("Std Error of Mean")
  The standard error of the mean.
- STDDEV ("Std Deviation")
  The standard deviation.
- SUM ("Sum")
  The sum.
- TOTALN ("Total N", F40.0)
  The sum of weights in a cell.

  For scale data, COUNT and TOTALN are the same.

  For categorical data, TOTALN counts missing values in excluded
  categories, that is, user-missing values not in an explicit category
  list on CATEGORIES (see Per-Variable Category Options), user-missing
  values excluded because MISSING=EXCLUDE is in effect on CATEGORIES,
  and system-missing values. COUNT does not count these.

  See Missing Values for Summary Variables for details of how CTABLES
  summarizes missing values.
- VALIDN ("Valid N", F40.0)
  The sum of valid count weights in included categories.

  For categorical variables, VALIDN does not count missing values
  regardless of whether they are in included categories via CATEGORIES.
  VALIDN does not count valid values that are in excluded categories.
  See Missing Values for Summary Variables for details.
- VARIANCE ("Variance")
  The variance.
Summary Functions for Groups of Cells
These summary functions summarize over multiple cells within an area of the output chosen by the user and specified as part of the function name. The following basic AREAs are supported, in decreasing order of size:
- TABLE
  A "section". Stacked variables divide sections of the output from
  each other. Sections may span multiple layers.
- LAYER
  A section within a single layer.
- SUBTABLE
  A "subtable", whose contents are the cells that pair an innermost row
  variable and an innermost column variable within a single layer.
The following shows how the output for the table expression
hasBeenPassengerOfDesignatedDriver > hasBeenPassengerOfDrunkDriver BY isLicensedDriver > hasHostedEventWithAlcohol + hasBeenDesignatedDriver BY gender is divided up into TABLE, LAYER, and SUBTABLE
areas. Each unique value for Table ID is one section, and similarly
for Layer ID and Subtable ID. Thus, this output has two TABLE areas
(one for isLicensedDriver and one for hasBeenDesignatedDriver),
four LAYER areas (for those two variables, per layer), and 12
SUBTABLE areas.
Custom Tables
Male
┌─────────────────────────────────┬─────────────────┬──────┐
│ │ licensed │desDrv│
│ ├────────┬────────┼───┬──┤
│ │ Yes │ No │ │ │
│ ├────────┼────────┤ │ │
│ │ hostAlc│ hostAlc│ │ │
│ ├────┬───┼────┬───┤ │ │
│ │ Yes│ No│ Yes│ No│Yes│No│
├─────────────────────────────────┼────┼───┼────┼───┼───┼──┤
│desPas Yes druPas Yes Table ID │ 1│ 1│ 1│ 1│ 2│ 2│
│ Layer ID │ 1│ 1│ 1│ 1│ 2│ 2│
│ Subtable ID│ 1│ 1│ 2│ 2│ 3│ 3│
│ ────────────────┼────┼───┼────┼───┼───┼──┤
│ No Table ID │ 1│ 1│ 1│ 1│ 2│ 2│
│ Layer ID │ 1│ 1│ 1│ 1│ 2│ 2│
│ Subtable ID│ 1│ 1│ 2│ 2│ 3│ 3│
│ ───────────────────────────┼────┼───┼────┼───┼───┼──┤
│ No druPas Yes Table ID │ 1│ 1│ 1│ 1│ 2│ 2│
│ Layer ID │ 1│ 1│ 1│ 1│ 2│ 2│
│ Subtable ID│ 4│ 4│ 5│ 5│ 6│ 6│
│ ────────────────┼────┼───┼────┼───┼───┼──┤
│ No Table ID │ 1│ 1│ 1│ 1│ 2│ 2│
│ Layer ID │ 1│ 1│ 1│ 1│ 2│ 2│
│ Subtable ID│ 4│ 4│ 5│ 5│ 6│ 6│
└─────────────────────────────────┴────┴───┴────┴───┴───┴──┘
CTABLES also supports the following AREAs that further divide a
subtable or a layer within a section:
- LAYERROW
- LAYERCOL
  A row or column, respectively, in one layer of a section.
- ROW
- COL
  A row or column, respectively, in a subtable.
The following summary functions for groups of cells are available for each AREA described above, for both categorical and scale variables:
- areaPCT or areaPCT.COUNT ("Area %", PCT40.1)
  A percentage of total counts within AREA.
- areaPCT.VALIDN ("Area Valid N %", PCT40.1)
  A percentage of total counts for valid values within AREA.
- areaPCT.TOTALN ("Area Total N %", PCT40.1)
  A percentage of total counts for all values within AREA.
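Substituting an area name for area yields a concrete function name. For example, analogous to the COLPCT and ROWPCT examples earlier, subtable percentages can be requested like this:

CTABLES /TABLE=ageGroup BY gender [SUBTABLEPCT].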
Scale variables and totals and subtotals for categorical variables may use the following additional group cell summary function:
- areaPCT.SUM ("Area Sum %", PCT40.1)
  A percentage of the sum of the values within AREA.
Summary Functions for Adjusted Weights
If the WEIGHT subcommand specified an effective weight
variable, then the following summary functions use
its value instead of the dictionary weight variable. Otherwise, they
are equivalent to the summary function without the E-prefix:
- ECOUNT ("Adjusted Count", F40.0)
- ETOTALN ("Adjusted Total N", F40.0)
- EVALIDN ("Adjusted Valid N", F40.0)
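For example, the ordinary and adjusted counts could be shown side by side. This sketch assumes a hypothetical adjustment variable adjwt named on the WEIGHT subcommand:

CTABLES /WEIGHT VARIABLE=adjwt /TABLE=ageGroup [COUNT, ECOUNT].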
Unweighted Summary Functions
The following summary functions with a U-prefix are equivalent to the
same ones without the prefix, except that they use unweighted counts:
- UCOUNT ("Unweighted Count", F40.0)
- UareaPCT or UareaPCT.COUNT ("Unweighted Area %", PCT40.1)
- UareaPCT.VALIDN ("Unweighted Area Valid N %", PCT40.1)
- UareaPCT.TOTALN ("Unweighted Area Total N %", PCT40.1)
- UMEAN ("Unweighted Mean")
- UMEDIAN ("Unweighted Median")
- UMISSING ("Unweighted Missing")
- UMODE ("Unweighted Mode")
- UareaPCT.SUM ("Unweighted Area Sum %", PCT40.1)
- UPTILEn ("Unweighted Percentile n")
- USEMEAN ("Unweighted Std Error of Mean")
- USTDDEV ("Unweighted Std Deviation")
- USUM ("Unweighted Sum")
- UTOTALN ("Unweighted Total N", F40.0)
- UVALIDN ("Unweighted Valid N", F40.0)
- UVARIANCE ("Unweighted Variance", F40.0)
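For example, when a weight is in effect, weighted and unweighted counts can be displayed together:

CTABLES /TABLE=ageGroup [COUNT, UCOUNT].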
Statistics Positions and Labels
/SLABELS
[POSITION={COLUMN | ROW | LAYER}]
[VISIBLE={YES | NO}]
The SLABELS subcommand controls the position and visibility of
summary statistics for the TABLE subcommand that it follows.
POSITION sets the axis on which summary statistics appear. With
POSITION=COLUMN, which is the default, each summary statistic appears in
a column. For example:
CTABLES /TABLE=age [MEAN, MEDIAN] BY gender.
Custom Tables
+──────────────────────────+───────────────────────+
│ │ S3a. GENDER: │
│ +───────────+───────────+
│ │ Male │ Female │
│ +────+──────+────+──────+
│ │Mean│Median│Mean│Median│
+──────────────────────────+────+──────+────+──────+
│D1. AGE: What is your age?│ 46│ 45│ 50│ 52│
+──────────────────────────+────+──────+────+──────+
With POSITION=ROW, each summary statistic appears in a row, as shown
below:
CTABLES /TABLE=age [MEAN, MEDIAN] BY gender /SLABELS POSITION=ROW.
Custom Tables
+─────────────────────────────────+─────────────+
│ │ S3a. GENDER:│
│ +─────+───────+
│ │ Male│ Female│
+─────────────────────────────────+─────+───────+
│D1. AGE: What is your age? Mean │ 46│ 50│
│ Median│ 45│ 52│
+─────────────────────────────────+─────+───────+
POSITION=LAYER is also available to place each summary statistic in a
separate layer.
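For example, this variant of the commands above places the Mean and Median statistics in separate layers rather than separate columns or rows:

CTABLES /TABLE=age [MEAN, MEDIAN] BY gender /SLABELS POSITION=LAYER.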
Labels for summary statistics are shown by default. Use VISIBLE=NO to suppress them. Hiding the labels can cause confusion, so do so only when the meaning of the data is evident, as in a simple case like this:
CTABLES /TABLE=ageGroup [TABLEPCT] /SLABELS VISIBLE=NO.
Custom Tables
+───────────────────────+─────+
│Age group 15 or younger│ .0%│
│ 16 to 25 │15.7%│
│ 26 to 35 │13.8%│
│ 36 to 45 │14.8%│
│ 46 to 55 │16.8%│
│ 56 to 65 │17.8%│
│ 66 or older │21.1%│
+───────────────────────+─────+
Category Label Positions
/CLABELS {AUTO | {ROWLABELS|COLLABELS}={OPPOSITE|LAYER}}
The CLABELS subcommand controls the position of category labels for
the TABLE subcommand that it follows. By default, or if AUTO is
specified, category labels for a given variable nest inside the
variable's label on the same axis. For example, the command below
results in age categories nesting within the age group variable on the
rows axis and gender categories within the gender variable on the
columns axis:
CTABLES /TABLE ageGroup BY gender.
Custom Tables
+───────────────────────+────────────+
│ │S3a. GENDER:│
│ +─────+──────+
│ │ Male│Female│
│ +─────+──────+
│ │Count│ Count│
+───────────────────────+─────+──────+
│Age group 15 or younger│ 0│ 0│
│ 16 to 25 │ 594│ 505│
│ 26 to 35 │ 476│ 491│
│ 36 to 45 │ 489│ 548│
│ 46 to 55 │ 526│ 649│
│ 56 to 65 │ 516│ 731│
│ 66 or older │ 531│ 943│
+───────────────────────+─────+──────+
ROWLABELS=OPPOSITE or COLLABELS=OPPOSITE move row or column variable category labels, respectively, to the opposite axis. The setting affects only the innermost variable or variables, which must be categorical, on the given axis. For example:
CTABLES /TABLE ageGroup BY gender /CLABELS ROWLABELS=OPPOSITE.
CTABLES /TABLE ageGroup BY gender /CLABELS COLLABELS=OPPOSITE.
Custom Tables
+─────+──────────────────────────────────────────────────────────────────────
│ │ S3a. GENDER:
│ +───────────────────────────────────────────+──────────────────────────
│ │ Male │ Female
│ +───────+─────+─────+─────+─────+─────+─────+───────+─────+─────+─────+
│ │ 15 or │16 to│26 to│36 to│46 to│56 to│66 or│ 15 or │16 to│26 to│36 to│
│ │younger│ 25 │ 35 │ 45 │ 55 │ 65 │older│younger│ 25 │ 35 │ 45 │
│ +───────+─────+─────+─────+─────+─────+─────+───────+─────+─────+─────+
│ │ Count │Count│Count│Count│Count│Count│Count│ Count │Count│Count│Count│
+─────+───────+─────+─────+─────+─────+─────+─────+───────+─────+─────+─────+
│Age │ 0│ 594│ 476│ 489│ 526│ 516│ 531│ 0│ 505│ 491│ 548│
│group│ │ │ │ │ │ │ │ │ │ │ │
+─────+───────+─────+─────+─────+─────+─────+─────+───────+─────+─────+─────+
+─────+─────────────────+
│ │ │
│ +─────────────────+
│ │ │
│ +─────+─────+─────+
│ │46 to│56 to│66 or│
│ │ 55 │ 65 │older│
│ +─────+─────+─────+
│ │Count│Count│Count│
+─────+─────+─────+─────+
│Age │ 649│ 731│ 943│
│group│ │ │ │
+─────+─────+─────+─────+
Custom Tables
+──────────────────────────────+────────────+
│ │S3a. GENDER:│
│ +────────────+
│ │ Count │
+──────────────────────────────+────────────+
│Age group 15 or younger Male │ 0│
│ Female│ 0│
│ ─────────────────────+────────────+
│ 16 to 25 Male │ 594│
│ Female│ 505│
│ ─────────────────────+────────────+
│ 26 to 35 Male │ 476│
│ Female│ 491│
│ ─────────────────────+────────────+
│ 36 to 45 Male │ 489│
│ Female│ 548│
│ ─────────────────────+────────────+
│ 46 to 55 Male │ 526│
│ Female│ 649│
│ ─────────────────────+────────────+
│ 56 to 65 Male │ 516│
│ Female│ 731│
│ ─────────────────────+────────────+
│ 66 or older Male │ 531│
│ Female│ 943│
+──────────────────────────────+────────────+
ROWLABELS=LAYER or COLLABELS=LAYER move the innermost row or column
variable category labels, respectively, to the layer axis.
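For example, this variant of the earlier command moves the age group category labels to the layer axis, producing one layer per age group:

CTABLES /TABLE ageGroup BY gender /CLABELS ROWLABELS=LAYER.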
Only one axis's labels may be moved, whether to the opposite axis or to the layer axis.
Effect on Summary Statistics
CLABELS primarily affects the appearance of tables, not the data
displayed in them. However, CLABELS can affect the values displayed
for statistics that summarize areas of a table, since it can change the
definitions of these areas.
For example, consider the following syntax and output:
CTABLES /TABLE ageGroup BY gender [ROWPCT, COLPCT].
Custom Tables
+───────────────────────+─────────────────────────────+
│ │ S3a. GENDER: │
│ +──────────────+──────────────+
│ │ Male │ Female │
│ +─────+────────+─────+────────+
│ │Row %│Column %│Row %│Column %│
+───────────────────────+─────+────────+─────+────────+
│Age group 15 or younger│ .│ .0%│ .│ .0%│
│ 16 to 25 │54.0%│ 19.0%│46.0%│ 13.1%│
│ 26 to 35 │49.2%│ 15.2%│50.8%│ 12.7%│
│ 36 to 45 │47.2%│ 15.6%│52.8%│ 14.2%│
│ 46 to 55 │44.8%│ 16.8%│55.2%│ 16.8%│
│ 56 to 65 │41.4%│ 16.5%│58.6%│ 18.9%│
│ 66 or older │36.0%│ 17.0%│64.0%│ 24.4%│
+───────────────────────+─────+────────+─────+────────+
Using COLLABELS=OPPOSITE changes the definitions of rows and columns,
so that column percentages display what were previously row percentages
and the new row percentages become meaningless (because there is only
one cell per row):
CTABLES
/TABLE ageGroup BY gender [ROWPCT, COLPCT]
/CLABELS COLLABELS=OPPOSITE.
Custom Tables
+──────────────────────────────+───────────────+
│ │ S3a. GENDER: │
│ +──────+────────+
│ │ Row %│Column %│
+──────────────────────────────+──────+────────+
│Age group 15 or younger Male │ .│ .│
│ Female│ .│ .│
│ ─────────────────────+──────+────────+
│ 16 to 25 Male │100.0%│ 54.0%│
│ Female│100.0%│ 46.0%│
│ ─────────────────────+──────+────────+
│ 26 to 35 Male │100.0%│ 49.2%│
│ Female│100.0%│ 50.8%│
│ ─────────────────────+──────+────────+
│ 36 to 45 Male │100.0%│ 47.2%│
│ Female│100.0%│ 52.8%│
│ ─────────────────────+──────+────────+
│ 46 to 55 Male │100.0%│ 44.8%│
│ Female│100.0%│ 55.2%│
│ ─────────────────────+──────+────────+
│ 56 to 65 Male │100.0%│ 41.4%│
│ Female│100.0%│ 58.6%│
│ ─────────────────────+──────+────────+
│ 66 or older Male │100.0%│ 36.0%│
│ Female│100.0%│ 64.0%│
+──────────────────────────────+──────+────────+
Moving Categories for Stacked Variables
If CLABELS moves category labels from an axis with stacked
variables, the variables that are moved must have the same category
specifications (see Per-Variable Category
Options) and the same value labels.
The following shows both moving stacked category variables and adapting to the changing definitions of rows and columns:
CTABLES /TABLE (likelihoodOfBeingStoppedByPolice
+ likelihoodOfHavingAnAccident) [COLPCT].
CTABLES /TABLE (likelihoodOfBeingStoppedByPolice
+ likelihoodOfHavingAnAccident) [ROWPCT]
/CLABELS ROWLABELS=OPPOSITE.
Custom Tables
+─────────────────────────────────────────────────────────────────────+───────+
│ │ Column│
│ │ % │
+─────────────────────────────────────────────────────────────────────+───────+
│105b. How likely is it that drivers who have had too Almost │ 10.2%│
│much to drink to drive safely will A. Get stopped by the certain │ │
│police? Very likely │ 21.8%│
│ Somewhat │ 40.2%│
│ likely │ │
│ Somewhat │ 19.0%│
│ unlikely │ │
│ Very │ 8.9%│
│ unlikely │ │
+─────────────────────────────────────────────────────────────────────+───────+
│105b. How likely is it that drivers who have had too Almost │ 15.9%│
│much to drink to drive safely will B. Have an accident? certain │ │
│ Very likely │ 40.8%│
│ Somewhat │ 35.0%│
│ likely │ │
│ Somewhat │ 6.2%│
│ unlikely │ │
│ Very │ 2.0%│
│ unlikely │ │
+─────────────────────────────────────────────────────────────────────+───────+
Custom Tables
+─────────────────────────────+────────+───────+─────────+──────────+─────────+
│ │ Almost │ Very │ Somewhat│ Somewhat │ Very │
│ │ certain│ likely│ likely │ unlikely │ unlikely│
│ +────────+───────+─────────+──────────+─────────+
│ │ Row % │ Row % │ Row % │ Row % │ Row % │
+─────────────────────────────+────────+───────+─────────+──────────+─────────+
│105b. How likely is it that │ 10.2%│ 21.8%│ 40.2%│ 19.0%│ 8.9%│
│drivers who have had too much│ │ │ │ │ │
│to drink to drive safely will│ │ │ │ │ │
│A. Get stopped by the police?│ │ │ │ │ │
│105b. How likely is it that │ 15.9%│ 40.8%│ 35.0%│ 6.2%│ 2.0%│
│drivers who have had too much│ │ │ │ │ │
│to drink to drive safely will│ │ │ │ │ │
│B. Have an accident? │ │ │ │ │ │
+─────────────────────────────+────────+───────+─────────+──────────+─────────+
Per-Variable Category Options
/CATEGORIES VARIABLES=variables
{[value, value...]
| [ORDER={A | D}]
[KEY={VALUE | LABEL | summary(variable)}]
[MISSING={EXCLUDE | INCLUDE}]}
[TOTAL={NO | YES} [LABEL=string] [POSITION={AFTER | BEFORE}]]
[EMPTY={INCLUDE | EXCLUDE}]
The CATEGORIES subcommand specifies, for one or more categorical
variables, the categories to include and exclude, the sort order for
included categories, and treatment of missing values. It also controls
the totals and subtotals to display. It may be specified any number of
times, each time for a different set of variables. CATEGORIES applies
to the table produced by the TABLE subcommand that it follows.
CATEGORIES does not apply to scale variables.
VARIABLES is required and must list the variables for the subcommand to affect.
The syntax may specify the categories to include and their sort order
either explicitly or implicitly. The following sections give the
details of each form of syntax, followed by information on totals and
subtotals and the EMPTY setting.
Explicit Categories
To explicitly specify the categories to include, list them within
square brackets in the desired sort order. Use spaces or commas to
separate values. Categories not covered by the list are excluded from
analysis.
Each element of the list takes one of the following forms:
- number
- 'string'
  A numeric or string category value, for variables that have the
  corresponding type.
- 'date'
- 'time'
  A date or time category value, for variables that have a date or time
  print format.
- min THRU max
- LO THRU max
- min THRU HI
  A range of category values, where min and max each take one of the
  forms above, in increasing order.
- MISSING
  All user-missing values. (To match individual user-missing values,
  specify their category values.)
- OTHERNM
  Any non-missing value not covered by any other element of the list
  (regardless of where OTHERNM is placed in the list).
- &postcompute
  A computed category name.
- SUBTOTAL
- HSUBTOTAL
  A subtotal.
If multiple elements of the list cover a given category, the last one in the list takes precedence.
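For example, in the following sketch, the value 3 is covered both by the range and by the explicit listing. The later element takes precedence, so category 3 appears in the position where it is explicitly listed:

CTABLES /TABLE=freqOfDriving /CATEGORIES VARIABLES=freqOfDriving [1 THRU 5, 3].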
The following example syntax and output show how an explicit category list can limit the displayed categories:
CTABLES /TABLE freqOfDriving.
CTABLES /TABLE freqOfDriving /CATEGORIES VARIABLES=freqOfDriving [1, 2, 3].
Custom Tables
+───────────────────────────────────────────────────────────────────────+─────+
│ │Count│
+───────────────────────────────────────────────────────────────────────+─────+
│ 1. How often do you usually drive a car or other Every day │ 4667│
│motor vehicle? Several days a week │ 1274│
│ Once a week or less │ 361│
│ Only certain times a│ 130│
│ year │ │
│ Never │ 540│
+───────────────────────────────────────────────────────────────────────+─────+
Custom Tables
+───────────────────────────────────────────────────────────────────────+─────+
│ │Count│
+───────────────────────────────────────────────────────────────────────+─────+
│ 1. How often do you usually drive a car or other Every day │ 4667│
│motor vehicle? Several days a │ 1274│
│ week │ │
│ Once a week or │ 361│
│ less │ │
+───────────────────────────────────────────────────────────────────────+─────+
Implicit Categories
In the absence of an explicit list of categories, CATEGORIES allows
KEY, ORDER, and MISSING to specify how to select and sort
categories.
The KEY setting specifies the sort key. By default, or with
KEY=VALUE, categories are sorted by value. Categories may also be
sorted by value label, with KEY=LABEL, or by the value of a summary
function, e.g. KEY=COUNT.
By default, or with ORDER=A, categories are sorted in ascending
order. Specify ORDER=D to sort in descending order.
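For example, combining the two settings, the following sorts categories by descending count rather than in the default ascending order by value:

CTABLES /TABLE=freqOfDriving /CATEGORIES VARIABLES=freqOfDriving KEY=COUNT ORDER=D.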
User-missing values are excluded by default, or with
MISSING=EXCLUDE. Specify MISSING=INCLUDE to include user-missing
values. The system-missing value is always excluded.
The following example syntax and output show how MISSING=INCLUDE
causes missing values to be included in a category list.
CTABLES /TABLE freqOfDriving.
CTABLES /TABLE freqOfDriving
/CATEGORIES VARIABLES=freqOfDriving MISSING=INCLUDE.
Custom Tables
+───────────────────────────────────────────────────────────────────────+─────+
│ │Count│
+───────────────────────────────────────────────────────────────────────+─────+
│ 1. How often do you usually drive a car or other Every day │ 4667│
│motor vehicle? Several days a week │ 1274│
│ Once a week or less │ 361│
│ Only certain times a│ 130│
│ year │ │
│ Never │ 540│
+───────────────────────────────────────────────────────────────────────+─────+
Custom Tables
+───────────────────────────────────────────────────────────────────────+─────+
│ │Count│
+───────────────────────────────────────────────────────────────────────+─────+
│ 1. How often do you usually drive a car or other Every day │ 4667│
│motor vehicle? Several days a week │ 1274│
│ Once a week or less │ 361│
│ Only certain times a│ 130│
│ year │ │
│ Never │ 540│
│ Don't know │ 8│
│ Refused │ 19│
+───────────────────────────────────────────────────────────────────────+─────+
Totals and Subtotals
CATEGORIES also controls display of totals and subtotals. By default,
or with TOTAL=NO, totals are not displayed. Use TOTAL=YES to
display a total. By default, the total is labeled "Total"; use
LABEL="label" to override it.
Subtotals are also not displayed by default. To add one or more
subtotals, use an explicit category list and insert SUBTOTAL or
HSUBTOTAL in the position or positions where the subtotal should
appear. The subtotal becomes an extra row, column, or layer.
HSUBTOTAL additionally hides the categories that make up the
subtotal. Either way, the default label is "Subtotal"; use
SUBTOTAL="label" or HSUBTOTAL="label" to specify a custom label.
The following example syntax and output show how to use TOTAL=YES
and SUBTOTAL:
CTABLES
/TABLE freqOfDriving
/CATEGORIES VARIABLES=freqOfDriving [OTHERNM, SUBTOTAL='Valid Total',
MISSING, SUBTOTAL='Missing Total']
TOTAL=YES LABEL='Overall Total'.
Custom Tables
+───────────────────────────────────────────────────────────────────────+─────+
│ │Count│
+───────────────────────────────────────────────────────────────────────+─────+
│ 1. How often do you usually drive a car or other Every day │ 4667│
│motor vehicle? Several days a week │ 1274│
│ Once a week or less │ 361│
│ Only certain times a│ 130│
│ year │ │
│ Never │ 540│
│ Valid Total │ 6972│
│ Don't know │ 8│
│ Refused │ 19│
│ Missing Total │ 27│
│ Overall Total │ 6999│
+───────────────────────────────────────────────────────────────────────+─────+
By default, or with POSITION=AFTER, totals are displayed in the
output after the last category and subtotals apply to categories that
precede them. With POSITION=BEFORE, totals come before the first
category and subtotals apply to categories that follow them.
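For example, the total can be displayed first rather than last:

CTABLES /TABLE=freqOfDriving /CATEGORIES VARIABLES=freqOfDriving TOTAL=YES POSITION=BEFORE.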
Only categorical variables may have totals and subtotals. Scale
variables may be "totaled" indirectly by enabling totals and subtotals
on a categorical variable within which the scale variable is
summarized. For example, the following syntax produces a mean, count,
and valid count across all data by adding a total on the categorical
region variable, as shown:
CTABLES /TABLE=region > monthDaysMin1drink [MEAN, COUNT, VALIDN]
/CATEGORIES VARIABLES=region TOTAL=YES LABEL='All regions'.
Custom Tables
+───────────────────────────────────────────────────────────+────+─────+──────+
│ │ │ │ Valid│
│ │Mean│Count│ N │
+───────────────────────────────────────────────────────────+────+─────+──────+
│20. On how many of the thirty days in this Region NE │ 5.6│ 1409│ 945│
│typical month did you have one or more MW │ 5.0│ 1654│ 1026│
│alcoholic beverages to drink? S │ 6.0│ 2390│ 1285│
│ W │ 6.5│ 1546│ 953│
│ All │ 5.8│ 6999│ 4209│
│ regions │ │ │ │
+───────────────────────────────────────────────────────────+────+─────+──────+
By default, PSPP uses the same summary functions for totals and
subtotals as other categories. To summarize totals and subtotals
differently, specify the summary functions for totals and subtotals
after the ordinary summary functions inside a nested set of []
following TOTALS. For example, the following syntax displays COUNT
for individual categories, and COUNT and VALIDN for totals, as shown:
CTABLES
/TABLE isLicensedDriver [COUNT, TOTALS[COUNT, VALIDN]]
/CATEGORIES VARIABLES=isLicensedDriver TOTAL=YES MISSING=INCLUDE.
Custom Tables
+────────────────────────────────────────────────────────────────+─────+──────+
│ │ │ Valid│
│ │Count│ N │
+────────────────────────────────────────────────────────────────+─────+──────+
│D7a. Are you a licensed driver; that is, do you have a Yes │ 6379│ │
│valid driver's license? No │ 572│ │
│ Don't │ 4│ │
│ know │ │ │
│ Refused │ 44│ │
│ Total │ 6999│ 6951│
+────────────────────────────────────────────────────────────────+─────+──────+
Categories Without Values
Some categories might not be included in the data set being analyzed.
For example, our example data set has no cases in the "15 or younger"
age group. By default, or with EMPTY=INCLUDE, PSPP includes these
empty categories in output tables. To exclude them, specify
EMPTY=EXCLUDE.
For implicit categories, empty categories potentially include all the
values with value labels for a given variable; for explicit categories,
they include all the values listed individually and all values with
value labels that are covered by ranges or MISSING or OTHERNM.
The following example syntax and output show the effect of
EMPTY=EXCLUDE for the membersOver16 variable, in which 0 is labeled
"None" but no cases exist with that value:
CTABLES /TABLE=membersOver16.
CTABLES /TABLE=membersOver16 /CATEGORIES VARIABLES=membersOver16 EMPTY=EXCLUDE.
Custom Tables
+───────────────────────────────────────────────────────────────────────+─────+
│ │Count│
+───────────────────────────────────────────────────────────────────────+─────+
│S1. Including yourself, how many members of this household are None │ 0│
│age 16 or older? 1 │ 1586│
│ 2 │ 3031│
│ 3 │ 505│
│ 4 │ 194│
│ 5 │ 55│
│ 6 or │ 21│
│ more │ │
+───────────────────────────────────────────────────────────────────────+─────+
Custom Tables
+───────────────────────────────────────────────────────────────────────+─────+
│ │Count│
+───────────────────────────────────────────────────────────────────────+─────+
│S1. Including yourself, how many members of this household are 1 │ 1586│
│age 16 or older? 2 │ 3031│
│ 3 │ 505│
│ 4 │ 194│
│ 5 │ 55│
│ 6 or │ 21│
│ more │ │
+───────────────────────────────────────────────────────────────────────+─────+
Titles
/TITLES
[TITLE=string...]
[CAPTION=string...]
[CORNER=string...]
The TITLES subcommand sets the title, caption, and corner text for
the table output for the previous TABLE subcommand. Any number of
strings may be specified for each kind of text, with each string
appearing on a separate line in the output. The title appears above the
table, the caption below the table, and the corner text appears in the
table's upper left corner. By default, the title is "Custom Tables" and
the caption and corner text are empty. With some table output styles,
the corner text is not displayed.
The strings provided in this subcommand may contain the following macro-like keywords that PSPP substitutes at the time that it runs the command:
- )DATE
  The current date, e.g. MM/DD/YY. The format is locale-dependent.
- )TIME
  The current time, e.g. HH:MM:SS. The format is locale-dependent.
- )TABLE
  The expression specified on the TABLE subcommand. Summary and
  measurement level specifications are omitted, and variable labels are
  used in place of variable names.
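For example, the following sketch adds a title and a caption that uses the )DATE keyword:

CTABLES /TABLE=ageGroup BY gender
        /TITLES TITLE='Age Group by Gender'
                CAPTION='Generated on )DATE'.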
Table Formatting
/FORMAT
[MINCOLWIDTH={DEFAULT | width}]
[MAXCOLWIDTH={DEFAULT | width}]
[UNITS={POINTS | INCHES | CM}]
[EMPTY={ZERO | BLANK | string}]
[MISSING=string]
The FORMAT subcommand, which must precede the first TABLE
subcommand, controls formatting for all the output tables. FORMAT and
all of its settings are optional.
Use MINCOLWIDTH and MAXCOLWIDTH to control the minimum or maximum
width of columns in output tables. By default, with DEFAULT, column
width varies based on content. Otherwise, specify a number for either
or both of these settings. If both are specified, MAXCOLWIDTH must be
greater than or equal to MINCOLWIDTH. By default, or with
UNITS=POINTS, the unit is points (1/72 inch); specify UNITS=INCHES
for inches or UNITS=CM for centimeters. PSPP does not currently honor
any of these settings.
By default, or with EMPTY=ZERO, zero values are displayed in their
usual format. Use EMPTY=BLANK to use an empty cell instead, or
EMPTY="string" to use the specified string.
By default, missing values are displayed as ., the same as in other
tables. Specify MISSING="string" to instead use a custom string.
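For example, the following sketch blanks out zero cells and displays "n/a" in place of missing values (recall that FORMAT must precede the first TABLE subcommand):

CTABLES /FORMAT EMPTY=BLANK MISSING='n/a'
        /TABLE=ageGroup BY gender.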
Display of Variable Labels
/VLABELS
VARIABLES=variables
DISPLAY={DEFAULT | NAME | LABEL | BOTH | NONE}
The VLABELS subcommand, which must precede the first TABLE
subcommand, controls display of variable labels in all the output
tables. VLABELS is optional. It may appear multiple times to adjust
settings for different variables.
VARIABLES and DISPLAY are required. The value of DISPLAY
controls how variable labels are displayed for the variables listed on
VARIABLES. The supported values are:
- DEFAULT
  Use the setting from SET TVARS.
- NAME
  Show only a variable name.
- LABEL
  Show only a variable label.
- BOTH
  Show variable name and label.
- NONE
  Show nothing.
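For example, a sketch with hypothetical variables x and y, showing both name and label for x but only the label for y:

CTABLES
  /VLABELS VARIABLES=x DISPLAY=BOTH
  /VLABELS VARIABLES=y DISPLAY=LABEL
  /TABLE x BY y.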
Missing Value Treatment
The TABLE subcommand on CTABLES specifies two different kinds of
variables: variables that divide tables into cells (which are always
categorical) and variables being summarized (which may be categorical or
scale). PSPP treats missing values differently in each kind of
variable, as described in the sections below.
Missing Values for Cell-Defining Variables
For variables that divide tables into cells, per-variable category options, as described in Per-Variable Category Options, determine which data is analyzed. If any of the categories for such a variable would exclude a case, then that case is not included.
As an example, consider the following entirely artificial dataset, in
which x and y are categorical variables with missing value 9, and
z is scale:
Data List
+─+─+─────────+
│x│y│ z │
+─+─+─────────+
│1│1│ 1│
│1│2│ 10│
│1│9│ 100│
│2│1│ 1000│
│2│2│ 10000│
│2│9│ 100000│
│9│1│ 1000000│
│9│2│ 10000000│
│9│9│100000000│
+─+─+─────────+
Using x and y to define cells, and summarizing z, by default
PSPP omits all the cases that have x or y (or both) missing:
CTABLES /TABLE x > y > z [SUM].
Custom Tables
+─────────+─────+
│ │ Sum │
+─────────+─────+
│x 1 y 1 z│ 1│
│ ────+─────+
│ 2 z│ 10│
│ ────────+─────+
│ 2 y 1 z│ 1000│
│ ────+─────+
│ 2 z│10000│
+─────────+─────+
If, however, we add CATEGORIES specifications to include missing
values for y or for x and y, the output table includes them, like
so:
CTABLES /TABLE x > y > z [SUM] /CATEGORIES VARIABLES=y MISSING=INCLUDE.
CTABLES /TABLE x > y > z [SUM] /CATEGORIES VARIABLES=x y MISSING=INCLUDE.
Custom Tables
+─────────+──────+
│ │ Sum │
+─────────+──────+
│x 1 y 1 z│ 1│
│ ────+──────+
│ 2 z│ 10│
│ ────+──────+
│ 9 z│ 100│
│ ────────+──────+
│ 2 y 1 z│ 1000│
│ ────+──────+
│ 2 z│ 10000│
│ ────+──────+
│ 9 z│100000│
+─────────+──────+
Custom Tables
+─────────+─────────+
│ │ Sum │
+─────────+─────────+
│x 1 y 1 z│ 1│
│ ────+─────────+
│ 2 z│ 10│
│ ────+─────────+
│ 9 z│ 100│
│ ────────+─────────+
│ 2 y 1 z│ 1000│
│ ────+─────────+
│ 2 z│ 10000│
│ ────+─────────+
│ 9 z│ 100000│
│ ────────+─────────+
│ 9 y 1 z│ 1000000│
│ ────+─────────+
│ 2 z│ 10000000│
│ ────+─────────+
│ 9 z│100000000│
+─────────+─────────+
Missing Values for Summary Variables
For summary variables, values that are valid and in included categories are analyzed, and values that are missing or in excluded categories are not analyzed, with the following exceptions:
- The VALIDN summary functions (VALIDN, EVALIDN, UVALIDN, areaPCT.VALIDN, and UareaPCT.VALIDN) only count valid values in included categories (not missing values in included categories).
- The TOTALN summary functions (TOTALN, ETOTALN, UTOTALN, areaPCT.TOTALN, and UareaPCT.TOTALN) count all values (valid and missing) in included categories and missing (but not valid) values in excluded categories.
For categorical variables, system-missing values are never in included categories. For scale variables, there is no notion of included and excluded categories, so all values are effectively included.
The following table provides another view of the above rules:
|                                       | VALIDN | other | TOTALN |
|---------------------------------------|--------|-------|--------|
| Categorical variables:                |        |       |        |
| Valid values in included categories   | yes    | yes   | yes    |
| Missing values in included categories | --     | yes   | yes    |
| Missing values in excluded categories | --     | --    | yes    |
| Valid values in excluded categories   | --     | --    | --     |
| Scale variables:                      |        |       |        |
| Valid values                          | yes    | yes   | yes    |
| User- or system-missing values        | --     | yes   | yes    |
Scale Missing Values
/SMISSING {VARIABLE | LISTWISE}
The SMISSING subcommand, which must precede the first TABLE
subcommand, controls treatment of missing values for scalar variables in
producing all the output tables. SMISSING is optional.
With SMISSING=VARIABLE, which is the default, missing values are
excluded on a variable-by-variable basis. With SMISSING=LISTWISE,
when stacked scalar variables are nested together with a categorical
variable, a missing value for any of the scalar variables causes the
case to be excluded for all of them.
As an example, consider the following dataset, in which x is a
categorical variable and y and z are scale:
Data List
+─+─────+─────+
│x│ y │ z │
+─+─────+─────+
│1│ .│40.00│
│1│10.00│50.00│
│1│20.00│60.00│
│1│30.00│ .│
+─+─────+─────+
With the default missing-value treatment, x's mean is 20, based on the
values 10, 20, and 30, and y's mean is 50, based on 40, 50, and 60:
CTABLES /TABLE (y + z) > x.
Custom Tables
+─────+─────+
│ │ Mean│
+─────+─────+
│y x 1│20.00│
+─────+─────+
│z x 1│50.00│
+─────+─────+
By adding SMISSING=LISTWISE, only cases where y and z are both
non-missing are considered, so x's mean becomes 15, as the average of
10 and 20, and y's mean becomes 55, the average of 50 and 60:
CTABLES /SMISSING LISTWISE /TABLE (y + z) > x.
Custom Tables
+─────+─────+
│ │ Mean│
+─────+─────+
│y x 1│15.00│
+─────+─────+
│z x 1│55.00│
+─────+─────+
Even with SMISSING=LISTWISE, if y and z are separately nested with
x, instead of using a single > operator, missing values revert to
being considered on a variable-by-variable basis:
CTABLES /SMISSING LISTWISE /TABLE (y > x) + (z > x).
Custom Tables
+─────+─────+
│ │ Mean│
+─────+─────+
│y x 1│20.00│
+─────+─────+
│z x 1│50.00│
+─────+─────+
Computed Categories
/PCOMPUTE &postcompute=EXPR(expression)
/PPROPERTIES &postcompute...
[LABEL=string]
[FORMAT=[summary format]...]
[HIDESOURCECATS={NO | YES}]
"Computed categories", also called "postcomputes", are categories
created using arithmetic on categories obtained from the data. The
PCOMPUTE subcommand creates a postcompute, which may then be used on
CATEGORIES within an explicit category
list. Optionally, PPROPERTIES refines how
a postcompute is displayed. The following sections provide the
details.
PCOMPUTE
/PCOMPUTE &postcompute=EXPR(expression)
The PCOMPUTE subcommand, which must precede the first TABLE
command, defines computed categories. It is optional and may be used
any number of times to define multiple postcomputes.
Each PCOMPUTE defines one postcompute. Its syntax consists of a
name to identify the postcompute as a PSPP identifier prefixed by &,
followed by = and a postcompute expression enclosed in EXPR(...). A
postcompute expression consists of:
- [category]
  This form evaluates to the summary statistic for category, e.g. [1] evaluates to the value of the summary statistic associated with category 1. The category may be a number, a quoted string, or a quoted time or date value. All of the categories for a given postcompute must have the same form. The category must appear in all the CATEGORIES lists in which the postcompute is used.
- [min THRU max]
  [LO THRU max]
  [min THRU HI]
  MISSING
  OTHERNM
  These forms evaluate to the summary statistics for a category specified with the same syntax, as described in a previous section (see Explicit Category List). The category must appear in all the CATEGORIES lists in which the postcompute is used.
- SUBTOTAL
  The summary statistic for the subtotal category. This form is allowed only if the CATEGORIES lists that include this postcompute have exactly one subtotal.
- SUBTOTAL[index]
  The summary statistic for subtotal category index, where 1 is the first subtotal, 2 is the second, and so on. This form may be used for CATEGORIES lists with any number of subtotals.
- TOTAL
  The summary statistic for the total. The CATEGORIES lists that include this postcompute must have a total enabled.
- a + b
  a - b
  a * b
  a / b
  a ** b
  These forms perform arithmetic on the values of postcompute expressions a and b. The usual operator precedence rules apply.
- number
  Numeric constants may be used in postcompute expressions.
- (a)
  Parentheses override operator precedence.
A postcompute is not associated with any particular variable.
Instead, it may be referenced within CATEGORIES for any suitable
variable (e.g. only a string variable is suitable for a postcompute
expression that refers to a string category, only a variable with
subtotals for an expression that refers to subtotals, ...).
Normally a named postcompute is defined only once, but if a later
PCOMPUTE redefines a postcompute with the same name as an earlier one,
the later one takes precedence.
The following syntax and output shows how PCOMPUTE can compute a
total over subtotals, summing the "Frequent Drivers" and "Infrequent
Drivers" subtotals to form an "All Drivers" postcompute. It also
shows how to calculate and display a percentage, in this case the
percentage of valid responses that report never driving. It uses
PPROPERTIES to display the latter in PCT format.
CTABLES
/PCOMPUTE &all_drivers=EXPR([1 THRU 2] + [3 THRU 4])
/PPROPERTIES &all_drivers LABEL='All Drivers'
/PCOMPUTE &pct_never=EXPR([5] / ([1 THRU 2] + [3 THRU 4] + [5]) * 100)
/PPROPERTIES &pct_never LABEL='% Not Drivers' FORMAT=COUNT PCT40.1
/TABLE=freqOfDriving BY gender
/CATEGORIES VARIABLES=freqOfDriving
[1 THRU 2, SUBTOTAL='Frequent Drivers',
3 THRU 4, SUBTOTAL='Infrequent Drivers',
&all_drivers, 5, &pct_never,
MISSING, SUBTOTAL='Not Drivers or Missing'].
Custom Tables
+────────────────────────────────────────────────────────────────+────────────+
│ │S3a. GENDER:│
│ +─────+──────+
│ │ Male│Female│
│ +─────+──────+
│ │Count│ Count│
+────────────────────────────────────────────────────────────────+─────+──────+
│ 1. How often do you usually drive a car or Every day │ 2305│ 2362│
│other motor vehicle? Several days a week │ 440│ 834│
│ Frequent Drivers │ 2745│ 3196│
│ Once a week or less │ 125│ 236│
│ Only certain times a│ 58│ 72│
│ year │ │ │
│ Infrequent Drivers │ 183│ 308│
│ All Drivers │ 2928│ 3504│
│ Never │ 192│ 348│
│ % Not Drivers │ 6.2%│ 9.0%│
│ Don't know │ 3│ 5│
│ Refused │ 9│ 10│
│ Not Drivers or │ 204│ 363│
│ Missing │ │ │
+────────────────────────────────────────────────────────────────+─────+──────+
PPROPERTIES
/PPROPERTIES &postcompute...
[LABEL=string]
[FORMAT=[summary format]...]
[HIDESOURCECATS={NO | YES}]
The PPROPERTIES subcommand, which must appear before TABLE, sets
properties for one or more postcomputes defined on prior PCOMPUTE
subcommands. The subcommand syntax begins with the list of
postcomputes, each prefixed with & as specified on PCOMPUTE.
All of the settings on PPROPERTIES are optional. Use LABEL to
set the label shown for the postcomputes in table output. The default
label for a postcompute is the expression used to define it.
A postcompute always uses the same summary functions as the variable
whose categories contain it, but FORMAT allows control over the format
used to display its values. It takes a list of summary function names
and format specifiers.
By default, or with HIDESOURCECATS=NO, categories referred to by
computed categories are displayed like other categories. Use
HIDESOURCECATS=YES to hide them.
The previous section provides an example for PPROPERTIES.
Effective Weight
/WEIGHT VARIABLE=variable
The WEIGHT subcommand is optional and must appear before TABLE.
If it appears, it must name a numeric variable, known as the
"effective weight" or "adjustment weight". The effective weight
variable stands in for the dictionary's weight variable,
if any, in most calculations in CTABLES. The only exceptions are
the COUNT, TOTALN, and VALIDN summary functions, which use the
dictionary weight instead.
Weights obtained from the PSPP dictionary are rounded to the nearest integer at the case level. Effective weights are not rounded. Regardless of the weighting source, PSPP does not analyze cases with zero, missing, or negative effective weights.
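A minimal sketch, assuming the active dataset contains a numeric adjustment-weight variable named wt (a hypothetical name):

CTABLES
  /WEIGHT VARIABLE=wt
  /TABLE x BY y.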
Hiding Small Counts
/HIDESMALLCOUNTS COUNT=count
The HIDESMALLCOUNTS subcommand is optional. If it is specified, then
COUNT, ECOUNT, and UCOUNT values in output tables that are less than
the value of count are shown as <count instead of their true values.
The value of count must be an integer and must be at least 2.
The following syntax and example shows how to use HIDESMALLCOUNTS:
CTABLES /HIDESMALLCOUNTS COUNT=10 /TABLE placeOfLastDrinkBeforeDrive.
Custom Tables
+───────────────────────────────────────────────────────────────────────+─────+
│ │Count│
+───────────────────────────────────────────────────────────────────────+─────+
│37. Please think about the most recent occasion that Other (list) │<10 │
│you drove within two hours of drinking alcoholic Your home │ 182│
│beverages. Where did you drink on that occasion? Friend's home │ 264│
│ Bar/Tavern/Club │ 279│
│ Restaurant │ 495│
│ Work │ 21│
│ Bowling alley │<10 │
│ Hotel/Motel │<10 │
│ Country Club/ │ 17│
│ Golf course │ │
│ Drank in the │<10 │
│ car/On the road │ │
│ Sporting event │ 15│
│ Movie theater │<10 │
│ Shopping/Store/ │<10 │
│ Grocery store │ │
│ Wedding │ 15│
│ Party at someone│ 81│
│ else's home │ │
│ Park/picnic │ 14│
│ Party at your │<10 │
│ house │ │
+───────────────────────────────────────────────────────────────────────+─────+
- This is not necessarily a meaningful table. To make it easier to read, short variable labels are used.
FACTOR
FACTOR {
VARIABLES=VAR_LIST,
MATRIX IN ({CORR,COV}={*,FILE_SPEC})
}
[ /METHOD = {CORRELATION, COVARIANCE} ]
[ /ANALYSIS=VAR_LIST ]
[ /EXTRACTION={PC, PAF}]
[ /ROTATION={VARIMAX, EQUAMAX, QUARTIMAX, PROMAX[(K)], NOROTATE}]
[ /PRINT=[INITIAL] [EXTRACTION] [ROTATION] [UNIVARIATE] [CORRELATION] [COVARIANCE] [DET] [KMO] [AIC] [SIG] [ALL] [DEFAULT] ]
[ /PLOT=[EIGEN] ]
[ /FORMAT=[SORT] [BLANK(N)] [DEFAULT] ]
[ /CRITERIA=[FACTORS(N)] [MINEIGEN(L)] [ITERATE(M)] [ECONVERGE (DELTA)] [DEFAULT] ]
[ /MISSING=[{LISTWISE, PAIRWISE}] [{INCLUDE, EXCLUDE}] ]
The FACTOR command performs Factor Analysis or Principal Axis
Factoring on a dataset. It may be used to find common factors in the
data or for data reduction purposes.
The VARIABLES subcommand is required (unless the MATRIX IN
subcommand is used). It lists the variables which are to partake in the
analysis. (The ANALYSIS subcommand may optionally further limit the
variables that participate; it is useful primarily in conjunction with
MATRIX IN.)
If MATRIX IN instead of VARIABLES is specified, then the analysis
is performed on a pre-prepared correlation or covariance matrix file
instead of on individual data cases. Typically the matrix
file will have been generated by MATRIX DATA or provided by a third party. If specified,
MATRIX IN must be followed by COV or CORR, then by = and
FILE_SPEC all in parentheses. FILE_SPEC may either be an
asterisk, which indicates the currently loaded dataset, or it may be a
file name to be loaded. See MATRIX DATA, for the
expected format of the file.
The /EXTRACTION subcommand is used to specify the way in which
factors (components) are extracted from the data. If PC is specified,
then Principal Components Analysis is used. If PAF is specified, then
Principal Axis Factoring is used. By default Principal Components
Analysis is used.
The /ROTATION subcommand is used to specify the method by which the
extracted solution is rotated. Three orthogonal rotation methods are
available: VARIMAX (which is the default), EQUAMAX, and QUARTIMAX.
There is one oblique rotation method: PROMAX. Optionally you may
enter the power of the promax rotation K, which must be enclosed in
parentheses. The default value of K is 5. If you don't want any
rotation to be performed, the word NOROTATE prevents the command from
performing any rotation on the data.
The /METHOD subcommand should be used to determine whether the
covariance matrix or the correlation matrix of the data is to be
analysed. By default, the correlation matrix is analysed.
The /PRINT subcommand may be used to select which features of the
analysis are reported:
- UNIVARIATE
  A table of mean values, standard deviations and total weights is printed.
- INITIAL
  Initial communalities and eigenvalues are printed.
- EXTRACTION
  Extracted communalities and eigenvalues are printed.
- ROTATION
  Rotated communalities and eigenvalues are printed.
- CORRELATION
  The correlation matrix is printed.
- COVARIANCE
  The covariance matrix is printed.
- DET
  The determinant of the correlation or covariance matrix is printed.
- AIC
  The anti-image covariance and anti-image correlation matrices are printed.
- KMO
  The Kaiser-Meyer-Olkin measure of sampling adequacy and the Bartlett test of sphericity are printed.
- SIG
  The significance of the elements of the correlation matrix is printed.
- ALL
  All of the above are printed.
- DEFAULT
  Identical to INITIAL and EXTRACTION.
If /PLOT=EIGEN is given, then a "Scree" plot of the eigenvalues is
printed. This can be useful for visualizing the factors and deciding
which factors (components) should be retained.
The /FORMAT subcommand determines how data are to be displayed in
loading matrices. If SORT is specified, then the variables are sorted
in descending order of significance. If BLANK(N) is specified, then
coefficients whose absolute value is less than N are not printed. If
the keyword DEFAULT is specified, or if no /FORMAT subcommand is
specified, then no sorting is performed, and all coefficients are
printed.
You can use the /CRITERIA subcommand to specify how the number of
extracted factors (components) are chosen. If FACTORS(N) is
specified, where N is an integer, then N factors are extracted.
Otherwise, the MINEIGEN setting is used. MINEIGEN(L) requests that
all factors whose eigenvalues are greater than or equal to L are
extracted. The default value of L is 1. The ECONVERGE setting has
effect only when using iterative algorithms for factor extraction (such
as Principal Axis Factoring). ECONVERGE(DELTA) specifies that
iteration should cease when the maximum absolute value of the
communality estimate between one iteration and the previous is less than
DELTA. The default value of DELTA is 0.001.
The ITERATE(M) setting may appear any number of times and is used for
two different purposes. It sets the maximum number of iterations (M)
for convergence and also the maximum number of iterations for
rotation. Whether it affects convergence or rotation depends upon which
subcommand follows the ITERATE setting. If EXTRACTION follows,
it affects convergence. If ROTATION follows, it affects rotation. If
neither ROTATION nor EXTRACTION follows an ITERATE setting, then
the entire setting is ignored. The default value of M is 25.
The MISSING subcommand determines the handling of missing
values. If INCLUDE is set, then user-missing values are included
in the calculations, but system-missing values are not. If EXCLUDE is
set, which is the default, user-missing values are excluded as well as
system-missing values. If LISTWISE is set, then the entire case is
excluded from analysis whenever any variable specified in the
VARIABLES subcommand contains a missing value.
If PAIRWISE is set, then a case is considered missing only if
either of the values for the particular coefficient are missing. The
default is LISTWISE.
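Putting several of these subcommands together, a sketch of a typical invocation (the variable names v1 through v5 are hypothetical) might look like the following; note that /CRITERIA=ITERATE precedes /EXTRACTION so that it governs extraction convergence:

FACTOR /VARIABLES=v1 v2 v3 v4 v5
  /CRITERIA=MINEIGEN(1) ITERATE(50)
  /EXTRACTION=PAF
  /ROTATION=VARIMAX
  /PRINT=INITIAL EXTRACTION ROTATION.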
GLM
GLM DEPENDENT_VARS BY FIXED_FACTORS
[/METHOD = SSTYPE(TYPE)]
[/DESIGN = INTERACTION_0 [INTERACTION_1 [... INTERACTION_N]]]
[/INTERCEPT = {INCLUDE|EXCLUDE}]
[/MISSING = {INCLUDE|EXCLUDE}]
The GLM procedure can be used for fixed effects factorial Anova.
The DEPENDENT_VARS are the variables to be analysed. You may analyse
several variables in the same command in which case they should all
appear before the BY keyword.
The FIXED_FACTORS list must be one or more categorical variables.
Normally it does not make sense to enter a scalar variable in the
FIXED_FACTORS and doing so may cause PSPP to do a lot of unnecessary
processing.
The METHOD subcommand is used to change the method for producing
the sums of squares. Available values of TYPE are 1, 2 and 3. The
default is type 3.
You may specify a custom design using the DESIGN subcommand. The
design comprises a list of interactions where each interaction is a list
of variables separated by a *. For example the command
GLM subject BY sex age_group race
/DESIGN = age_group sex group age_group*sex age_group*race
specifies the model
subject = age_group + sex + race + age_group×sex + age_group×race
If no DESIGN subcommand is specified, then the
default is all possible combinations of the fixed factors. That is to
say
GLM subject BY sex age_group race
implies the model
subject = age_group + sex + race + age_group×sex + age_group×race + sex×race + age_group×sex×race
The MISSING subcommand determines the handling of missing values.
If INCLUDE is set then, for the purposes of GLM analysis, only
system-missing values are considered to be missing; user-missing
values are not regarded as missing. If EXCLUDE is set, which is the
default, then user-missing values are considered to be missing as well
as system-missing values. A case for which any dependent variable or
any factor variable has a missing value is excluded from the analysis.
LOGISTIC REGRESSION
LOGISTIC REGRESSION [VARIABLES =] DEPENDENT_VAR WITH PREDICTORS
[/CATEGORICAL = CATEGORICAL_PREDICTORS]
[{/NOCONST | /ORIGIN | /NOORIGIN }]
[/PRINT = [SUMMARY] [DEFAULT] [CI(CONFIDENCE)] [ALL]]
[/CRITERIA = [BCON(MIN_DELTA)] [ITERATE(MAX_ITERATIONS)]
[LCON(MIN_LIKELIHOOD_DELTA)] [EPS(MIN_EPSILON)]
[CUT(CUT_POINT)]]
[/MISSING = {INCLUDE|EXCLUDE}]
Bivariate Logistic Regression is used when you want to explain a dichotomous dependent variable in terms of one or more predictor variables.
The minimum command is
LOGISTIC REGRESSION y WITH x1 x2 ... xN.
Here, y is the dependent variable, which must be dichotomous and
x1 through xN are the predictor variables whose coefficients the
procedure estimates.
By default, a constant term is included in the model. Hence, the full model is $${\bf y} = b_0 + b_1 {\bf x_1} + b_2 {\bf x_2} + \dots + b_n {\bf x_n}.$$
Predictor variables which are categorical in nature should be listed
on the /CATEGORICAL subcommand. Simple variables as well as
interactions between variables may be listed here.
If you want a model without the constant term b_0, use the keyword
/ORIGIN. /NOCONST is a synonym for /ORIGIN.
An iterative Newton-Raphson procedure is used to fit the model. The
/CRITERIA subcommand is used to specify the stopping criteria of the
procedure, and other parameters. The value of CUT_POINT is used in the
classification table. It is the threshold above which predicted values
are considered to be 1. Values of CUT_POINT must lie in the range
[0,1]. During iterations, if any one of the stopping criteria are
satisfied, the procedure is considered complete. The stopping criteria
are:
- The number of iterations exceeds MAX_ITERATIONS. The default value of MAX_ITERATIONS is 20.
- The change in all coefficient estimates is less than MIN_DELTA. The default value of MIN_DELTA is 0.001.
- The magnitude of the change in the likelihood estimate is less than MIN_LIKELIHOOD_DELTA. The default value of MIN_LIKELIHOOD_DELTA is zero, which means that this criterion is disabled.
- The differential of the estimated probability for all cases is less than MIN_EPSILON. In other words, the probabilities are close to zero or one. The default value of MIN_EPSILON is 0.00000001.
The PRINT subcommand controls the display of optional statistics.
Currently there is one such option, CI, which indicates that the
confidence interval of the odds ratio should be displayed as well as its
value. CI should be followed by an integer in parentheses, to
indicate the confidence level of the desired confidence interval.
The MISSING subcommand determines the handling of missing
values. If INCLUDE is set, then user-missing values are included
in the calculations, but system-missing values are not. If EXCLUDE is
set, which is the default, user-missing values are excluded as well as
system-missing values.
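A fuller sketch, with hypothetical variable names, that treats region as categorical and requests 95% confidence intervals for the odds ratios:

LOGISTIC REGRESSION outcome WITH age income region
  /CATEGORICAL = region
  /PRINT = CI(95)
  /CRITERIA = ITERATE(20) BCON(0.001).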
MEANS
MEANS [TABLES =]
{VAR_LIST}
[ BY {VAR_LIST} [BY {VAR_LIST} [BY {VAR_LIST} ... ]]]
[ /{VAR_LIST}
[ BY {VAR_LIST} [BY {VAR_LIST} [BY {VAR_LIST} ... ]]] ]
[/CELLS = [MEAN] [COUNT] [STDDEV] [SEMEAN] [SUM] [MIN] [MAX] [RANGE]
[VARIANCE] [KURT] [SEKURT]
[SKEW] [SESKEW] [FIRST] [LAST]
[HARMONIC] [GEOMETRIC]
[DEFAULT]
[ALL]
[NONE] ]
[/MISSING = [INCLUDE] [DEPENDENT]]
You can use the MEANS command to calculate the arithmetic mean and
similar statistics, either for the dataset as a whole or for categories
of data.
The simplest form of the command is
MEANS V.
which calculates the mean, count and standard deviation for V. If you specify a grouping variable, for example
MEANS V BY G.
then the means, counts and standard deviations for V after having been grouped by G are calculated. Instead of the mean, count and standard deviation, you could specify the statistics in which you are interested:
MEANS X Y BY G
/CELLS = HARMONIC SUM MIN.
This example calculates the harmonic mean, the sum and the minimum values of X and Y grouped by G.
The CELLS subcommand specifies which statistics to calculate. The
available statistics are:
- MEAN: The arithmetic mean.
- COUNT: The count of the values.
- STDDEV: The standard deviation.
- SEMEAN: The standard error of the mean.
- SUM: The sum of the values.
- MIN: The minimum value.
- MAX: The maximum value.
- RANGE: The difference between the maximum and minimum values.
- VARIANCE: The variance.
- FIRST: The first value in the category.
- LAST: The last value in the category.
- SKEW: The skewness.
- SESKEW: The standard error of the skewness.
- KURT: The kurtosis.
- SEKURT: The standard error of the kurtosis.
- HARMONIC: The harmonic mean.
- GEOMETRIC: The geometric mean.
In addition, three special keywords are recognized:
- DEFAULT: This is the same as MEAN COUNT STDDEV.
- ALL: All of the above statistics are calculated.
- NONE: No statistics are calculated (only a summary is shown).
More than one "table" can be specified in a single command. Each
table is separated by a /. For example
MEANS TABLES =
c d e BY x
/a b BY x y
/f BY y BY z.
has three tables (the TABLES = is optional). The first table has
three dependent variables c, d, and e and a single categorical
variable x. The second table has two dependent variables a and
b, and two categorical variables x and y. The third table has a
single dependent variable f and a categorical variable formed by the
combination of y and z.
By default values are omitted from the analysis only if missing
values (either system missing or user missing) for any of the variables
directly involved in their calculation are encountered. This behaviour
can be modified with the /MISSING subcommand. Three options are
possible: TABLE, INCLUDE and DEPENDENT.
/MISSING = INCLUDE says that user missing values, either in the
dependent variables or in the categorical variables should be taken at
their face value, and not excluded.
/MISSING = DEPENDENT says that user missing values, in the
dependent variables should be taken at their face value, however cases
which have user missing values for the categorical variables should be
omitted from the calculation.
Example
The dataset in repairs.sav contains the mean time between failures
(mtbf) for a sample of artifacts produced by different factories and
trialed under different operating conditions. Since there are four
combinations of categorical variables, by simply looking at the list
of data, it would be hard to see how the scores vary for each category.
The syntax below shows one way of tabulating the mtbf in a way which
is easier to understand.
get file='repairs.sav'.
means tables = mtbf
by factory by environment.
The results are shown below. The figures shown indicate the mean,
standard deviation and number of samples in each category. These
figures however do not indicate whether the results are statistically
significant. For that, you would need to use the procedures ONEWAY,
GLM or T-TEST depending on the hypothesis being tested.
Case Processing Summary
┌────────────────────────────┬───────────────────────────────┐
│ │ Cases │
│ ├──────────┬─────────┬──────────┤
│ │ Included │ Excluded│ Total │
│ ├──┬───────┼─┬───────┼──┬───────┤
│ │ N│Percent│N│Percent│ N│Percent│
├────────────────────────────┼──┼───────┼─┼───────┼──┼───────┤
│mtbf * factory * environment│30│ 100.0%│0│ .0%│30│ 100.0%│
└────────────────────────────┴──┴───────┴─┴───────┴──┴───────┘
Report
┌────────────────────────────────────────────┬─────┬──┬──────────────┐
│Manufacturing facility Operating Environment│ Mean│ N│Std. Deviation│
├────────────────────────────────────────────┼─────┼──┼──────────────┤
│0 Temperate │ 7.26│ 9│ 2.57│
│ Tropical │ 7.47│ 7│ 2.68│
│ Total │ 7.35│16│ 2.53│
├────────────────────────────────────────────┼─────┼──┼──────────────┤
│1 Temperate │13.38│ 6│ 7.77│
│ Tropical │ 8.20│ 8│ 8.39│
│ Total │10.42│14│ 8.26│
├────────────────────────────────────────────┼─────┼──┼──────────────┤
│Total Temperate │ 9.71│15│ 5.91│
│ Tropical │ 7.86│15│ 6.20│
│ Total │ 8.78│30│ 6.03│
└────────────────────────────────────────────┴─────┴──┴──────────────┘
PSPP does not limit the number of variables for which you can
calculate statistics, nor the number of categorical variables per layer,
nor the number of layers. However, running MEANS on a large number
of variables, or with categorical variables containing a large number
of distinct values, may result in an extremely large output, which
will not be easy to interpret. So you should consider carefully which
variables to select for participation in the analysis.
NPAR TESTS
NPAR TESTS
nonparametric test subcommands
.
.
.
[ /STATISTICS={DESCRIPTIVES} ]
[ /MISSING={ANALYSIS, LISTWISE} {INCLUDE, EXCLUDE} ]
[ /METHOD=EXACT [ TIMER [(N)] ] ]
NPAR TESTS performs nonparametric tests. Nonparametric tests make
very few assumptions about the distribution of the data. One or more
tests may be specified by using the corresponding subcommand. If the
/STATISTICS subcommand is also specified, then summary statistics
are produced for each variable that is the subject of any test.
Certain tests may take a long time to execute, if an exact figure is
required. Therefore, by default asymptotic approximations are used
unless the subcommand /METHOD=EXACT is specified. Exact tests give
more accurate results, but may take an unacceptably long time to
perform. If the TIMER keyword is used, it sets a maximum time,
after which the test is abandoned, and a warning message printed. The
time, in minutes, should be specified in parentheses after the TIMER
keyword. If the TIMER keyword is given without this figure, then a
default value of 5 minutes is used.
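For example, a sketch (the test variable v is hypothetical) requesting an exact chi-square test that is abandoned after 10 minutes:

NPAR TESTS
  /CHISQUARE=v
  /METHOD=EXACT TIMER(10).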
- Binomial test
- Chi-square Test
- Cochran Q Test
- Friedman Test
- Kendall's W Test
- Kolmogorov-Smirnov Test
- Kruskal-Wallis Test
- Mann-Whitney U Test
- McNemar Test
- Median Test
- Runs Test
- Sign Test
- Wilcoxon Matched Pairs Signed Ranks Test
Binomial test
[ /BINOMIAL[(P)]=VAR_LIST[(VALUE1[, VALUE2])] ]
The /BINOMIAL subcommand compares the observed distribution of a
dichotomous variable with that of a binomial distribution. The variable
P specifies the test proportion of the binomial distribution. The
default value of 0.5 is assumed if P is omitted.
If a single value appears after the variable list, then that value is used as the threshold to partition the observed values. Values less than or equal to the threshold value form the first category. Values greater than the threshold form the second category.
If two values appear after the variable list, then they are used as the values which a variable must take to be in the respective category. Cases for which a variable takes a value equal to neither of the specified values, take no part in the test for that variable.
If no values appear, then the variable must assume dichotomous values. If more than two distinct, non-missing values for a variable under test are encountered then an error occurs.
If the test proportion is equal to 0.5, then a two tailed test is reported. For any other test proportion, a one tailed test is reported. For one tailed tests, if the test proportion is less than or equal to the observed proportion, then the significance of observing the observed proportion or more is reported. If the test proportion is more than the observed proportion, then the significance of observing the observed proportion or less is reported. That is to say, the test is always performed in the observed direction.
PSPP uses a very precise approximation to the gamma function to compute the binomial significance. Thus, exact results are reported even for very large sample sizes.
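As an illustration, the following sketch tests whether the observed split of a hypothetical dichotomous variable smoker, between the values 0 and 1, is consistent with a test proportion of 0.6 (both the variable name and its coding are assumptions for this example):

```
npar tests
        /binomial(0.6) = smoker(0,1).
```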
Chi-square Test
[ /CHISQUARE=VAR_LIST[(LO,HI)] [/EXPECTED={EQUAL|F1, F2 ... FN}] ]
The /CHISQUARE subcommand produces a chi-square statistic for the
differences between the expected and observed frequencies of the
categories of a variable. Optionally, a range of values may appear
after the variable list. If a range is given, then non-integer values
are truncated, and values outside the specified range are excluded
from the analysis.
The /EXPECTED subcommand specifies the expected values of each
category. There must be exactly one non-zero expected value for each
observed category, or the EQUAL keyword must be specified. You may
use the notation N*F to specify N consecutive expected categories all
taking a frequency of F. The frequencies given are proportions, not
absolute frequencies. The sum of the frequencies need not be 1. If no
/EXPECTED subcommand is given, then equal frequencies are expected.
Chi-square Example
A researcher wishes to investigate whether there are an equal number of
persons of each sex in a population. The sample chosen for investigation
is that from the physiology.sav dataset. The null hypothesis for the
test is that the population comprises an equal number of males and
females. The analysis is performed as shown below:
get file='physiology.sav'.
npar test
/chisquare=sex.
There is only one test variable: sex. The other variables in the dataset are ignored.
In the output, shown below, the summary box shows that in the sample there are more males than females. However, the significance of the chi-square result is greater than 0.05, the most commonly used significance level, so there is not enough evidence to reject the null hypothesis. One must conclude that the evidence does not indicate an imbalance of the sexes in the population.
Sex of subject
┌──────┬──────────┬──────────┬────────┐
│Value │Observed N│Expected N│Residual│
├──────┼──────────┼──────────┼────────┤
│Male │ 22│ 20.00│ 2.00│
│Female│ 18│ 20.00│ ─2.00│
│Total │ 40│ │ │
└──────┴──────────┴──────────┴────────┘
Test Statistics
┌──────────────┬──────────┬──┬───────────┐
│ │Chi─square│df│Asymp. Sig.│
├──────────────┼──────────┼──┼───────────┤
│Sex of subject│ .40│ 1│ .527│
└──────────────┴──────────┴──┴───────────┘
Cochran Q Test
[ /COCHRAN = VAR_LIST ]
The Cochran Q test is used to test for differences between three or
more groups. Each variable in VAR_LIST must in all cases assume exactly
two distinct values (other than missing values).
The value of Q is displayed along with its asymptotic significance based on a chi-square distribution.
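For example, assuming three hypothetical dichotomous variables q1, q2 and q3, each coded 0 or 1, the test could be requested as:

```
npar tests
        /cochran = q1 q2 q3.
```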
Friedman Test
[ /FRIEDMAN = VAR_LIST ]
The Friedman test is used to test for differences between repeated measures when there is no indication that the distributions are normally distributed.
A list of variables which contain the measured data must be given. The procedure prints the sum of ranks for each variable, the test statistic and its significance.
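A minimal sketch, assuming three hypothetical variables t1, t2 and t3 holding repeated measurements of the same cases:

```
npar tests
        /friedman = t1 t2 t3.
```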
Kendall's W Test
[ /KENDALL = VAR_LIST ]
The Kendall test investigates whether an arbitrary number of related samples come from the same population. It is identical to the Friedman test except that the additional statistic W, Kendall's Coefficient of Concordance, is printed. W has the range [0,1]: a value of zero indicates no agreement between the samples, whereas a value of unity indicates complete agreement.
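For instance, the following sketch (with hypothetical variables judge1, judge2 and judge3 holding each judge's scores for the same set of cases) prints the Friedman statistics together with Kendall's W:

```
npar tests
        /kendall = judge1 judge2 judge3.
```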
Kolmogorov-Smirnov Test
[ /KOLMOGOROV-SMIRNOV ({NORMAL [MU, SIGMA], UNIFORM [MIN, MAX], POISSON [LAMBDA], EXPONENTIAL [SCALE] }) = VAR_LIST ]
The one sample Kolmogorov-Smirnov subcommand is used to test whether or not a dataset is drawn from a particular distribution. Four distributions are supported: normal, uniform, Poisson and exponential.
Ideally you should provide the parameters of the distribution against
which you wish to test the data. For example, with the normal
distribution the mean (MU) and standard deviation (SIGMA) should
be given; with the uniform distribution, the minimum (MIN) and
maximum (MAX) value should be provided. However, if the parameters
are omitted, they are imputed from the data. Imputing the parameters
reduces the power of the test, so it should be avoided if possible.
In the following example, two variables score and age are tested to
see if they follow a normal distribution with a mean of 3.5 and a
standard deviation of 2.0.
NPAR TESTS
/KOLMOGOROV-SMIRNOV (NORMAL 3.5 2.0) = score age.
If the variables need to be tested against different distributions,
then a separate subcommand must be used. For example, the following
syntax tests score against a normal distribution with a mean of 3.5 and
a standard deviation of 2.0, whilst age is tested against a normal
distribution with a mean of 40 and a standard deviation of 1.5.
NPAR TESTS
/KOLMOGOROV-SMIRNOV (NORMAL 3.5 2.0) = score
/KOLMOGOROV-SMIRNOV (NORMAL 40 1.5) = age.
The abbreviated subcommand K-S may be used in place of
KOLMOGOROV-SMIRNOV.
Kruskal-Wallis Test
[ /KRUSKAL-WALLIS = VAR_LIST BY VAR (LOWER, UPPER) ]
The Kruskal-Wallis test is used to compare data from an arbitrary
number of populations. It does not assume normality. The data to be
compared are specified by VAR_LIST. The categorical variable
determining the groups to which the data belongs is given by VAR.
The limits LOWER and UPPER specify the valid range of VAR. If
UPPER is smaller than LOWER, then PSPP assumes their values to
be reversed. Any cases for which VAR falls outside [LOWER, UPPER]
are ignored.
The mean rank of each group as well as the chi-squared value and
significance of the test are printed. The abbreviated subcommand K-W
may be used in place of KRUSKAL-WALLIS.
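For example, assuming a hypothetical test variable score and a grouping variable group taking the values 1 to 3, the test could be requested using the abbreviated form of the subcommand:

```
npar tests
        /k-w = score by group (1, 3).
```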
Mann-Whitney U Test
[ /MANN-WHITNEY = VAR_LIST BY var (GROUP1, GROUP2) ]
The Mann-Whitney subcommand is used to test whether two groups of
data come from different populations. The variables to be tested should
be specified in VAR_LIST and the grouping variable, which determines to
which group the test variables belong, in VAR. VAR may be either a
numeric or a string variable. GROUP1 and GROUP2 specify the two values
of VAR which determine the groups of the test data. Cases for which the
VAR value is neither GROUP1 nor GROUP2 are ignored.
The value of the Mann-Whitney U statistic, the Wilcoxon W, and the
significance are printed. You may abbreviate the subcommand
MANN-WHITNEY to M-W.
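For example, the following sketch tests whether a hypothetical variable score differs between the two groups defined by the values 0 and 1 of a grouping variable sex (the names and coding are assumptions for this example):

```
npar tests
        /m-w = score by sex (0, 1).
```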
McNemar Test
[ /MCNEMAR VAR_LIST [ WITH VAR_LIST [ (PAIRED) ]]]
Use McNemar's test to analyse the significance of the difference between pairs of correlated proportions.
If the WITH keyword is omitted, then tests for all combinations of
the listed variables are performed. If the WITH keyword is given, and
the (PAIRED) keyword is also given, then the number of variables
preceding WITH must be the same as the number following it. In this
case, tests for each respective pair of variables are performed. If the
WITH keyword is given, but the (PAIRED) keyword is omitted, then
tests for each combination of variable preceding WITH against variable
following WITH are performed.
The data in each variable must be dichotomous. If a variable under test has more than two distinct non-missing values, an error occurs and the test is not run.
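For example, assuming two hypothetical dichotomous variables before and after, recording a yes/no response before and after some intervention, a paired test is requested as:

```
npar tests
        /mcnemar = before with after (paired).
```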
Median Test
[ /MEDIAN [(VALUE)] = VAR_LIST BY VARIABLE (VALUE1, VALUE2) ]
The median test is used to test whether independent samples come from
populations with a common median. The median of the populations against
which the samples are to be tested may be given in parentheses
immediately after the /MEDIAN subcommand. If it is not given, the
median is imputed from the union of all the samples.
The variables of the samples to be tested should immediately follow
the = sign. The keyword BY must come next, and then the grouping
variable. Two values in parentheses should follow. If the first
value is greater than the second, then a 2-sample test is performed
using these two values to determine the groups. If, however, the first
value is less than the second, then a k-sample test is conducted
and the group values used are all values encountered which lie in the
range [VALUE1,VALUE2].
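As an illustration, the following sketch (with hypothetical variables score and group) performs a k-sample test over the group values 1 to 3; since no value is given after /MEDIAN, the median is imputed from the data:

```
npar tests
        /median = score by group (1, 3).
```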
Runs Test
[ /RUNS ({MEAN, MEDIAN, MODE, VALUE}) = VAR_LIST ]
The /RUNS subcommand tests whether a data sequence is randomly
ordered.
It works by examining the number of times a variable's value crosses
a given threshold. The desired threshold must be specified within
parentheses. It may either be specified as a number or as one of
MEAN, MEDIAN or MODE. Following the threshold specification comes
the list of variables whose values are to be tested.
The subcommand shows the number of runs and the asymptotic significance based on the length of the data.
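For example, the following sketch (with a hypothetical variable outcome) tests for randomness using the median as the threshold:

```
npar tests
        /runs (median) = outcome.
```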
Sign Test
[ /SIGN VAR_LIST [ WITH VAR_LIST [ (PAIRED) ]]]
The /SIGN subcommand tests for differences between medians of the
variables listed. The test does not make any assumptions about the
distribution of the data.
If the WITH keyword is omitted, then tests for all combinations of
the listed variables are performed. If the WITH keyword is given, and
the (PAIRED) keyword is also given, then the number of variables
preceding WITH must be the same as the number following it. In this
case, tests for each respective pair of variables are performed. If the
WITH keyword is given, but the (PAIRED) keyword is omitted, then
tests for each combination of variable preceding WITH against variable
following WITH are performed.
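A minimal sketch, assuming hypothetical variables before and after measured on the same cases:

```
npar tests
        /sign = before with after (paired).
```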
Wilcoxon Matched Pairs Signed Ranks Test
[ /WILCOXON VAR_LIST [ WITH VAR_LIST [ (PAIRED) ]]]
The /WILCOXON subcommand tests for differences between medians of
the variables listed. The test does not make any assumptions about the
variances of the samples. It does however assume that the distribution
is symmetrical.
If the WITH keyword is omitted, then tests for all combinations of
the listed variables are performed. If the WITH keyword is given, and
the (PAIRED) keyword is also given, then the number of variables
preceding WITH must be the same as the number following it. In this
case, tests for each respective pair of variables are performed. If the
WITH keyword is given, but the (PAIRED) keyword is omitted, then
tests for each combination of variable preceding WITH against variable
following WITH are performed.
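For example, a paired test on two hypothetical repeated measures x1 and x2 might be written as:

```
npar tests
        /wilcoxon = x1 with x2 (paired).
```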
T-TEST
T-TEST
/MISSING={ANALYSIS,LISTWISE} {EXCLUDE,INCLUDE}
/CRITERIA=CI(CONFIDENCE)
(One Sample mode.)
TESTVAL=TEST_VALUE
/VARIABLES=VAR_LIST
(Independent Samples mode.)
GROUPS=var(VALUE1 [, VALUE2])
/VARIABLES=VAR_LIST
(Paired Samples mode.)
PAIRS=VAR_LIST [WITH VAR_LIST [(PAIRED)] ]
The T-TEST procedure outputs tables used in testing hypotheses
about means. It operates in one of three modes: One Sample mode,
Independent Samples mode, and Paired Samples mode.
Each of these modes is described in more detail below. There are two optional subcommands which are common to all modes.
The /CRITERIA subcommand tells PSPP the confidence interval used in
the tests. The default value is 0.95.
The MISSING subcommand determines the handling of missing
values. If INCLUDE is set, then user-missing values are included
in the calculations, but system-missing values are not. If EXCLUDE is
set, which is the default, user-missing values are excluded as well as
system-missing values.
If LISTWISE is set, then the entire case is excluded from analysis
whenever any variable specified in the /VARIABLES, /PAIRS or
/GROUPS subcommands contains a missing value. If ANALYSIS is set,
then missing values are excluded only in the analysis for which they
would be needed. This is the default.
One Sample Mode
The TESTVAL subcommand invokes the One Sample mode. This mode is used
to test a population mean against a hypothesized mean. The value given
to the TESTVAL subcommand is the value against which you wish to test.
In this mode, you must also use the /VARIABLES subcommand to tell PSPP
which variables you wish to test.
Example
A researcher wishes to know whether the weight of persons in a
population is different from the national average. The samples are
drawn from the population under investigation and recorded in the file
physiology.sav. From the Department of Health, she knows that the
national average weight of healthy adults is 76.8kg. Accordingly the
TESTVAL is set to 76.8. The null hypothesis therefore is that the
mean weight of the population from which the sample was drawn is
76.8kg.
As previously noted, one sample in the dataset contains a weight
value which is clearly incorrect. So this is excluded from the
analysis using the SELECT command.
GET FILE='physiology.sav'.
SELECT IF (weight > 0).
T-TEST TESTVAL = 76.8
/VARIABLES = weight.
The output below shows that the mean of our sample differs from the test value by -1.40kg. However, the significance is very high (0.610), so one cannot reject the null hypothesis; there is not enough evidence to suggest that the mean weight of the persons in our population is different from 76.8kg.
One─Sample Statistics
┌───────────────────┬──┬─────┬──────────────┬─────────┐
│ │ N│ Mean│Std. Deviation│S.E. Mean│
├───────────────────┼──┼─────┼──────────────┼─────────┤
│Weight in kilograms│39│75.40│ 17.08│ 2.73│
└───────────────────┴──┴─────┴──────────────┴─────────┘
One─Sample Test
┌──────────────┬──────────────────────────────────────────────────────────────┐
│ │ Test Value = 76.8 │
│ ├────┬──┬────────────┬────────────┬────────────────────────────┤
│ │ │ │ │ │ 95% Confidence Interval of │
│ │ │ │ │ │ the Difference │
│ │ │ │ Sig. (2─ │ Mean ├──────────────┬─────────────┤
│ │ t │df│ tailed) │ Difference │ Lower │ Upper │
├──────────────┼────┼──┼────────────┼────────────┼──────────────┼─────────────┤
│Weight in │─.51│38│ .610│ ─1.40│ ─6.94│ 4.13│
│kilograms │ │ │ │ │ │ │
└──────────────┴────┴──┴────────────┴────────────┴──────────────┴─────────────┘
Independent Samples Mode
The GROUPS subcommand invokes Independent Samples mode or 'Groups'
mode. This mode is used to test whether two groups of values have the
same population mean. In this mode, you must also use the /VARIABLES
subcommand to tell PSPP the dependent variables you wish to test.
The variable given in the GROUPS subcommand is the independent
variable which determines to which group the samples belong. The values
in parentheses are the specific values of the independent variable for
each group. If the parentheses are omitted and no values are given, the
default values of 1.0 and 2.0 are assumed.
If the independent variable is numeric, it is acceptable to specify
only one value inside the parentheses. If you do this, cases where the
independent variable is greater than or equal to this value belong to
the first group, and cases less than this value belong to the second
group. When using this form of the GROUPS subcommand, missing values
in the independent variable are excluded on a listwise basis, regardless
of whether /MISSING=LISTWISE was specified.
Example
A researcher wishes to know whether within a population, adult males are
taller than adult females. The samples are drawn from the population
under investigation and recorded in the file physiology.sav.
As previously noted, one sample in the dataset contains a height value
which is clearly incorrect. So this is excluded from the analysis
using the SELECT command.
get file='physiology.sav'.
select if (height >= 200).
t-test /variables = height
/groups = sex(0,1).
The null hypothesis is that both males and females are on average of equal height.
From the output, shown below, one can clearly see that the sample mean height is greater for males than for females. However in order to see if this is a significant result, one must consult the T-Test table.
The T-Test table contains two rows: one for use if the variance of the samples in each group may be safely assumed to be equal, and a second for use if the variances in each group may not be safely assumed to be equal.
In this case however, both rows show a 2-tailed significance less than 0.001 and one must therefore reject the null hypothesis and conclude that within the population the mean height of males and of females are unequal.
Group Statistics
┌────────────────────────────┬──┬───────┬──────────────┬─────────┐
│ Group │ N│ Mean │Std. Deviation│S.E. Mean│
├────────────────────────────┼──┼───────┼──────────────┼─────────┤
│Height in millimeters Male │22│1796.49│ 49.71│ 10.60│
│ Female│17│1610.77│ 25.43│ 6.17│
└────────────────────────────┴──┴───────┴──────────────┴─────────┘
Independent Samples Test
┌─────────────────────┬──────────┬──────────────────────────────────────────
│ │ Levene's │
│ │ Test for │
│ │ Equality │
│ │ of │
│ │ Variances│ T─Test for Equality of Means
│ ├────┬─────┼─────┬─────┬───────┬──────────┬──────────┐
│ │ │ │ │ │ │ │ │
│ │ │ │ │ │ │ │ │
│ │ │ │ │ │ │ │ │
│ │ │ │ │ │ │ │ │
│ │ │ │ │ │ Sig. │ │ │
│ │ │ │ │ │ (2─ │ Mean │Std. Error│
│ │ F │ Sig.│ t │ df │tailed)│Difference│Difference│
├─────────────────────┼────┼─────┼─────┼─────┼───────┼──────────┼──────────┤
│Height in Equal │ .97│ .331│14.02│37.00│ .000│ 185.72│ 13.24│
│millimeters variances│ │ │ │ │ │ │ │
│ assumed │ │ │ │ │ │ │ │
│ Equal │ │ │15.15│32.71│ .000│ 185.72│ 12.26│
│ variances│ │ │ │ │ │ │ │
│ not │ │ │ │ │ │ │ │
│ assumed │ │ │ │ │ │ │ │
└─────────────────────┴────┴─────┴─────┴─────┴───────┴──────────┴──────────┘
┌─────────────────────┬─────────────┐
│ │ │
│ │ │
│ │ │
│ │ │
│ │ │
│ ├─────────────┤
│ │ 95% │
│ │ Confidence │
│ │ Interval of │
│ │ the │
│ │ Difference │
│ ├──────┬──────┤
│ │ Lower│ Upper│
├─────────────────────┼──────┼──────┤
│Height in Equal │158.88│212.55│
│millimeters variances│ │ │
│ assumed │ │ │
│ Equal │160.76│210.67│
│ variances│ │ │
│ not │ │ │
│ assumed │ │ │
└─────────────────────┴──────┴──────┘
Paired Samples Mode
The PAIRS subcommand introduces Paired Samples mode. Use this mode
when repeated measures have been taken from the same samples. If the
WITH keyword is omitted, then tables for all combinations of variables
given in the PAIRS subcommand are generated. If the WITH keyword is
given, and the (PAIRED) keyword is also given, then the number of
variables preceding WITH must be the same as the number following it.
In this case, tables for each respective pair of variables are
generated. In the event that the WITH keyword is given, but the
(PAIRED) keyword is omitted, then tables for each combination of
variable preceding WITH against variable following WITH are
generated.
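For example, the following sketch (assuming hypothetical variables before and after holding two measurements of the same cases) generates a single table for the pair:

```
t-test
        pairs = before with after (paired).
```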
ONEWAY
ONEWAY
[/VARIABLES = ] VAR_LIST BY VAR
/MISSING={ANALYSIS,LISTWISE} {EXCLUDE,INCLUDE}
/CONTRAST= VALUE1 [, VALUE2] ... [,VALUEN]
/STATISTICS={DESCRIPTIVES,HOMOGENEITY}
/POSTHOC={BONFERRONI, GH, LSD, SCHEFFE, SIDAK, TUKEY, ALPHA ([VALUE])}
The ONEWAY procedure performs a one-way analysis of variance of
variables factored by a single independent variable. It is used to
compare the means of a population divided into more than two groups.
The dependent variables to be analysed should be given in the
VARIABLES subcommand. The list of variables must be followed by the
BY keyword and the name of the independent (or factor) variable.
You can use the STATISTICS subcommand to tell PSPP to display
ancillary information. The options accepted are:
- DESCRIPTIVES: Displays descriptive statistics about the groups factored by the independent variable.
- HOMOGENEITY: Displays the Levene test of Homogeneity of Variance for the variables and their groups.
The CONTRAST subcommand is used when you anticipate certain
differences between the groups. The subcommand must be followed by a
list of numerals which are the coefficients of the groups to be tested.
The number of coefficients must correspond to the number of distinct
groups (or values of the independent variable). If the total sum of the
coefficients is not zero, then PSPP will display a warning, but will
proceed with the analysis. The CONTRAST subcommand may be given up to
10 times in order to specify different contrast tests.
The MISSING subcommand defines how missing values are handled. If
LISTWISE is specified, then cases which have missing values for the
independent variable or any dependent variable are ignored. If
ANALYSIS is specified, then cases are ignored if the independent
variable is missing or if the dependent variable currently being
analysed is missing. The default is ANALYSIS. A setting of EXCLUDE
means that variables whose values are user-missing are to be excluded
from the analysis. A setting of INCLUDE means they are to be
included. The default is EXCLUDE.
Using the POSTHOC subcommand you can perform multiple pairwise
comparisons on the data. The following comparison methods are
available:
- LSD: Least Significant Difference.
- TUKEY: Tukey Honestly Significant Difference.
- BONFERRONI: Bonferroni test.
- SCHEFFE: Scheffé's test.
- SIDAK: Sidak test.
- GH: The Games-Howell test.
Use the optional syntax ALPHA(VALUE) to indicate that ONEWAY should
perform the posthoc tests at a confidence level of VALUE. If
ALPHA(VALUE) is not specified, then the confidence level used is 0.05.
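Putting these subcommands together, a sketch assuming a hypothetical dependent variable score and a factor variable group with three distinct values might look like this (the contrast coefficients sum to zero, one per group):

```
oneway /variables = score by group
        /statistics = descriptives homogeneity
        /contrast = -2 1 1
        /posthoc = tukey alpha (0.01).
```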
QUICK CLUSTER
QUICK CLUSTER VAR_LIST
[/CRITERIA=CLUSTERS(K) [MXITER(MAX_ITER)] [CONVERGE(EPSILON)] [NOINITIAL] [NOUPDATE]]
[/MISSING={EXCLUDE,INCLUDE} {LISTWISE, PAIRWISE}]
[/PRINT={INITIAL} {CLUSTER}]
[/SAVE[=[CLUSTER[(MEMBERSHIP_VAR)]] [DISTANCE[(DISTANCE_VAR)]]]]
The QUICK CLUSTER command performs k-means clustering on the
dataset. This is useful when you wish to allocate cases into clusters
of similar values and you already know the number of clusters.
The minimum specification is QUICK CLUSTER followed by the names of
the variables which contain the cluster data. Normally you will also
want to specify /CRITERIA=CLUSTERS(K) where K is the number of
clusters. If this is not specified, then K defaults to 2.
If you use /CRITERIA=NOINITIAL then a naive algorithm to select the
initial clusters is used. This will provide for faster execution but
less well separated initial clusters and hence possibly an inferior
final result.
QUICK CLUSTER uses an iterative algorithm to select the cluster
centers. The subcommand /CRITERIA=MXITER(MAX_ITER) sets the maximum
number of iterations. During classification, PSPP will continue
iterating until MAX_ITER iterations have been done or the
convergence criterion (see below) is fulfilled. The default value of
MAX_ITER is 2.
If however, you specify /CRITERIA=NOUPDATE then after selecting the
initial centers, no further update to the cluster centers is done. In
this case, MAX_ITER, if specified, is ignored.
The subcommand /CRITERIA=CONVERGE(EPSILON) is used to set the
convergence criterion. The value of convergence criterion is
EPSILON times the minimum distance between the initial cluster
centers. Iteration stops when the mean cluster distance between one
iteration and the next is less than the convergence criterion. The
default value of EPSILON is zero.
The MISSING subcommand determines the handling of missing
variables. If INCLUDE is set, then user-missing values are considered
at their face value and not as missing values. If EXCLUDE is set,
which is the default, user-missing values are excluded as well as
system-missing values.
If LISTWISE is set, then the entire case is excluded from the
analysis whenever any of the clustering variables contains a missing
value. If PAIRWISE is set, then a case is considered missing only if
all the clustering variables contain missing values. Otherwise it is
clustered on the basis of the non-missing values. The default is
LISTWISE.
The PRINT subcommand requests additional output to be printed. If
INITIAL is set, then the initial cluster memberships will be printed.
If CLUSTER is set, the cluster memberships of the individual cases are
displayed (potentially generating lengthy output).
You can specify the subcommand SAVE to ask that each case's cluster
membership and the Euclidean distance between the case and its cluster
center be saved to a new variable in the active dataset. To save the
cluster membership use the CLUSTER keyword and to save the distance
use the DISTANCE keyword. Each keyword may optionally be followed by
a variable name in parentheses to specify the new variable which is to
contain the saved parameter. If no variable name is specified, then
PSPP will create one.
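For example, the following sketch (with hypothetical clustering variables x and y) allocates the cases to three clusters, allows up to 20 iterations, and saves each case's cluster membership in a new variable named grp:

```
quick cluster x y
        /criteria = clusters(3) mxiter(20)
        /save = cluster(grp).
```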
RANK
RANK
[VARIABLES=] VAR_LIST [{A,D}] [BY VAR_LIST]
/TIES={MEAN,LOW,HIGH,CONDENSE}
/FRACTION={BLOM,TUKEY,VW,RANKIT}
/PRINT[={YES,NO}]
/MISSING={EXCLUDE,INCLUDE}
/RANK [INTO VAR_LIST]
/NTILES(k) [INTO VAR_LIST]
/NORMAL [INTO VAR_LIST]
/PERCENT [INTO VAR_LIST]
/RFRACTION [INTO VAR_LIST]
/PROPORTION [INTO VAR_LIST]
/N [INTO VAR_LIST]
/SAVAGE [INTO VAR_LIST]
The RANK command ranks variables and stores the results into new
variables.
The VARIABLES subcommand, which is mandatory, specifies one or more
variables whose values are to be ranked. After each variable, A or
D may appear, indicating that the variable is to be ranked in
ascending or descending order. Ascending is the default. If a BY
keyword appears, it should be followed by a list of variables which are
to serve as group variables. In this case, the cases are gathered into
groups, and ranks calculated for each group.
The TIES subcommand specifies how tied values are to be treated.
The default is to take the mean value of all the tied cases.
The FRACTION subcommand specifies how proportional ranks are to be
calculated. This only has any effect if the NORMAL or PROPORTION
rank functions are requested.
The PRINT subcommand may be used to specify that a summary of the
rank variables created should appear in the output.
The function subcommands are RANK, NTILES, NORMAL, PERCENT,
RFRACTION, PROPORTION, and SAVAGE. Any number of function
subcommands may appear. If none are given, then the default is RANK.
The NTILES subcommand must take an integer specifying the number of
partitions into which values should be ranked. Each subcommand may be
followed by the INTO keyword and a list of variables which are the
variables to be created and receive the rank scores. There may be as
many variables specified as there are variables named on the
VARIABLES subcommand. If fewer are specified, then names for the
remaining variables are created automatically.
The MISSING subcommand determines how user missing values are to be
treated. A setting of EXCLUDE means that variables whose values are
user-missing are to be excluded from the rank scores. A setting of
INCLUDE means they are to be included. The default is EXCLUDE.
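For example, the following sketch ranks a hypothetical variable score in descending order within each value of a grouping variable group, storing the ranks in a new variable named score_rank:

```
rank variables = score (d) by group
        /ties = low
        /rank into score_rank.
```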
REGRESSION
The REGRESSION procedure fits linear models to data via least-squares
estimation. The procedure is appropriate for data which satisfy those
assumptions typical in linear regression:
- The data set contains \(n\) observations of a dependent variable, say \(y_1,...,y_n\), and \(n\) observations of one or more explanatory variables. Let \(x_{11}, x_{12}, ..., x_{1n}\) denote the \(n\) observations of the first explanatory variable; \(x_{21},...,x_{2n}\) denote the \(n\) observations of the second explanatory variable; \(x_{k1},...,x_{kn}\) denote the \(n\) observations of the kth explanatory variable.
- The dependent variable \(y\) has the following relationship to the explanatory variables: \(y_i = b_0 + b_1 x_{1i} + ... + b_k x_{ki} + z_i\) where \(b_0, b_1, ..., b_k\) are unknown coefficients, and \(z_1,...,z_n\) are independent, normally distributed "noise" terms with mean zero and common variance. The noise, or "error" terms are unobserved. This relationship is called the "linear model".
The REGRESSION procedure estimates the coefficients \(b_0,...,b_k\) and produces output relevant to inferences for the linear model.
Syntax
REGRESSION
/VARIABLES=VAR_LIST
/DEPENDENT=VAR_LIST
/STATISTICS={ALL, DEFAULTS, R, COEFF, ANOVA, BCOV, CI[(CONF)], TOL}
{ /ORIGIN | /NOORIGIN }
/SAVE={PRED, RESID}
The REGRESSION procedure reads the active dataset and outputs
statistics relevant to the linear model specified by the user.
The VARIABLES subcommand, which is required, specifies the list of
variables to be analyzed. The DEPENDENT subcommand, which is also
required, specifies the dependent variable of the linear model. All
variables listed in the VARIABLES subcommand, but not listed in the
DEPENDENT subcommand, are treated as explanatory variables in the
linear model.
All other subcommands are optional:
The STATISTICS subcommand specifies which statistics are to be
displayed. The following keywords are accepted:
ALL
All of the statistics below.
R
The ratio of the sums of squares due to the model to the total sums of squares for the dependent variable.
COEFF
A table containing the estimated model coefficients and their standard errors.
CI (CONF)
This item is only relevant if COEFF has also been selected. It specifies that the confidence interval for the coefficients should be printed. The optional value CONF, which must be in parentheses, is the desired confidence level expressed as a percentage.
ANOVA
Analysis of variance table for the model.
BCOV
The covariance matrix for the estimated model coefficients.
TOL
The variance inflation factor and its reciprocal. This has no effect unless COEFF is also given.
DEFAULTS
The same as if R, COEFF, and ANOVA had been selected. This is what you get if the /STATISTICS subcommand is not specified, or if it is specified without any parameters.
The ORIGIN and NOORIGIN subcommands are mutually exclusive.
ORIGIN indicates that the regression should be performed through the
origin. You should use this option if, and only if, you have reason to
believe that the regression does indeed pass through the origin, that
is to say, the value \(b_0\) above is zero. The default is NOORIGIN.
The SAVE subcommand causes PSPP to save the residuals or predicted
values from the fitted model to the active dataset. PSPP will store the
residuals in a variable called RES1 if no such variable exists, RES2
if RES1 already exists, RES3 if RES1 and RES2 already exist,
etc. It will choose the name of the variable for the predicted values
similarly, but with PRED as a prefix. When SAVE is used, PSPP
ignores TEMPORARY, treating temporary transformations as permanent.
Example
The following PSPP syntax will generate the default output and save the predicted values and residuals to the active dataset.
title 'Demonstrate REGRESSION procedure'.
data list / v0 1-2 (A) v1 v2 3-22 (10).
begin data.
b 7.735648 -23.97588
b 6.142625 -19.63854
a 7.651430 -25.26557
c 6.125125 -16.57090
a 8.245789 -25.80001
c 6.031540 -17.56743
a 9.832291 -28.35977
c 5.343832 -16.79548
a 8.838262 -29.25689
b 6.200189 -18.58219
end data.
list.
regression /variables=v0 v1 v2 /statistics defaults /dependent=v2
/save pred resid /method=enter.
RELIABILITY
RELIABILITY
/VARIABLES=VAR_LIST
/SCALE (NAME) = {VAR_LIST, ALL}
/MODEL={ALPHA, SPLIT[(N)]}
/SUMMARY={TOTAL,ALL}
/MISSING={EXCLUDE,INCLUDE}
The RELIABILITY command performs reliability analysis on the data.
The VARIABLES subcommand is required. It determines the set of
variables upon which analysis is to be performed.
The SCALE subcommand determines the variables for which reliability
is to be calculated. If SCALE is omitted, then the analysis is
performed on all variables named in the VARIABLES subcommand.
Optionally, the NAME parameter may be specified to set a string name
for the scale.
The MODEL subcommand determines the type of analysis. If ALPHA
is specified, then Cronbach's Alpha is calculated for the scale. If the
model is SPLIT, then the variables are divided into 2 subsets. An
optional parameter N may be given to specify how many variables should
be in the first subset. If N is omitted, then it defaults to one half
of the variables in the scale, or one half minus one if there is an
odd number of variables. The default model is ALPHA.
By default, any cases with user-missing or system-missing values for
any variables given in the VARIABLES subcommand are omitted from the
analysis. The MISSING subcommand determines whether user-missing
values are included or excluded in the analysis.
The SUMMARY subcommand determines the type of summary analysis to
be performed. Currently there is only one type: SUMMARY=TOTAL, which
displays per-item analysis tested against the totals.
Example
Before analysing the results of a survey—particularly for a multiple choice survey—it is desirable to know whether the respondents have considered their answers or simply provided random answers.
In the following example the survey results from the file hotel.sav
are used. All five survey questions are included in the reliability
analysis. However, before running the analysis, the data must be
preprocessed. An examination of the survey questions reveals that two
questions, v3 and v5, are negatively worded, whereas the others
are positively worded. All questions must be based upon the same
scale for the analysis to be meaningful. One could use the
RECODE command; however, a simpler way is to use
COMPUTE, as is done in the syntax below.
get file="hotel.sav".
* Recode V3 and V5 inverting the sense of the values.
compute v3 = 6 - v3.
compute v5 = 6 - v5.
reliability
/variables= all
/model=alpha.
In this case, all variables in the data set are used, so we can use
the special keyword ALL.
The output, below, shows that Cronbach's Alpha is 0.11, a value normally considered too low to indicate consistency within the data. This is possibly due to the small number of survey questions. The survey should be redesigned before its results are put to serious use.
Scale: ANY
Case Processing Summary
┌────────┬──┬───────┐
│Cases │ N│Percent│
├────────┼──┼───────┤
│Valid │17│ 100.0%│
│Excluded│ 0│ .0%│
│Total │17│ 100.0%│
└────────┴──┴───────┘
Reliability Statistics
┌────────────────┬──────────┐
│Cronbach's Alpha│N of Items│
├────────────────┼──────────┤
│ .11│ 5│
└────────────────┴──────────┘
ROC
ROC
VAR_LIST BY STATE_VAR (STATE_VALUE)
/PLOT = { CURVE [(REFERENCE)], NONE }
/PRINT = [ SE ] [ COORDINATES ]
/CRITERIA = [ CUTOFF({INCLUDE,EXCLUDE}) ]
[ TESTPOS ({LARGE,SMALL}) ]
[ CI (CONFIDENCE) ]
[ DISTRIBUTION ({FREE, NEGEXPO }) ]
/MISSING={EXCLUDE,INCLUDE}
The ROC command is used to plot the receiver operating
characteristic curve of a dataset, and to estimate the area under the
curve. This is useful for analysing the efficacy of a variable as a
predictor of a state of nature.
The mandatory VAR_LIST is the list of predictor variables. The
variable STATE_VAR is the variable whose values represent the actual
states, and STATE_VALUE is the value of this variable which represents
the positive state.
The optional subcommand PLOT is used to determine if and how the
ROC curve is drawn. The keyword CURVE means that the ROC curve
should be drawn, and the optional keyword REFERENCE, which should be
enclosed in parentheses, says that the diagonal reference line should be
drawn. If the keyword NONE is given, then no ROC curve is drawn.
By default, the curve is drawn with no reference line.
The optional subcommand PRINT determines which additional tables
should be printed. Two additional tables are available. The SE
keyword says that standard error of the area under the curve should be
printed as well as the area itself. In addition, a p-value for the null
hypothesis that the area under the curve equals 0.5 is printed. The
COORDINATES keyword says that a table of coordinates of the ROC
curve should be printed.
The CRITERIA subcommand has four optional parameters:
-
The TESTPOS parameter may be LARGE or SMALL. LARGE is the default, and says that larger values in the predictor variables are to be considered positive. SMALL indicates that smaller values should be considered positive. -
The CI parameter specifies the confidence interval that should be printed. It has no effect if the SE keyword in the PRINT subcommand has not been given. -
The DISTRIBUTION parameter determines the method to be used when estimating the area under the curve. There are two possibilities, viz: FREE and NEGEXPO. The FREE method uses a non-parametric estimate, and the NEGEXPO method a bi-negative exponential distribution estimate. The NEGEXPO method should only be used when the number of positive actual states is equal to the number of negative actual states. The default is FREE. -
The CUTOFF parameter is for compatibility and is ignored.
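For intuition about the FREE estimate: the non-parametric area under the curve equals the probability that the predictor ranks a randomly chosen positive case above a randomly chosen negative one, with ties counting one half. A minimal Python sketch of that estimator (assuming TESTPOS=LARGE; the function name is hypothetical and this is not PSPP's code):

```python
def auc_free(pos_scores, neg_scores):
    """Non-parametric AUC: fraction of correctly ordered pairs.

    Larger predictor values are taken to indicate the positive
    state (TESTPOS=LARGE); tied pairs contribute one half.
    """
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))
```

A value of 0.5 means the predictor performs no better than chance.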
The MISSING subcommand determines whether user missing values are to
be included or excluded in the analysis. The default behaviour is to
exclude them. Cases are excluded on a listwise basis: if any of the
variables in VAR_LIST or the variable STATE_VAR is missing for a
given case, then the entire case is excluded.
Matrices
Some PSPP procedures work with matrices by producing numeric matrices that report results of data analysis, or by consuming matrices as a basis for further analysis. This chapter documents the format of data files that store these matrices and commands for working with them, as well as PSPP's general-purpose facility for matrix operations.
Matrix Files
A matrix file is an SPSS system file that conforms to the dictionary and case structure described in this section. Procedures that read matrices from files expect them to be in the matrix file format. Procedures that write matrices also use this format.
Text files that contain matrices can be converted to matrix file format. The MATRIX DATA command can read a text file as a matrix file.
A matrix file's dictionary must have the following variables in the specified order:
-
Zero or more numeric split variables. These are included by procedures when SPLIT FILE is active. MATRIX DATA assigns split variables format F4.0. -
ROWTYPE_, a string variable with width 8. This variable indicates the kind of matrix or vector that a given case represents. The supported row types are listed below. -
Zero or more numeric factor variables. These are included by procedures that divide data into cells. For within-cell data, factor variables are filled with non-missing values; for pooled data, they are missing. MATRIX DATA assigns factor variables format F4.0. -
VARNAME_, a string variable. Matrix data includes one row per continuous variable (see below), naming each continuous variable in order. This column is blank for vector data. MATRIX DATA makes VARNAME_ wide enough for the name of any of the continuous variables, but at least 8 bytes. -
One or more numeric continuous variables. These are the variables whose data was analyzed to produce the matrices. MATRIX DATA assigns continuous variables format F10.4.
Case weights are ignored in matrix files.
Row Types
Matrix files support a fixed set of types of matrix and vector data.
The ROWTYPE_ variable in each case of a matrix file indicates its row
type.
The supported matrix row types are listed below. Each type is listed
with the keyword that identifies it in ROWTYPE_. All supported types
of matrices are square, meaning that each matrix must include one row
per continuous variable, with the VARNAME_ variable indicating each
continuous variable in turn in the same order as the dictionary.
-
CORR
Correlation coefficients. -
COV
Covariance coefficients. -
MAT
General-purpose matrix. -
N_MATRIX
Counts. -
PROX
Proximities matrix.
The supported vector row types are listed below, along with their
associated keyword. Vector row types only require a single row, whose
VARNAME_ is blank:
-
COUNT
Unweighted counts. -
DFE
Degrees of freedom. -
MEAN
Means. -
MSE
Mean squared errors. -
N
Counts. -
STDDEV
Standard deviations.
Only the row types listed above may appear in matrix files. The
MATRIX DATA command, however, accepts the additional row types
listed below, which it changes into matrix file row types as part of
its conversion process:
-
N_VECTOR
Synonym for N. -
SD
Synonym for STDDEV. -
N_SCALAR
Accepts a single number from the MATRIX DATA input and writes it as an N row with the number replicated across all the continuous variables.
MATRIX DATA
MATRIX DATA
VARIABLES=VARIABLES
[FILE={'FILE_NAME' | INLINE}]
[/FORMAT=[{LIST | FREE}]
[{UPPER | LOWER | FULL}]
[{DIAGONAL | NODIAGONAL}]]
[/SPLIT=SPLIT_VARS]
[/FACTORS=FACTOR_VARS]
[/N=N]
The following subcommands are only needed when ROWTYPE_ is not
specified on the VARIABLES subcommand:
[/CONTENTS={CORR,COUNT,COV,DFE,MAT,MEAN,MSE,
N_MATRIX,N|N_VECTOR,N_SCALAR,PROX,SD|STDDEV}]
[/CELLS=N_CELLS]
The MATRIX DATA command converts matrices and vectors from text
format into the matrix file format for use
by procedures that read matrices. It reads a text file or inline data
and outputs to the active file, replacing any data already in the
active dataset. The matrix file may then be used by other commands
directly from the active file, or it may be written to a .sav file
using the SAVE command.
The text data read by MATRIX DATA can be delimited by spaces or
commas. A plus or minus sign, except immediately following a d or
e, also begins a new value. Optionally, values may be enclosed in
single or double quotes.
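The sign rule can be illustrated with a small Python sketch; this is a rough model of the rule as stated, not PSPP's actual lexer, and quoted values are not handled here:

```python
def split_field(field):
    """Split one whitespace/comma-free field into values.

    A + or - begins a new value unless it immediately follows
    'd' or 'e' (so exponents like 1e-5 stay in one value).
    """
    values, current = [], ''
    for ch in field:
        if ch in '+-' and current and current[-1].lower() not in 'de':
            values.append(current)   # sign starts a new value
            current = ch
        else:
            current += ch
    if current:
        values.append(current)
    return values
```

For example, `1.5-2.3` splits into two values, while `1e-5` remains a single value.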
MATRIX DATA can read the types of matrix and vector data supported
in matrix files (see Row Types).
The FILE subcommand specifies the source of the command's input. To
read input from a text file, specify its name in quotes. To supply
input inline, omit FILE or specify INLINE. Inline data must
directly follow MATRIX DATA, inside BEGIN DATA.
VARIABLES is the only required subcommand. It names the variables
present in each input record in the order that they appear. (MATRIX DATA reorders the variables in the matrix file it produces, if needed
to fit the matrix file format.) The variable list must include split
variables and factor variables, if they are present in the data, in
addition to the continuous variables that form matrix rows and columns.
It may also include a special variable named ROWTYPE_.
Matrix data may include split variables or factor variables or both.
List split variables, if any, on the SPLIT subcommand and factor
variables, if any, on the FACTORS subcommand. Split and factor
variables must be numeric. Split and factor variables must also be
listed on VARIABLES, with one exception: if VARIABLES does not
include ROWTYPE_, then SPLIT may name a single variable that is not
in VARIABLES (see Example 8).
The FORMAT subcommand accepts settings to describe the format of
the input data:
-
LIST (default)
FREE
LIST requires each row to begin at the start of a new input line. FREE allows rows to begin in the middle of a line. Either setting allows a single row to continue across multiple input lines. -
LOWER (default)
UPPER
FULL
With LOWER, only the lower triangle is read from the input data and the upper triangle is mirrored across the main diagonal. UPPER behaves similarly for the upper triangle. FULL reads the entire matrix. -
DIAGONAL (default)
NODIAGONAL
With DIAGONAL, the main diagonal is read from the input data. With NODIAGONAL, which is incompatible with FULL, the main diagonal is not read from the input data but instead set to 1 for correlation matrices and system-missing for others.
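The LOWER behavior, mirroring the lower triangle across the main diagonal, can be sketched in Python as follows (a simplified illustration, not PSPP's code):

```python
def mirror_lower(triangle_rows):
    """Build a full symmetric matrix from lower-triangle rows.

    triangle_rows[i] holds row i's values up to and including
    the diagonal, e.g. [[1.0], [.5, 1.0]] for a 2x2 matrix.
    """
    n = len(triangle_rows)
    full = [[0.0] * n for _ in range(n)]
    for i, row in enumerate(triangle_rows):
        for j, value in enumerate(row):
            full[i][j] = value   # lower triangle as read
            full[j][i] = value   # mirrored across the diagonal
    return full
```

UPPER would apply the same idea to upper-triangle input.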
The N subcommand is a way to specify the size of the population.
It is equivalent to specifying an N vector with the specified value
for each split file.
MATRIX DATA supports two different ways to indicate the kinds of
matrices and vectors present in the data, depending on whether a
variable with the special name ROWTYPE_ is present in VARIABLES.
The following subsections explain MATRIX DATA syntax and behavior in
each case.
- With ROWTYPE_
- Without ROWTYPE_
With ROWTYPE_
If VARIABLES includes ROWTYPE_, each case's ROWTYPE_ indicates
the type of data contained in the row. See Row
Types for a list of supported row types.
Example 1: Defaults with ROWTYPE_
This example shows a simple use of MATRIX DATA with ROWTYPE_ plus 8
variables named var01 through var08.
Because ROWTYPE_ is the first variable in VARIABLES, it appears
first on each line. The first three lines in the example data have
ROWTYPE_ values of MEAN, SD, and N. These indicate that these
lines contain vectors of means, standard deviations, and counts,
respectively, for var01 through var08 in order.
The remaining 8 lines have a ROWTYPE_ of CORR which indicates that
the values are correlation coefficients. Each of the lines corresponds
to a row in the correlation matrix: the first line is for var01, the
next line for var02, and so on. The input only contains values for
the lower triangle, including the diagonal, since FORMAT=LOWER DIAGONAL is the default.
With ROWTYPE_, the CONTENTS subcommand is optional and the
CELLS subcommand may not be used.
MATRIX DATA
VARIABLES=ROWTYPE_ var01 TO var08.
BEGIN DATA.
MEAN 24.3 5.4 69.7 20.1 13.4 2.7 27.9 3.7
SD 5.7 1.5 23.5 5.8 2.8 4.5 5.4 1.5
N 92 92 92 92 92 92 92 92
CORR 1.00
CORR .18 1.00
CORR -.22 -.17 1.00
CORR .36 .31 -.14 1.00
CORR .27 .16 -.12 .22 1.00
CORR .33 .15 -.17 .24 .21 1.00
CORR .50 .29 -.20 .32 .12 .38 1.00
CORR .17 .29 -.05 .20 .27 .20 .04 1.00
END DATA.
Example 2: FORMAT=UPPER NODIAGONAL
This syntax produces the same matrix file as example 1, but it uses
FORMAT=UPPER NODIAGONAL to specify the upper triangle and omit the
diagonal. Because the matrix's ROWTYPE_ is CORR, PSPP automatically
fills in the diagonal with 1.
MATRIX DATA
VARIABLES=ROWTYPE_ var01 TO var08
/FORMAT=UPPER NODIAGONAL.
BEGIN DATA.
MEAN 24.3 5.4 69.7 20.1 13.4 2.7 27.9 3.7
SD 5.7 1.5 23.5 5.8 2.8 4.5 5.4 1.5
N 92 92 92 92 92 92 92 92
CORR .17 .50 -.33 .27 .36 -.22 .18
CORR .29 .29 -.20 .32 .12 .38
CORR .05 .20 -.15 .16 .21
CORR .20 .32 -.17 .12
CORR .27 .12 -.24
CORR -.20 -.38
CORR .04
END DATA.
Example 3: N subcommand
This syntax uses the N subcommand in place of an N vector. It
produces the same matrix file as examples 1 and 2.
MATRIX DATA
VARIABLES=ROWTYPE_ var01 TO var08
/FORMAT=UPPER NODIAGONAL
/N 92.
BEGIN DATA.
MEAN 24.3 5.4 69.7 20.1 13.4 2.7 27.9 3.7
SD 5.7 1.5 23.5 5.8 2.8 4.5 5.4 1.5
CORR .17 .50 -.33 .27 .36 -.22 .18
CORR .29 .29 -.20 .32 .12 .38
CORR .05 .20 -.15 .16 .21
CORR .20 .32 -.17 .12
CORR .27 .12 -.24
CORR -.20 -.38
CORR .04
END DATA.
Example 4: Split variables
This syntax defines two matrices, using the variable s1 to distinguish
between them. Notice how the order of variables in the input matches
their order on VARIABLES. This example also uses FORMAT=FULL.
MATRIX DATA
VARIABLES=s1 ROWTYPE_ var01 TO var04
/SPLIT=s1
/FORMAT=FULL.
BEGIN DATA.
0 MEAN 34 35 36 37
0 SD 22 11 55 66
0 N 99 98 99 92
0 CORR 1 .9 .8 .7
0 CORR .9 1 .6 .5
0 CORR .8 .6 1 .4
0 CORR .7 .5 .4 1
1 MEAN 44 45 34 39
1 SD 23 15 51 46
1 N 98 34 87 23
1 CORR 1 .2 .3 .4
1 CORR .2 1 .5 .6
1 CORR .3 .5 1 .7
1 CORR .4 .6 .7 1
END DATA.
Example 5: Factor variables
This syntax defines a matrix file that includes a factor variable f1.
The data includes mean, standard deviation, and count vectors for two
values of the factor variable, plus a correlation matrix for pooled
data.
MATRIX DATA
VARIABLES=ROWTYPE_ f1 var01 TO var04
/FACTOR=f1.
BEGIN DATA.
MEAN 0 34 35 36 37
SD 0 22 11 55 66
N 0 99 98 99 92
MEAN 1 44 45 34 39
SD 1 23 15 51 46
N 1 98 34 87 23
CORR . 1
CORR . .9 1
CORR . .8 .6 1
CORR . .7 .5 .4 1
END DATA.
Without ROWTYPE_
If VARIABLES does not contain ROWTYPE_, the CONTENTS subcommand
defines the row types that appear in the file and their order. If
CONTENTS is omitted, CONTENTS=CORR is assumed.
Factor variables without ROWTYPE_ introduce special requirements,
illustrated below in Examples 9 and 10.
Example 6: Defaults without ROWTYPE_
This example shows a simple use of MATRIX DATA with 8 variables named
var01 through var08, without ROWTYPE_. This yields the same
matrix file as Example 1.
MATRIX DATA
VARIABLES=var01 TO var08
/CONTENTS=MEAN SD N CORR.
BEGIN DATA.
24.3 5.4 69.7 20.1 13.4 2.7 27.9 3.7
5.7 1.5 23.5 5.8 2.8 4.5 5.4 1.5
92 92 92 92 92 92 92 92
1.00
.18 1.00
-.22 -.17 1.00
.36 .31 -.14 1.00
.27 .16 -.12 .22 1.00
.33 .15 -.17 .24 .21 1.00
.50 .29 -.20 .32 .12 .38 1.00
.17 .29 -.05 .20 .27 .20 .04 1.00
END DATA.
Example 7: Split variables with explicit values
This syntax defines two matrices, using the variable s1 to distinguish
between them. Each line of data begins with s1. This yields the same
matrix file as Example 4.
MATRIX DATA
VARIABLES=s1 var01 TO var04
/SPLIT=s1
/FORMAT=FULL
/CONTENTS=MEAN SD N CORR.
BEGIN DATA.
0 34 35 36 37
0 22 11 55 66
0 99 98 99 92
0 1 .9 .8 .7
0 .9 1 .6 .5
0 .8 .6 1 .4
0 .7 .5 .4 1
1 44 45 34 39
1 23 15 51 46
1 98 34 87 23
1 1 .2 .3 .4
1 .2 1 .5 .6
1 .3 .5 1 .7
1 .4 .6 .7 1
END DATA.
Example 8: Split variable with sequential values
Like the previous example, this syntax defines two matrices with split
variable s1. In this case, though, s1 is not listed in VARIABLES,
which means that its value does not appear in the data. Instead,
MATRIX DATA reads matrix data until the input is exhausted, supplying
1 for the first split, 2 for the second, and so on.
MATRIX DATA
VARIABLES=var01 TO var04
/SPLIT=s1
/FORMAT=FULL
/CONTENTS=MEAN SD N CORR.
BEGIN DATA.
34 35 36 37
22 11 55 66
99 98 99 92
1 .9 .8 .7
.9 1 .6 .5
.8 .6 1 .4
.7 .5 .4 1
44 45 34 39
23 15 51 46
98 34 87 23
1 .2 .3 .4
.2 1 .5 .6
.3 .5 1 .7
.4 .6 .7 1
END DATA.
Factor variables without ROWTYPE_
Without ROWTYPE_, factor variables introduce two new wrinkles to
MATRIX DATA syntax. First, the CELLS subcommand must declare the
number of combinations of factor variables present in the data. If
there is, for example, one factor variable for which the data contains
three values, one would write CELLS=3; if there are two (or more)
factor variables for which the data contains five combinations, one
would use CELLS=5; and so on.
Second, the CONTENTS subcommand must distinguish within-cell data
from pooled data by enclosing within-cell row types in parentheses.
When different within-cell row types for a single factor appear in
subsequent lines, enclose the row types in a single set of parentheses;
when different factors' values for a given within-cell row type appear
in subsequent lines, enclose each row type in individual parentheses.
Without ROWTYPE_, input lines for pooled data do not include factor
values, not even as missing values, but input lines for within-cell data
do.
The following examples aim to clarify this syntax.
Example 9: Factor variables, grouping within-cell records by factor
This syntax defines the same matrix file as Example
5, without using ROWTYPE_. It
declares CELLS=2 because the data contains two values (0 and 1) for
factor variable f1. Within-cell vector row types MEAN, SD, and
N are in a single set of parentheses on CONTENTS because they are
grouped together in subsequent lines for a single factor value. The
data lines with the pooled correlation matrix do not have any factor
values.
MATRIX DATA
VARIABLES=f1 var01 TO var04
/FACTOR=f1
/CELLS=2
/CONTENTS=(MEAN SD N) CORR.
BEGIN DATA.
0 34 35 36 37
0 22 11 55 66
0 99 98 99 92
1 44 45 34 39
1 23 15 51 46
1 98 34 87 23
1
.9 1
.8 .6 1
.7 .5 .4 1
END DATA.
Example 10: Factor variables, grouping within-cell records by row type
This syntax defines the same matrix file as the previous example. The only difference is that the within-cell vector rows are grouped differently: two rows of means (one for each factor), followed by two rows of standard deviations, followed by two rows of counts.
MATRIX DATA
VARIABLES=f1 var01 TO var04
/FACTOR=f1
/CELLS=2
/CONTENTS=(MEAN) (SD) (N) CORR.
BEGIN DATA.
0 34 35 36 37
1 44 45 34 39
0 22 11 55 66
1 23 15 51 46
0 99 98 99 92
1 98 34 87 23
1
.9 1
.8 .6 1
.7 .5 .4 1
END DATA.
MCONVERT
MCONVERT
[[MATRIX=]
[IN({'*'|'FILE'})]
[OUT({'*'|'FILE'})]]
[/{REPLACE,APPEND}].
The MCONVERT command converts matrix data from a correlation matrix
and a vector of standard deviations into a covariance matrix, or vice
versa.
By default, MCONVERT both reads and writes the active file. Use
the MATRIX subcommand to specify other files. To read a matrix file,
specify its name inside parentheses following IN. To write a matrix
file, specify its name inside parentheses following OUT. Use * to
explicitly specify the active file for input or output.
When MCONVERT reads the input, by default it substitutes a
correlation matrix and a vector of standard deviations each time it
encounters a covariance matrix, and vice versa. Specify /APPEND to
instead have MCONVERT add the other form of data without removing the
existing data. Use /REPLACE to explicitly request removing the
existing data.
The MCONVERT command requires its input to be a matrix file. Use
MATRIX DATA to convert text input into matrix file
format.
MATRIX…END MATRIX
- Summary
- Matrix Expressions
- Matrix Functions
- COMPUTE Command
- CALL Command
- PRINT Command
- DO IF Command
- LOOP and BREAK Commands
- READ and WRITE Commands
- GET Command
- SAVE Command
- MGET Command
- MSAVE Command
- DISPLAY Command
- RELEASE Command
Summary
MATRIX.
…matrix commands…
END MATRIX.
The following basic matrix commands are supported:
COMPUTE variable[(index[,index])]=expression.
CALL procedure(argument, …).
PRINT [expression]
[/FORMAT=format]
[/TITLE=title]
[/SPACE={NEWPAGE | n}]
[{/RLABELS=string… | /RNAMES=expression}]
[{/CLABELS=string… | /CNAMES=expression}].
The following matrix commands offer support for flow control:
DO IF expression.
…matrix commands…
[ELSE IF expression.
…matrix commands…]…
[ELSE
…matrix commands…]
END IF.
LOOP [var=first TO last [BY step]] [IF expression].
…matrix commands…
END LOOP [IF expression].
BREAK.
The following matrix commands support matrix input and output:
READ variable[(index[,index])]
[/FILE=file]
/FIELD=first TO last [BY width]
[/FORMAT=format]
[/SIZE=expression]
[/MODE={RECTANGULAR | SYMMETRIC}]
[/REREAD].
WRITE expression
[/OUTFILE=file]
/FIELD=first TO last [BY width]
[/MODE={RECTANGULAR | TRIANGULAR}]
[/HOLD]
[/FORMAT=format].
GET variable[(index[,index])]
[/FILE={file | *}]
[/VARIABLES=variable…]
[/NAMES=expression]
[/MISSING={ACCEPT | OMIT | number}]
[/SYSMIS={OMIT | number}].
SAVE expression
[/OUTFILE={file | *}]
[/VARIABLES=variable…]
[/NAMES=expression]
[/STRINGS=variable…].
MGET [/FILE=file]
[/TYPE={COV | CORR | MEAN | STDDEV | N | COUNT}].
MSAVE expression
/TYPE={COV | CORR | MEAN | STDDEV | N | COUNT}
[/OUTFILE=file]
[/VARIABLES=variable…]
[/SNAMES=variable…]
[/SPLIT=expression]
[/FNAMES=variable…]
[/FACTOR=expression].
The following matrix commands provide additional support:
DISPLAY [{DICTIONARY | STATUS}].
RELEASE variable….
MATRIX and END MATRIX enclose a special PSPP sub-language, called
the matrix language. The matrix language does not require an active
dataset to be defined and only a few of the matrix language commands
work with any datasets that are defined. Each instance of
MATRIX…END MATRIX is a separate program whose state is independent
of any other instance, so that variables declared within a matrix
program are forgotten at its end.
The matrix language works with matrices, where a "matrix" is a
rectangular array of real numbers. An N×M matrix has N rows and
M columns. Some special cases are important: an N×1 matrix is a
"column vector", a 1×N matrix is a "row vector", and a 1×1 matrix
is a "scalar".
The matrix language also has limited support for matrices that
contain 8-byte strings instead of numbers. Strings longer than 8 bytes
are truncated, and shorter strings are padded with spaces. String
matrices are mainly useful for labeling rows and columns when printing
numerical matrices with the PRINT command. Arithmetic
operations on string matrices will not produce useful results. The user
should not mix strings and numbers within a matrix.
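The 8-byte rule for string elements amounts to the following Python sketch (the helper name is hypothetical):

```python
def to_matrix_string(s):
    """Fit a string into a matrix element: strings longer than
    8 bytes are truncated, shorter ones padded with spaces."""
    return s[:8].ljust(8)
```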
The matrix language does not work with cases. A variable in the matrix language represents a single matrix.
The matrix language does not support missing values.
MATRIX is a procedure, so it cannot be enclosed inside DO IF,
LOOP, etc.
Macros defined before a matrix program may be used within a matrix
program, and macros may expand to include entire matrix programs. The
DEFINE command to define new
macros may not appear within a matrix program.
The following sections describe the details of the matrix language:
first, the syntax of matrix expressions, then each of the supported
commands. The COMMENT command is also supported.
Matrix Expressions
Many matrix commands use expressions. A matrix expression may use the following operators, listed in descending order of operator precedence. Within a single level, operators associate from left to right.
-
Matrix multiplication * and elementwise multiplication &*; elementwise division / and &/.
The operators are described in more detail below. Matrix Functions documents matrix functions.
Expressions appear in the matrix language in some contexts where there
would be ambiguity whether / is an operator or a separator between
subcommands. In these contexts, only the operators with higher
precedence than / are allowed outside parentheses. Later sections
call these "restricted expressions".
Matrix Construction Operator {}
Use the {} operator to construct matrices. Within the curly braces,
commas separate elements within a row and semicolons separate rows. The
following examples show a 2×3 matrix, a 1×4 row vector, a 3×1 column
vector, and a scalar.
{1, 2, 3; 4, 5, 6} ⇒ [1 2 3]
[4 5 6]
{3.14, 6.28, 9.42, 12.57} ⇒ [3.14 6.28 9.42 12.57]
{1.41; 1.73; 2} ⇒ [1.41]
[1.73]
[2.00]
{5} ⇒ 5
Curly braces are not limited to holding numeric literals. They can
contain calculations, and they can paste together matrices and vectors
in any way as long as the result is rectangular. For example, if m is
matrix {1, 2; 3, 4}, r is row vector {5, 6}, and c is column
vector {7, 8}, then curly braces can be used as follows:
{m, c; r, 10} ⇒ [1 2 7]
[3 4 8]
[5 6 10]
{c, 2 * c, T(r)} ⇒ [7 14 5]
[8 16 6]
The final example above uses the transposition function T.
Integer Sequence Operator :
The syntax FIRST:LAST:STEP yields a row vector of consecutive integers
from FIRST to LAST counting by STEP. The final :STEP is optional and
defaults to 1 when omitted.
FIRST, LAST, and STEP must each be a scalar and should be an
integer (any fractional part is discarded). Because : has a high
precedence, operands other than numeric literals must usually be
parenthesized.
When STEP is positive (or omitted) and LAST < FIRST, or when STEP
is negative and LAST > FIRST, the result is an empty matrix. If
STEP is 0, then PSPP reports an error.
Here are some examples:
1:6 ⇒ {1, 2, 3, 4, 5, 6}
1:6:2 ⇒ {1, 3, 5}
-1:-5:-1 ⇒ {-1, -2, -3, -4, -5}
-1:-5 ⇒ {}
2:1:0 ⇒ (error)
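The behavior of FIRST:LAST:STEP, including the empty and error cases above, can be modeled with a short Python sketch (hypothetical function name, not PSPP's code):

```python
def int_sequence(first, last, step=1):
    """Model of FIRST:LAST:STEP.

    Fractional parts are discarded, a zero STEP is an error, and
    a STEP that moves away from LAST yields an empty result.
    """
    first, last, step = int(first), int(last), int(step)
    if step == 0:
        raise ValueError('STEP must not be 0')
    result = []
    value = first
    while (step > 0 and value <= last) or (step < 0 and value >= last):
        result.append(value)
        value += step
    return result
```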
Index Operator ()
The result of the submatrix or indexing operator, written M(RINDEX, CINDEX), contains the rows of M whose indexes are given in vector
RINDEX and the columns whose indexes are given in vector CINDEX.
In the simplest case, if RINDEX and CINDEX are both scalars, the
result is also a scalar:
{10, 20; 30, 40}(1, 1) ⇒ 10
{10, 20; 30, 40}(1, 2) ⇒ 20
{10, 20; 30, 40}(2, 1) ⇒ 30
{10, 20; 30, 40}(2, 2) ⇒ 40
If the index arguments have multiple elements, then the result includes multiple rows or columns:
{10, 20; 30, 40}(1:2, 1) ⇒ {10; 30}
{10, 20; 30, 40}(2, 1:2) ⇒ {30, 40}
{10, 20; 30, 40}(1:2, 1:2) ⇒ {10, 20; 30, 40}
The special argument : may stand in for all the rows or columns in
the matrix being indexed, like this:
{10, 20; 30, 40}(:, 1) ⇒ {10; 30}
{10, 20; 30, 40}(2, :) ⇒ {30, 40}
{10, 20; 30, 40}(:, :) ⇒ {10, 20; 30, 40}
The index arguments do not have to be in order, and they may contain repeated values, like this:
{10, 20; 30, 40}({2, 1}, 1) ⇒ {30; 10}
{10, 20; 30, 40}(2, {2; 2; ⇒ {40, 40, 30}
1})
{10, 20; 30, 40}(2:1:-1, :) ⇒ {30, 40; 10, 20}
When the matrix being indexed is a row or column vector, only a single index argument is needed, like this:
{11, 12, 13, 14, 15}(2:4) ⇒ {12, 13, 14}
{11; 12; 13; 14; 15}(2:4) ⇒ {12; 13; 14}
When an index is not an integer, PSPP discards the fractional part. It is an error for an index to be less than 1 or greater than the number of rows or columns:
{11, 12, 13, 14}({2.5, ⇒ {12, 14}
4.6})
{11; 12; 13; 14}(0) ⇒ (error)
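The indexing rules above, 1-based index vectors with fractional parts discarded and strict bounds checking, can be modeled in Python (a simplified sketch handling only the two-index form, with a hypothetical function name):

```python
def index_matrix(m, rindex, cindex):
    """Model of M(RINDEX, CINDEX) on a list-of-lists matrix."""
    def check(i, limit):
        i = int(i)               # discard any fractional part
        if not 1 <= i <= limit:  # 1-based bounds check
            raise IndexError('index out of range')
        return i
    return [[m[check(r, len(m)) - 1][check(c, len(m[0])) - 1]
             for c in cindex]
            for r in rindex]
```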
Unary Operators
The unary operators take a single operand of any dimensions and operate on each of its elements independently. The unary operators are:
- -: Inverts the sign of each element.
- +: No change.
- NOT: Logical inversion: each positive value becomes 0 and each zero or negative value becomes 1.
Examples:
-{1, -2; 3, -4} ⇒ {-1, 2; -3, 4}
+{1, -2; 3, -4} ⇒ {1, -2; 3, -4}
NOT {1, 0; -1, 1} ⇒ {0, 1; 1, 0}
Elementwise Binary Operators
The elementwise binary operators require their operands to be matrices with the same dimensions. Alternatively, if one operand is a scalar, then its value is treated as if it were duplicated to the dimensions of the other operand. The result is a matrix of the same size as the operands, in which each element is the result of applying the operator to the corresponding elements of the operands.
The elementwise binary operators are listed below.
-
The arithmetic operators, for familiar arithmetic operations:
-
+: Addition. -
-: Subtraction. -
*: Multiplication, if one operand is a scalar. (Otherwise this is matrix multiplication, described below.) -
/ or &/: Division. -
&*: Multiplication. -
&**: Exponentiation.
-
-
The relational operators, whose results are 1 when a comparison is true and 0 when it is false:
-
< or LT: Less than. -
<= or LE: Less than or equal. -
= or EQ: Equal. -
> or GT: Greater than. -
>= or GE: Greater than or equal. -
<> or ~= or NE: Not equal.
-
-
The logical operators, which treat positive operands as true and nonpositive operands as false. They yield 0 for false and 1 for true:
-
AND: True if both operands are true. -
OR: True if at least one operand is true. -
XOR: True if exactly one operand is true.
-
Examples:
1 + 2 ⇒ 3
1 + {3; 4} ⇒ {4; 5}
{66, 77; 88, 99} + 5 ⇒ {71, 82; 93, 104}
{4, 8; 3, 7} + {1, 0; 5, 2} ⇒ {5, 8; 8, 9}
{1, 2; 3, 4} < {4, 3; 2, 1} ⇒ {1, 1; 0, 0}
{1, 3; 2, 4} >= 3 ⇒ {0, 1; 0, 1}
{0, 0; 1, 1} AND {0, 1; 0, ⇒ {0, 0; 0, 1}
1}
Matrix Multiplication Operator *
If A is an M×N matrix and B is an N×P matrix, then A*B is the
M×P matrix multiplication product C. PSPP reports an error if the
number of columns in A differs from the number of rows in B.
The * operator performs elementwise multiplication (see above) if
one of its operands is a scalar.
No built-in operator yields the inverse of matrix multiplication.
Instead, multiply by the result of INV or GINV.
Some examples:
{1, 2, 3} * {4; 5; 6} ⇒ 32
{4; 5; 6} * {1, 2, 3} ⇒ {4, 8, 12;
5, 10, 15;
6, 12, 18}
Matrix Exponentiation Operator **
The result of A**B is defined as follows when A is a square matrix
and B is an integer scalar:
-
For B > 0, A**B is A*…*A, where there are B As. (PSPP implements this efficiently for large B, using exponentiation by squaring.) -
For B < 0, A**B is INV(A**(-B)). -
For B = 0, A**B is the identity matrix.
PSPP reports an error if A is not square or B is not an integer.
Examples:
{2, 5; 1, 4}**3 ⇒ {48, 165; 33, 114}
{2, 5; 1, 4}**0 ⇒ {1, 0; 0, 1}
10*{4, 7; 2, 6}**-1 ⇒ {6, -7; -2, 4}
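The exponentiation-by-squaring strategy mentioned above can be sketched in Python (hypothetical helper names; the negative-exponent case is omitted because it requires a matrix inverse):

```python
def mat_mul(a, b):
    """Ordinary matrix product of two lists of lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))]
            for i in range(len(a))]

def mat_pow(a, b):
    """A**B for square A and integer B >= 0, by repeated squaring.

    Squaring A and halving B at each step needs only O(log B)
    products instead of B - 1.
    """
    n = len(a)
    result = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
    while b > 0:
        if b % 2:                     # low bit set: fold in current power
            result = mat_mul(result, a)
        a = mat_mul(a, a)             # square for the next bit
        b //= 2
    return result
```

This reproduces the examples above: the cube of {2, 5; 1, 4} and the identity for exponent 0.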
Matrix Functions
The matrix language supports numerous functions in multiple categories. The following subsections document each of the currently supported functions. The first letter of each parameter's name indicates the required argument type:
-
S: A scalar. -
N: A nonnegative integer scalar. (Non-integers are accepted and silently rounded down to the nearest integer.) -
V: A row or column vector. -
M: A matrix.
Elementwise Functions
These functions act on each element of their argument independently, like the elementwise operators.
-
ABS(M)
Takes the absolute value of each element of M. ABS({-1, 2; -3, 0}) ⇒ {1, 2; 3, 0} -
ARSIN(M)
ARTAN(M)
Computes the inverse sine or tangent, respectively, of each element in M. The results are in radians, between \(-\pi/2\) and \(+\pi/2\), inclusive. The value of \(\pi\) can be computed as 4*ARTAN(1). ARSIN({-1, 0, 1}) ⇒ {-1.57, 0, 1.57} (approximately) ARTAN({-5, -1, 1, 5}) ⇒ {-1.37, -.79, .79, 1.37} (approximately) -
COS(M)
SIN(M)
Computes the cosine or sine, respectively, of each element in M, which must be in radians. COS({0.785, 1.57; 3.14, 1.57 + 3.14}) ⇒ {.71, 0; -1, 0} (approximately) -
EXP(M)
Computes \(e^x\) for each element \(x\) in M. EXP({2, 3; 4, 5}) ⇒ {7.39, 20.09; 54.6, 148.4} (approximately) -
LG10(M)
LN(M)
Takes the logarithm with base 10 or base \(e\), respectively, of each element in M. LG10({1, 10, 100, 1000}) ⇒ {0, 1, 2, 3} LG10(0) ⇒ (error) LN({EXP(1), 1, 2, 3, 4}) ⇒ {1, 0, .69, 1.1, 1.39} (approximately) LN(0) ⇒ (error) -
MOD(M, S)
Takes each element in M modulo nonzero scalar value S, that is, the remainder of division by S. The sign of the result is the same as the sign of the dividend. MOD({5, 4, 3, 2, 1, 0}, 3) ⇒ {2, 1, 0, 2, 1, 0} MOD({5, 4, 3, 2, 1, 0}, -3) ⇒ {2, 1, 0, 2, 1, 0} MOD({-5, -4, -3, -2, -1, 0}, 3) ⇒ {-2, -1, 0, -2, -1, 0} MOD({-5, -4, -3, -2, -1, 0}, -3) ⇒ {-2, -1, 0, -2, -1, 0} MOD({5, 4, 3, 2, 1, 0}, 1.5) ⇒ {.5, 1.0, .0, .5, 1.0, .0} MOD({5, 4, 3, 2, 1, 0}, 0) ⇒ (error) -
RND(M)
TRUNC(M)
Rounds each element ofMto an integer.RNDrounds to the nearest integer, with halves rounded to even integers, andTRUNCrounds toward zero.RND({-1.6, -1.5, -1.4}) ⇒ {-2, -2, -1} RND({-.6, -.5, -.4}) ⇒ {-1, 0, 0} RND({.4, .5, .6} ⇒ {0, 0, 1} RND({1.4, 1.5, 1.6}) ⇒ {1, 2, 2} TRUNC({-1.6, -1.5, -1.4}) ⇒ {-1, -1, -1} TRUNC({-.6, -.5, -.4}) ⇒ {0, 0, 0} TRUNC({.4, .5, .6} ⇒ {0, 0, 0} TRUNC({1.4, 1.5, 1.6}) ⇒ {1, 1, 1} -
SQRT(M)
Takes the square root of each element ofM, which must not be negative.SQRT({0, 1, 2, 4, 9, 81}) ⇒ {0, 1, 1.41, 2, 3, 9} (approximately) SQRT(-1) ⇒ (error)
Logical Functions
- ALL(M)
  Returns a scalar with value 1 if all of the elements in M are nonzero, or 0 if at least one element is zero.

      ALL({1, 2, 3} < {2, 3, 4}) ⇒ 1
      ALL({2, 2, 3} < {2, 3, 4}) ⇒ 0
      ALL({2, 3, 3} < {2, 3, 4}) ⇒ 0
      ALL({2, 3, 4} < {2, 3, 4}) ⇒ 0

- ANY(M)
  Returns a scalar with value 1 if any of the elements in M is nonzero, or 0 if all of them are zero.

      ANY({1, 2, 3} < {2, 3, 4}) ⇒ 1
      ANY({2, 2, 3} < {2, 3, 4}) ⇒ 1
      ANY({2, 3, 3} < {2, 3, 4}) ⇒ 1
      ANY({2, 3, 4} < {2, 3, 4}) ⇒ 0
Matrix Construction Functions
- BLOCK(M1, …, MN)
  Returns a block diagonal matrix with as many rows as the sum of its arguments' row counts and as many columns as the sum of their column counts. Each argument matrix is placed along the main diagonal of the result, and all other elements are zero.

      BLOCK({1, 2; 3, 4}, 5, {7; 8; 9}, {10, 11}) ⇒
      1  2  0  0  0  0
      3  4  0  0  0  0
      0  0  5  0  0  0
      0  0  0  7  0  0
      0  0  0  8  0  0
      0  0  0  9  0  0
      0  0  0  0 10 11

- IDENT(N)
  IDENT(NR, NC)
  Returns an identity matrix, whose main diagonal elements are one and whose other elements are zero. The returned matrix has N rows and columns or NR rows and NC columns, respectively.

      IDENT(1) ⇒ 1
      IDENT(2) ⇒
      1 0
      0 1
      IDENT(3, 5) ⇒
      1 0 0 0 0
      0 1 0 0 0
      0 0 1 0 0
      IDENT(5, 3) ⇒
      1 0 0
      0 1 0
      0 0 1
      0 0 0
      0 0 0

- MAGIC(N)
  Returns an N×N matrix that contains each of the integers 1…N² once, in which each column, each row, and each diagonal sums to \(n(n^2+1)/2\). There are many magic squares with given dimensions, but this function always returns the same one for a given value of N.

      MAGIC(3) ⇒ {8, 1, 6; 3, 5, 7; 4, 9, 2}
      MAGIC(4) ⇒ {1, 5, 12, 16; 15, 11, 6, 2; 14, 8, 9, 3; 4, 10, 7, 13}

- MAKE(NR, NC, S)
  Returns an NR×NC matrix whose elements are all S.

      MAKE(1, 2, 3) ⇒ {3, 3}
      MAKE(2, 1, 4) ⇒ {4; 4}
      MAKE(2, 3, 5) ⇒ {5, 5, 5; 5, 5, 5}

- MDIAG(V)
  Given N-element vector V, returns an N×N matrix whose main diagonal is copied from V. The other elements in the returned matrix are zero. Use CALL SETDIAG to replace the main diagonal of a matrix in-place.

      MDIAG({1, 2, 3, 4}) ⇒
      1 0 0 0
      0 2 0 0
      0 0 3 0
      0 0 0 4

- RESHAPE(M, NR, NC)
  Returns an NR×NC matrix whose elements come from M, which must have the same number of elements as the new matrix, copying elements from M to the new matrix row by row.

      RESHAPE(1:12, 1, 12) ⇒
      1  2  3  4  5  6  7  8  9 10 11 12
      RESHAPE(1:12, 2, 6) ⇒
      1  2  3  4  5  6
      7  8  9 10 11 12
      RESHAPE(1:12, 3, 4) ⇒
      1  2  3  4
      5  6  7  8
      9 10 11 12
      RESHAPE(1:12, 4, 3) ⇒
       1  2  3
       4  5  6
       7  8  9
      10 11 12

- T(M)
  TRANSPOS(M)
  Returns M with rows exchanged for columns.

      T({1, 2, 3}) ⇒ {1; 2; 3}
      T({1; 2; 3}) ⇒ {1, 2, 3}

- UNIFORM(NR, NC)
  Returns an NR×NC matrix in which each element is randomly chosen from a uniform distribution of real numbers between 0 and 1. Random number generation honors the current seed setting. The following example shows one possible output, but of course every result will be different (given different seeds):

      UNIFORM(4, 5)*10 ⇒
      7.71 2.99  .21 4.95 6.34
      4.43 7.49 8.32 4.99 5.83
      2.25  .25 1.98 7.09 7.61
      2.66 1.69 2.64  .88 1.50
Minimum, Maximum, and Sum Functions
- CMIN(M)
  CMAX(M)
  CSUM(M)
  CSSQ(M)
  Returns a row vector with the same number of columns as M, in which each element is the minimum, maximum, sum, or sum of squares, respectively, of the elements in the same column of M.

      CMIN({1, 2, 3; 4, 5, 6; 7, 8, 9}) ⇒ {1, 2, 3}
      CMAX({1, 2, 3; 4, 5, 6; 7, 8, 9}) ⇒ {7, 8, 9}
      CSUM({1, 2, 3; 4, 5, 6; 7, 8, 9}) ⇒ {12, 15, 18}
      CSSQ({1, 2, 3; 4, 5, 6; 7, 8, 9}) ⇒ {66, 93, 126}

- MMIN(M)
  MMAX(M)
  MSUM(M)
  MSSQ(M)
  Returns the minimum, maximum, sum, or sum of squares, respectively, of the elements of M.

      MMIN({1, 2, 3; 4, 5, 6; 7, 8, 9}) ⇒ 1
      MMAX({1, 2, 3; 4, 5, 6; 7, 8, 9}) ⇒ 9
      MSUM({1, 2, 3; 4, 5, 6; 7, 8, 9}) ⇒ 45
      MSSQ({1, 2, 3; 4, 5, 6; 7, 8, 9}) ⇒ 285

- RMIN(M)
  RMAX(M)
  RSUM(M)
  RSSQ(M)
  Returns a column vector with the same number of rows as M, in which each element is the minimum, maximum, sum, or sum of squares, respectively, of the elements in the same row of M.

      RMIN({1, 2, 3; 4, 5, 6; 7, 8, 9}) ⇒ {1; 4; 7}
      RMAX({1, 2, 3; 4, 5, 6; 7, 8, 9}) ⇒ {3; 6; 9}
      RSUM({1, 2, 3; 4, 5, 6; 7, 8, 9}) ⇒ {6; 15; 24}
      RSSQ({1, 2, 3; 4, 5, 6; 7, 8, 9}) ⇒ {14; 77; 194}

- SSCP(M)
  Returns \({\bf M}^{\bf T} × \bf M\).

      SSCP({1, 2, 3; 4, 5, 6}) ⇒ {17, 22, 27; 22, 29, 36; 27, 36, 45}

- TRACE(M)
  Returns the sum of the elements along M's main diagonal, equivalent to MSUM(DIAG(M)).

      TRACE(MDIAG(1:5)) ⇒ 15
Matrix Property Functions
- NROW(M)
  NCOL(M)
  Returns the number of rows or columns, respectively, in M.

      NROW({1, 0; -2, -3; 3, 3}) ⇒ 3
      NROW(1:5) ⇒ 1
      NCOL({1, 0; -2, -3; 3, 3}) ⇒ 2
      NCOL(1:5) ⇒ 5

- DIAG(M)
  Returns a column vector containing a copy of M's main diagonal. The vector's length is the lesser of NCOL(M) and NROW(M).

      DIAG({1, 0; -2, -3; 3, 3}) ⇒ {1; -3}
Matrix Rank Ordering Functions
The GRADE and RNKORDER functions each take a matrix M and return a
matrix R with the same dimensions. Each element in R ranges
between 1 and the number of elements N in M, inclusive. When the
elements in M all have unique values, both of these functions yield
the same results: the smallest element in M corresponds to value 1
in R, the next smallest to 2, and so on, up to the largest, which
corresponds to N. When multiple elements in M have the same value,
these functions use different rules for handling the ties.
- GRADE(M)
  Returns a ranking of M, turning duplicate values into sequential ranks. The returned matrix always contains each of the integers 1 through the number of elements in the matrix exactly once.

      GRADE({1, 0, 3; 3, 1, 2; 3, 0, 5}) ⇒ {3, 1, 6; 7, 4, 5; 8, 2, 9}

- RNKORDER(M)
  Returns a ranking of M, turning duplicate values into the mean of their sequential ranks.

      RNKORDER({1, 0, 3; 3, 1, 2; 3, 0, 5}) ⇒ {3.5, 1.5, 7; 7, 3.5, 5; 7, 1.5, 9}
One may use GRADE to sort a vector:
COMPUTE v(GRADE(v))=v. /* Sort v in ascending order.
COMPUTE v(GRADE(-v))=v. /* Sort v in descending order.
Matrix Algebra Functions
- CHOL(M)
  Matrix M must be an N×N symmetric positive-definite matrix. Returns an N×N matrix B such that \({\bf B}^{\bf T}×{\bf B}=\bf M\).

      CHOL({4, 12, -16; 12, 37, -43; -16, -43, 98}) ⇒
      2 6 -8
      0 1  5
      0 0  3

- DESIGN(M)
  Returns a design matrix for M. The design matrix has the same number of rows as M. Each column C in M, from left to right, yields a group of columns in the output. For each unique value V in C, from top to bottom, the output gains a column in which V becomes 1 and other values become 0. PSPP issues a warning if a column contains only a single unique value.

      DESIGN({1; 2; 3}) ⇒ {1, 0, 0; 0, 1, 0; 0, 0, 1}
      DESIGN({5; 8; 5}) ⇒ {1, 0; 0, 1; 1, 0}
      DESIGN({1, 5; 2, 8; 3, 5}) ⇒ {1, 0, 0, 1, 0; 0, 1, 0, 0, 1; 0, 0, 1, 1, 0}
      DESIGN({5; 5; 5}) ⇒ (warning)
- DET(M)
  Returns the determinant of square matrix M.

      DET({3, 7; 1, -4}) ⇒ -19

- EVAL(M)
  Returns a column vector containing the eigenvalues of symmetric matrix M, sorted in descending order. Use CALL EIGEN to compute eigenvalues and eigenvectors of a matrix.

      EVAL({2, 0, 0; 0, 3, 4; 0, 4, 9}) ⇒ {11; 2; 1}

- GINV(M)
  Returns the K×N matrix A that is the "generalized inverse" of N×K matrix M, defined such that \({\bf M}×{\bf A}×{\bf M}={\bf M}\) and \({\bf A}×{\bf M}×{\bf A}={\bf A}\).

      GINV({1, 2}) ⇒ {.2; .4} (approximately)
      {1:9} * GINV(1:9) * {1:9} ⇒ {1:9} (approximately)
- GSCH(M)
  M must be an N×M matrix, M ≥ N, with rank N. Returns an N×N orthonormal basis for M, obtained using the Gram-Schmidt process.

      GSCH({3, 2; 1, 2}) * SQRT(10) ⇒ {3, -1; 1, 3} (approximately)

- INV(M)
  Returns the N×N matrix A that is the inverse of N×N matrix M, defined such that \({\bf M}×{\bf A} = {\bf A}×{\bf M} = {\bf I}\), where I is the identity matrix. M must not be singular, that is, \(\det({\bf M}) ≠ 0\).

      INV({4, 7; 2, 6}) ⇒ {.6, -.7; -.2, .4} (approximately)

- KRONEKER(MA, MB)
  Returns the PM×QN matrix P that is the Kroneker product of M×N matrix MA and P×Q matrix MB. One may view P as the concatenation of multiple P×Q blocks, each of which is the scalar product of MB by a different element of MA. For example, when A is a 2×2 matrix, KRONEKER(A, B) is equivalent to {A(1,1)*B, A(1,2)*B; A(2,1)*B, A(2,2)*B}.

      KRONEKER({1, 2; 3, 4}, {0, 5; 6, 7}) ⇒
       0  5  0 10
       6  7 12 14
       0 15  0 20
      18 21 24 28
- RANK(M)
  Returns the rank of matrix M, an integer scalar whose value is the dimension of the vector space spanned by its columns or, equivalently, by its rows.

      RANK({1, 0, 1; -2, -3, 1; 3, 3, 0}) ⇒ 2
      RANK({1, 1, 0, 2; -1, -1, 0, -2}) ⇒ 1
      RANK({1, -1; 1, -1; 0, 0; 2, -2}) ⇒ 1
      RANK({1, 2, 1; -2, -3, 1; 3, 5, 0}) ⇒ 2
      RANK({1, 0, 2; 2, 1, 0; 3, 2, 1}) ⇒ 3

- SOLVE(MA, MB)
  MA must be an N×N matrix, with \(\det({\bf MA}) ≠ 0\), and MB an N×Q matrix. Returns an N×Q matrix X such that \({\bf MA} × {\bf X} = {\bf MB}\). All of the following examples show approximate results:

      SOLVE({2, 3; 4, 9}, {6, 2; 15, 5}) ⇒
      1.50  .50
      1.00  .33
      SOLVE({1, 3, -2; 3, 5, 6; 2, 4, 3}, {5; 7; 8}) ⇒
      -15.00
        8.00
        2.00
      SOLVE({2, 1, -1; -3, -1, 2; -2, 1, 2}, {8; -11; -3}) ⇒
       2.00
       3.00
      -1.00

- SVAL(M)
  Given P×Q matrix M, returns a \(\min(P,Q)\)-element column vector containing the singular values of M in descending order. Use CALL SVD to compute the full singular value decomposition of a matrix.

      SVAL({1, 1; 0, 0}) ⇒ {1.41; .00}
      SVAL({1, 0, 1; 0, 1, 1; 0, 0, 0}) ⇒ {1.73; 1.00; .00}
      SVAL({2, 4; 1, 3; 0, 0; 0, 0}) ⇒ {5.46; .37}
- SWEEP(M, NK)
  Given R×C matrix M and integer scalar \(k\) = NK such that \(1 ≤ k ≤ \min(R,C)\), returns the R×C sweep matrix A.
  If \({\bf M}_{kk} ≠ 0\), then:

  $$ \begin{align} A_{kk} &= 1/M_{kk},\\ A_{ik} &= -M_{ik}/M_{kk} \text{ for } i ≠ k,\\ A_{kj} &= M_{kj}/M_{kk} \text{ for } j ≠ k,\\ A_{ij} &= M_{ij} - M_{ik}M_{kj}/M_{kk} \text{ for } i ≠ k \text{ and } j ≠ k. \end{align} $$

  If \({\bf M}_{kk} = 0\), then:

  $$ \begin{align} A_{ik} &= A_{ki} = 0, \\ A_{ij} &= M_{ij}, \text{ for } i ≠ k \text{ and } j ≠ k. \end{align} $$

  Given M = {0, 1, 2; 3, 4, 5; 6, 7, 8}, then (approximately):

      SWEEP(M, 1) ⇒
       .00  .00  .00
       .00 4.00 5.00
       .00 7.00 8.00
      SWEEP(M, 2) ⇒
      -.75  -.25   .75
       .75   .25  1.25
       .75 -1.75  -.75
      SWEEP(M, 3) ⇒
      -1.50 -.75 -.25
       -.75 -.38 -.63
        .75  .88  .13
Matrix Statistical Distribution Functions
The matrix language can calculate several functions of standard statistical distributions using the same syntax and semantics as in PSPP transformation expressions. See Statistical Distribution Functions for details.
The matrix language extends the PDF, CDF, SIG, IDF, NPDF,
and NCDF functions by allowing the first parameters to each of these
functions to be a vector or matrix with any dimensions. In addition,
CDF.BVNOR and PDF.BVNOR allow either or both of their first two
parameters to be vectors or matrices; if both are non-scalar then they
must have the same dimensions. In each case, the result is a matrix
or vector with the same dimensions as the input populated with
elementwise calculations.
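For example, the following sketch (assuming the CDF.NORMAL function with value, mean, and standard deviation arguments, as in PSPP transformation expressions) applies the standard normal CDF to each element of a row vector:

```
COMPUTE p = CDF.NORMAL({-1.96, 0, 1.96}, 0, 1).
PRINT p.
```

Here p is a 1×3 vector whose elements are the CDF values of the corresponding input elements.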
EOF Function
This function works with files being used on the READ statement.
- EOF(FILE)
  Given a file handle or file name FILE, returns an integer scalar 1 if the last line in the file has been read or 0 if more lines are available. Determining this requires attempting to read another line, which means that REREAD on the next READ command following EOF on the same file will be ineffective.
The EOF function gives a matrix program the flexibility to read a
file with text data without knowing the length of the file in advance.
For example, the following program will read all the lines of data in
data.txt, each consisting of three numbers, as rows in matrix data:
MATRIX.
COMPUTE data={}.
LOOP IF NOT EOF('data.txt').
READ row/FILE='data.txt'/FIELD=1 TO 1000/SIZE={1,3}.
COMPUTE data={data; row}.
END LOOP.
PRINT data.
END MATRIX.
COMPUTE Command
COMPUTE variable[(index[,index])]=expression.
The COMPUTE command evaluates an expression and assigns the
result to a variable or a submatrix of a variable. Assigning to a
submatrix uses the same syntax as the index
operator.
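For example, this sketch first assigns a whole matrix to x and then uses an index expression to replace only its second row:

```
COMPUTE x = {1, 2; 3, 4}.
COMPUTE x(2, :) = {30, 40}.
```

After the second command, x is {1, 2; 30, 40}.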
CALL Command
A matrix function returns a single result. The CALL command
implements procedures, which take a similar syntactic form to functions
but yield results by modifying their arguments rather than returning a
value.
Each output argument to a CALL procedure must be a single variable
name.
The following procedures are implemented via CALL to allow them to
return multiple results. For these procedures, the output arguments
need not name existing variables; if they do, then their previous
values are replaced:
- CALL EIGEN(M, EVEC, EVAL)
  Computes the eigenvalues and eigenvectors of symmetric N×N matrix M. Assigns the eigenvectors of M to the columns of N×N matrix EVEC and the eigenvalues in descending order to N-element column vector EVAL.
  Use the EVAL function to compute just the eigenvalues of a symmetric matrix.
  For example, the following matrix language commands:

      CALL EIGEN({1, 0; 0, 1}, evec, eval).
      PRINT evec.
      PRINT eval.
      CALL EIGEN({3, 2, 4; 2, 0, 2; 4, 2, 3}, evec2, eval2).
      PRINT evec2.
      PRINT eval2.

  yield this output:

      evec
        1  0
        0  1
      eval
        1
        1
      evec2
        -.6666666667  .0000000000  .7453559925
        -.3333333333 -.8944271910 -.2981423970
        -.6666666667  .4472135955 -.5962847940
      eval2
         8.0000000000
        -1.0000000000
        -1.0000000000
- CALL SVD(M, U, S, V)
  Computes the singular value decomposition of P×Q matrix M, assigning to S a P×Q diagonal matrix and to U and V unitary P×Q matrices such that \({\bf M} = {\bf U}×{\bf S}×{\bf V}^{\bf T}\). The main diagonal of S contains the singular values of M.
  Use the SVAL function to compute just the singular values of a matrix.
  For example, the following matrix program:

      CALL SVD({3, 2, 2; 2, 3, -2}, u, s, v).
      PRINT (u * s * T(v))/FORMAT F5.1.

  yields this output:

      (u * s * T(v))
        3.0  2.0  2.0
        2.0  3.0 -2.0
The final procedure is implemented via CALL to allow it to modify a
matrix instead of returning a modified version. For this procedure,
the output argument must name an existing variable.
- CALL SETDIAG(M, V)
  Replaces the main diagonal of N×P matrix M by the contents of K-element vector V. If K = 1, so that V is a scalar, replaces all of the diagonal elements of M by V. If K < \(\min(N,P)\), only the upper K diagonal elements are replaced; if K > \(\min(N,P)\), then the extra elements of V are ignored.
  Use the MDIAG function to construct a new matrix with a specified main diagonal.
  For example, this matrix program:

      COMPUTE x={1, 2, 3; 4, 5, 6; 7, 8, 9}.
      CALL SETDIAG(x, 10).
      PRINT x.

  outputs the following:

      x
      10  2  3
       4 10  6
       7  8 10
PRINT Command
PRINT [expression]
[/FORMAT=format]
[/TITLE=title]
[/SPACE={NEWPAGE | n}]
[{/RLABELS=string… | /RNAMES=expression}]
[{/CLABELS=string… | /CNAMES=expression}].
The PRINT command is commonly used to display a matrix. It
evaluates the restricted EXPRESSION, if present, and outputs it either
as text or a pivot table, depending on the setting of
MDISPLAY.
Use the FORMAT subcommand to specify a format, such as F8.2, for
displaying the matrix elements. FORMAT is optional for numerical
matrices. When it is omitted, PSPP chooses how to format entries
automatically using \(m\), the magnitude of the largest-magnitude element in
the matrix to be displayed:
- If \(m < 10^{11}\) and the matrix's elements are all integers, PSPP chooses the narrowest F format that fits \(m\) plus a sign. For example, if the matrix is {1:10}, then \(m = 10\), which fits in 3 columns with room for a sign, so the format is F3.0.
- Otherwise, if \(m ≥ 10^9\) or \(m ≤ 10^{-4}\), PSPP scales all of the numbers in the matrix by \(10^x\), where \(x\) is the exponent that would be used to display \(m\) in scientific notation. For example, for \(m = 5.123×10^{20}\), the scale factor is \(10^{20}\). PSPP displays the scaled values in format F13.10 and notes the scale factor in the output.
- Otherwise, PSPP displays the matrix values, without scaling, in format F13.10.
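For example, the following sketch overrides the automatic choice, requesting two decimal places in 8-column fields:

```
PRINT {1.234, 5.678; 9.1011, 12.1314}/FORMAT=F8.2.
```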
The optional TITLE subcommand specifies a title for the output text
or table, as a quoted string. When it is omitted, the syntax of the
matrix expression is used as the title.
Use the SPACE subcommand to request extra space above the matrix
output. With a numerical argument, it adds the specified number of
lines of blank space above the matrix. With NEWPAGE as an argument,
it prints the matrix at the top of a new page. The SPACE subcommand
has no effect when a matrix is output as a pivot table.
The RLABELS and RNAMES subcommands, which are mutually exclusive,
can supply a label to accompany each row in the output. With RLABELS,
specify the labels as comma-separated strings or other tokens. With
RNAMES, specify a single expression that evaluates to a vector of
strings. Either way, if there are more labels than rows, the extra
labels are ignored, and if there are more rows than labels, the extra
rows are unlabeled. For output to a pivot table with RLABELS, the
labels can be any length; otherwise, the labels are truncated to 8
bytes.
The CLABELS and CNAMES subcommands work for labeling columns as
RLABELS and RNAMES do for labeling rows.
When the EXPRESSION is omitted, PRINT does not output a matrix.
Instead, it outputs only the text specified on TITLE, if any, preceded
by any space specified on the SPACE subcommand, if any. Any other
subcommands are ignored, and the command acts as if MDISPLAY is set to
TEXT regardless of its actual setting.
Example
The following syntax demonstrates two different ways to label the
rows and columns of a matrix with PRINT:
MATRIX.
COMPUTE m={1, 2, 3; 4, 5, 6; 7, 8, 9}.
PRINT m/RLABELS=a, b, c/CLABELS=x, y, z.
COMPUTE rlabels={"a", "b", "c"}.
COMPUTE clabels={"x", "y", "z"}.
PRINT m/RNAMES=rlabels/CNAMES=clabels.
END MATRIX.
With MDISPLAY=TEXT (the default), this program outputs the following
(twice):
m
x y z
a 1 2 3
b 4 5 6
c 7 8 9
With SET MDISPLAY=TABLES. added above MATRIX., the output becomes
the following (twice):
m
┌─┬─┬─┬─┐
│ │x│y│z│
├─┼─┼─┼─┤
│a│1│2│3│
│b│4│5│6│
│c│7│8│9│
└─┴─┴─┴─┘
DO IF Command
DO IF expression.
…matrix commands…
[ELSE IF expression.
…matrix commands…]…
[ELSE
…matrix commands…]
END IF.
A DO IF command evaluates its expression argument. If the DO IF
expression evaluates to true, then PSPP executes the associated
commands. Otherwise, PSPP evaluates the expression on each ELSE IF
clause (if any) in order, and executes the commands associated with the
first one that yields a true value. Finally, if the DO IF and all the
ELSE IF expressions all evaluate to false, PSPP executes the commands
following the ELSE clause (if any).
Each expression on DO IF and ELSE IF must evaluate to a scalar.
Positive scalars are considered to be true, and scalars that are zero or
negative are considered to be false.
Example
The following matrix language fragment sets b to the term
following a in the Juggler
sequence:
DO IF MOD(a, 2) = 0.
COMPUTE b = TRUNC(a &** (1/2)).
ELSE.
COMPUTE b = TRUNC(a &** (3/2)).
END IF.
LOOP and BREAK Commands
LOOP [var=first TO last [BY step]] [IF expression].
…matrix commands…
END LOOP [IF expression].
BREAK.
The LOOP command executes a nested group of matrix commands,
called the loop's "body", repeatedly. It has three optional clauses
that control how many times the loop body executes. Regardless of
these clauses, the global MXLOOPS setting, which defaults to 40,
also limits the number of iterations of a loop. To iterate more
times, raise the maximum with SET MXLOOPS outside
of the MATRIX command.
The optional index clause causes VAR to be assigned successive
values on each trip through the loop: first FIRST, then FIRST + STEP, then FIRST + 2 × STEP, and so on. The loop ends when VAR > LAST, for positive STEP, or VAR < LAST, for negative STEP. If
STEP is not specified, it defaults to 1. All the index clause
expressions must evaluate to scalars, and non-integers are rounded
toward zero. If STEP evaluates as zero (or rounds to zero), then
the loop body never executes.
The optional IF on LOOP is evaluated before each iteration
through the loop body. If its expression, which must evaluate to a
scalar, is zero or negative, then the loop terminates without executing
the loop body.
The optional IF on END LOOP is evaluated after each iteration
through the loop body. If its expression, which must evaluate to a
scalar, is zero or negative, then the loop terminates.
Example
The following computes and prints \(l(n)\), whose value is the number of steps in the Juggler sequence for \(n\), for \( 2 \le n \le 10\):
COMPUTE l = {}.
LOOP n = 2 TO 10.
COMPUTE a = n.
LOOP i = 1 TO 100.
DO IF MOD(a, 2) = 0.
COMPUTE a = TRUNC(a &** (1/2)).
ELSE.
COMPUTE a = TRUNC(a &** (3/2)).
END IF.
END LOOP IF a = 1.
COMPUTE l = {l; i}.
END LOOP.
PRINT l.
BREAK Command
The BREAK command may be used inside a loop body, ordinarily within a
DO IF command. If it is executed, then the loop terminates
immediately, jumping to the command just following END LOOP. When
multiple LOOP commands nest, BREAK terminates the innermost loop.
Example
The following example is a revision of the one above that shows how
BREAK could substitute for the index and IF clauses on LOOP and
END LOOP:
COMPUTE l = {}.
LOOP n = 2 TO 10.
COMPUTE a = n.
COMPUTE i = 1.
LOOP.
DO IF MOD(a, 2) = 0.
COMPUTE a = TRUNC(a &** (1/2)).
ELSE.
COMPUTE a = TRUNC(a &** (3/2)).
END IF.
DO IF a = 1.
BREAK.
END IF.
COMPUTE i = i + 1.
END LOOP.
COMPUTE l = {l; i}.
END LOOP.
PRINT l.
READ and WRITE Commands
The READ and WRITE commands perform matrix input and output with
text files. They share the following syntax for specifying how data is
divided among input lines:
/FIELD=first TO last [BY width]
[/FORMAT=format]
Both commands require the FIELD subcommand. It specifies the range
of columns, from FIRST to LAST, inclusive, that the data occupies on
each line of the file. The leftmost column is column 1. The columns
must be literal numbers, not expressions. To use entire lines, even if
they might be very long, specify a column range such as 1 TO 100000.
The FORMAT subcommand is optional for numerical matrices. For
string matrix input and output, specify an A format. In addition to
FORMAT, the optional BY specification on FIELD determines the
meaning of each text line:
- With neither BY nor FORMAT, the numbers in the text file are in F format separated by spaces or commas. For WRITE, PSPP uses as many digits of precision as needed to accurately represent the numbers in the matrix.
- BY width divides the input area into fixed-width fields with the given width. The input area must be a multiple of width columns wide. Numbers are read or written in Fwidth.0 format.
- FORMAT="countF" divides the input area into integer count equal-width fields per line. The input area must be a multiple of count columns wide. Another format type may be substituted for F.
- FORMAT=Fw[.d] divides the input area into fixed-width fields with width w. The input area must be a multiple of w columns wide. Another format type may be substituted for F. The READ command disregards d.
- FORMAT=F specifies format F without indicating a field width. Another format type may be substituted for F. The WRITE command accepts this form, but it has no effect unless BY is also used to specify a field width.
If BY and FORMAT both specify or imply a field width, then they
must indicate the same field width.
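As an illustration of these rules, the following two sketched WRITE commands request the same layout, since BY 5 implies F5.0 fields (the file name is hypothetical):

```
WRITE {1, 2; 3, 4} /OUTFILE='out.txt' /FIELD=1 TO 20 BY 5.
WRITE {1, 2; 3, 4} /OUTFILE='out.txt' /FIELD=1 TO 20 /FORMAT=F5.0.
```

Combining, say, BY 5 with FORMAT=F4.0 would be rejected because the two field widths disagree.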
READ Command
READ variable[(index[,index])]
[/FILE=file]
/FIELD=first TO last [BY width]
[/FORMAT=format]
[/SIZE=expression]
[/MODE={RECTANGULAR | SYMMETRIC}]
[/REREAD].
The READ command reads from a text file into a matrix variable.
Specify the target variable just after the command name, either just a
variable name to create or replace an entire variable, or a variable
name followed by an indexing expression to replace a submatrix of an
existing variable.
The FILE subcommand is required in the first READ command that
appears within MATRIX. It specifies the text file to be read,
either as a file name in quotes or a file handle previously declared
on FILE HANDLE. Later READ commands (in syntax
order) use the previously referenced file if FILE is omitted.
The FIELD and FORMAT subcommands specify how input lines are
interpreted. FIELD is required, but FORMAT is optional. See
READ and WRITE Commands, for details.
The SIZE subcommand is required for reading into an entire
variable. Its restricted expression argument should evaluate to a
2-element vector {N, M} or {N; M}, which indicates an N×M
matrix destination. A scalar N is also allowed and indicates an
N×1 column vector destination. When the destination is a submatrix,
SIZE is optional, and if it is present then it must match the size
of the submatrix.
By default, or with MODE=RECTANGULAR, the command reads an entry
for every row and column. With MODE=SYMMETRIC, the command reads only
the entries on and below the matrix's main diagonal, and copies the
entries above the main diagonal from the corresponding symmetric entries
below it. Only square matrices may use MODE=SYMMETRIC.
Ordinarily, each READ command starts from a new line in the text
file. Specify the REREAD subcommand to instead start from the last
line read by the previous READ command. This has no effect for the
first READ command to read from a particular file. It is also
ineffective just after a command that uses the EOF matrix
function on a particular file, because EOF has to
try to read the next line from the file to determine whether the file
contains more input.
Example 1: Basic Use
The following matrix program reads the same matrix {1, 2, 4; 2, 3, 5; 4, 5, 6} into matrix variables v, w, and x:
READ v /FILE='input.txt' /FIELD=1 TO 100 /SIZE={3, 3}.
READ w /FIELD=1 TO 100 /SIZE={3; 3} /MODE=SYMMETRIC.
READ x /FIELD=1 TO 100 BY 1/SIZE={3, 3} /MODE=SYMMETRIC.
given that input.txt contains the following:
1, 2, 4
2, 3, 5
4, 5, 6
1
2 3
4 5 6
1
23
456
The READ command will read as many lines of input as needed for a
particular row, so it's also acceptable to break any of the lines above
into multiple lines. For example, the first line 1, 2, 4 could be
written with a line break following either or both commas.
Example 2: Reading into a Submatrix
The following reads a 5×5 matrix from input2.txt, reversing the order
of the rows:
COMPUTE m = MAKE(5, 5, 0).
LOOP r = 5 TO 1 BY -1.
READ m(r, :) /FILE='input2.txt' /FIELD=1 TO 100.
END LOOP.
Example 3: Using REREAD
Suppose each of the 5 lines in a file input3.txt starts with an
integer COUNT followed by COUNT numbers, e.g.:
1 5
3 1 2 3
5 6 -1 2 5 1
2 8 9
3 1 3 2
Then, the following reads this file into a matrix m:
COMPUTE m = MAKE(5, 5, 0).
LOOP i = 1 TO 5.
READ count /FILE='input3.txt' /FIELD=1 TO 1 /SIZE=1.
READ m(i, 1:count) /FIELD=3 TO 100 /REREAD.
END LOOP.
WRITE Command
WRITE expression
[/OUTFILE=file]
/FIELD=first TO last [BY width]
[/FORMAT=format]
[/MODE={RECTANGULAR | TRIANGULAR}]
[/HOLD].
The WRITE command evaluates expression and writes its value to a
text file in a specified format. Write the expression to evaluate just
after the command name.
The OUTFILE subcommand is required in the first WRITE command that
appears within MATRIX. It specifies the text file to be written,
either as a file name in quotes or a file handle previously declared
on FILE HANDLE. Later WRITE commands (in syntax
order) use the previously referenced file if OUTFILE is omitted.
The FIELD and FORMAT subcommands specify how output lines are
formed. FIELD is required, but FORMAT is optional. See READ
and WRITE Commands, for details.
By default, or with MODE=RECTANGULAR, the command writes an entry
for every row and column. With MODE=TRIANGULAR, the command writes
only the entries on and below the matrix's main diagonal. Entries above
the diagonal are not written. Only square matrices may be written with
MODE=TRIANGULAR.
Ordinarily, each WRITE command writes complete lines to the output
file. With HOLD, the final line written by WRITE will be held back
for the next WRITE command to augment. This can be useful to write
more than one matrix on a single output line.
Example 1: Basic Usage
This matrix program:
WRITE {1, 2; 3, 4} /OUTFILE='matrix.txt' /FIELD=1 TO 80.
writes the following to matrix.txt:
1 2
3 4
Example 2: Triangular Matrix
This matrix program:
WRITE MAGIC(5) /OUTFILE='matrix.txt' /FIELD=1 TO 80 BY 5 /MODE=TRIANGULAR.
writes the following to matrix.txt:
17
23 5
4 6 13
10 12 19 21
11 18 25 2 9
GET Command
GET variable[(index[,index])]
[/FILE={file | *}]
[/VARIABLES=variable…]
[/NAMES=variable]
[/MISSING={ACCEPT | OMIT | number}]
[/SYSMIS={OMIT | number}].
The GET command reads numeric data from an SPSS system file,
SPSS/PC+ system file, or SPSS portable file into a matrix variable or
submatrix:
- To read data into a variable, specify just its name following GET. The variable need not already exist; if it does, it is replaced. The variable will have as many columns as there are variables specified on the VARIABLES subcommand and as many rows as there are cases in the input file.
- To read data into a submatrix, specify the name of an existing variable, followed by an indexing expression, just after GET. The submatrix must have as many columns as variables specified on VARIABLES and as many rows as cases in the input file.
Specify the name or handle of the file to be read on FILE. Use
*, or simply omit the FILE subcommand, to read from the active file.
Reading from the active file is only permitted if it was already defined
outside MATRIX.
List the variables to be read as columns in the matrix on the
VARIABLES subcommand. The list can use TO for collections of
variables or ALL for all variables. If VARIABLES is omitted, all
variables are read. Only numeric variables may be read.
If a variable is named on NAMES, then the names of the variables
read as data columns are stored in a string vector within the given
name, replacing any existing matrix variable with that name. Variable
names are truncated to 8 bytes.
The MISSING and SYSMIS subcommands control the treatment of
missing values in the input file. By default, any user- or
system-missing data in the variables being read from the input causes an
error that prevents GET from executing. To accept missing values,
specify one of the following settings on MISSING:
- ACCEPT: Accept user-missing values with no change. By default, system-missing values still yield an error. Use the SYSMIS subcommand to change this treatment:
  - OMIT: Skip any case that contains a system-missing value.
  - number: Recode the system-missing value to number.
- OMIT: Skip any case that contains any user- or system-missing value.
- number: Recode all user- and system-missing values to number.
The SYSMIS subcommand has an effect only with MISSING=ACCEPT.
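For example, the following sketch (the file and variable names are hypothetical) reads three numeric variables from a system file into matrix m, skipping any case that contains a missing value:

```
GET m
  /FILE='data.sav'
  /VARIABLES=x y z
  /MISSING=OMIT.
```

m then has 3 columns and one row per case without missing values.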
SAVE Command
SAVE expression
[/OUTFILE={file | *}]
[/VARIABLES=variable…]
[/NAMES=expression]
[/STRINGS=variable…].
The SAVE matrix command evaluates expression and writes the
resulting matrix to an SPSS system file. In the system file, each
matrix row becomes a case and each column becomes a variable.
Specify the name or handle of the SPSS system file on the OUTFILE
subcommand, or * to write the output as the new active file. The
OUTFILE subcommand is required on the first SAVE command, in syntax
order, within MATRIX. For SAVE commands after the first, the
default output file is the same as the previous.
When multiple SAVE commands write to one destination within a
single MATRIX, the later commands append to the same output file. All
the matrices written to the file must have the same number of columns.
The VARIABLES, NAMES, and STRINGS subcommands are honored only for
the first SAVE command that writes to a given file.
By default, SAVE names the variables in the output file COL1
through COLn. Use VARIABLES or NAMES to give the variables
meaningful names. The VARIABLES subcommand accepts a comma-separated
list of variable names. Its alternative, NAMES, instead accepts an
expression that must evaluate to a row or column string vector of names.
The number of names need not exactly match the number of columns in the
matrix to be written: extra names are ignored; extra columns use default
names.
By default, SAVE assumes that the matrix to be written is all
numeric. To write string columns, specify a comma-separated list of the
string columns' variable names on STRINGS.
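As a brief illustration, the following sketch (the matrix values and
variable names here are illustrative, not taken from this manual)
writes a 2×2 matrix to the active file as two cases with two named
variables:

```
MATRIX.
COMPUTE m = {1, 2; 3, 4}.
SAVE m /OUTFILE=* /VARIABLES=x, y.
END MATRIX.
```

Each matrix row becomes one case, so the active file ends up with two
cases and variables `x` and `y`.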
MGET Command
MGET [/FILE=file]
[/TYPE={COV | CORR | MEAN | STDDEV | N | COUNT}].
The MGET command reads the data from a matrix file into matrix variables.
All of MGET's subcommands are optional. Specify the name or handle
of the matrix file to be read on the FILE subcommand; if it is
omitted, then the command reads the active file.
By default, MGET reads all of the data from the matrix file.
Specify a space-delimited list of matrix types on TYPE to limit the
kinds of data read to those specified:

- `COV`: Covariance matrix.
- `CORR`: Correlation coefficient matrix.
- `MEAN`: Vector of means.
- `STDDEV`: Vector of standard deviations.
- `N`: Vector of case counts.
- `COUNT`: Vector of counts.
MGET reads the entire matrix file and automatically names, creates,
and populates matrix variables using its contents. It constructs the
name of each variable by concatenating the following:
- A 2-character prefix that identifies the type of the matrix:

  - `CV`: Covariance matrix.
  - `CR`: Correlation coefficient matrix.
  - `MN`: Vector of means.
  - `SD`: Vector of standard deviations.
  - `NC`: Vector of case counts.
  - `CN`: Vector of counts.

- If the matrix file has factor variables, `Fn`, where `n` is a number
  identifying a group of factors: `F1` for the first group, `F2` for
  the second, and so on. This part is omitted for pooled data (where
  the factors all have the system-missing value).

- If the matrix file has split file variables, `Sn`, where `n` is a
  number identifying a split group: `S1` for the first group, `S2` for
  the second, and so on.
If MGET chooses the name of an existing variable, it issues a
warning and does not change the variable.
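For example, suppose a hypothetical matrix file `covmat.sav` contains
a pooled covariance matrix and a vector of means, with no factor or
split variables. A sketch of reading just those matrices:

```
MATRIX.
MGET /FILE='covmat.sav' /TYPE=COV MEAN.
PRINT CV.
PRINT MN.
END MATRIX.
```

Because the data are pooled and unsplit, MGET names the new matrix
variables `CV` and `MN` using only the prefixes described above.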
MSAVE Command
MSAVE expression
/TYPE={COV | CORR | MEAN | STDDEV | N | COUNT}
[/FACTOR=expression]
[/SPLIT=expression]
[/OUTFILE=file]
[/VARIABLES=variable…]
[/SNAMES=variable…]
[/FNAMES=variable…].
The MSAVE command evaluates the expression specified just after the
command name, and writes the resulting matrix to a matrix file.
The TYPE subcommand is required. It specifies the ROWTYPE_ to
write along with this matrix.
The FACTOR and SPLIT subcommands are required on the first
MSAVE if and only if the matrix file has factor or split variables,
respectively. After that, their values are carried along from one
MSAVE command to the next in syntax order as defaults. Each one takes
an expression that must evaluate to a vector with the same number of
entries as the matrix has factor or split variables, respectively. Each
MSAVE only writes data for a single combination of factor and split
variables, so many MSAVE commands (or one inside a loop) may be needed
to write a complete set.
The remaining MSAVE subcommands define the format of the matrix
file. All of the MSAVE commands within a given matrix program write
to the same matrix file, so these subcommands are only meaningful on the
first MSAVE command within a matrix program. (If they are given again
on later MSAVE commands, then they must have the same values as on the
first.)
The OUTFILE subcommand specifies the name or handle of the matrix
file to be written. Output must go to an external file, not a data set
or the active file.
The VARIABLES subcommand specifies a comma-separated list of the
names of the continuous variables to be written to the matrix file. The
TO keyword can be used to define variables named with consecutive
integer suffixes. These names become column names and names that appear
in VARNAME_ in the matrix file. ROWTYPE_ and VARNAME_ are not
allowed on VARIABLES. If VARIABLES is omitted, then PSPP uses the
names COL1, COL2, and so on.
The FNAMES subcommand may be used to supply a comma-separated list
of factor variable names. The default names are FAC1, FAC2, and so
on.
The SNAMES subcommand can supply a comma-separated list of split
variable names. The default names are SPL1, SPL2, and so on.
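A hedged sketch (the file and variable names are illustrative): the
following writes a correlation matrix and a vector of means for two
continuous variables to a single matrix file:

```
MATRIX.
MSAVE {1, .5; .5, 1} /TYPE=CORR /OUTFILE='corr.sav' /VARIABLES=v1, v2.
MSAVE {10, 12} /TYPE=MEAN.
END MATRIX.
```

The second MSAVE inherits `corr.sav` as its output file, so both
matrices end up in the same file.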
DISPLAY Command
DISPLAY [{DICTIONARY | STATUS}].
The DISPLAY command makes PSPP display a table with the name and
dimensions of each matrix variable. The DICTIONARY and STATUS
keywords are accepted but have no effect.
RELEASE Command
RELEASE variable….
The RELEASE command accepts a comma-separated list of matrix
variable names. It deletes each variable and releases the memory
associated with it.
The END MATRIX command releases all matrix variables.
Utility Commands
This chapter describes commands that don't fit in other categories.
Most of these commands are not affected by commands like IF and
LOOP: they take effect only once, unconditionally, at the time that
they are encountered in the input.
ADD DOCUMENT
ADD DOCUMENT
'line one' 'line two' ... 'last line' .
ADD DOCUMENT adds one or more lines of descriptive commentary to
the active dataset. Documents added in this way are saved to system
files. They can be viewed using SYSFILE INFO or DISPLAY DOCUMENTS.
They can be removed from the active dataset with DROP DOCUMENTS.
Each line of documentary text must be enclosed in quotation marks, and
may not be more than 80 bytes long. See also
DOCUMENT.
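For instance (the documentary text is of course illustrative):

```
ADD DOCUMENT
   'This dataset was collected in 2015.'
   'Each case is one survey respondent.'.
```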
CACHE
CACHE.
This command is accepted, for compatibility, but it has no effect.
CD
CD 'new directory' .
CD changes the current directory. The new directory becomes that
specified by the command.
COMMENT
Comment commands:
COMMENT comment text ... .
*comment text ... .
Comments within a line of syntax:
FREQUENCIES /VARIABLES=v0 v1 v2. /* All our categorical variables.
COMMENT is ignored. It is used to provide information to the
author and other readers of the PSPP syntax file.
COMMENT can extend over any number of lines. It ends at a dot at
the end of a line or a blank line. The comment may contain any
characters.
PSPP also supports comments within a line of syntax, introduced with
/*. These comments end at the first */ or at the end of the line,
whichever comes first. A line that contains just this kind of comment
is considered blank and ends the current command.
DOCUMENT
DOCUMENT DOCUMENTARY_TEXT.
DOCUMENT adds one or more lines of descriptive commentary to the
active dataset. Documents added in this way are saved to system
files. They can be viewed using SYSFILE INFO or DISPLAY DOCUMENTS. They can be removed from the
active dataset with DROP DOCUMENTS.
Specify the text of the document following the DOCUMENT keyword. It
is interpreted literally—any quotes or other punctuation marks are
included in the file. You can extend the documentary text over as
many lines as necessary, including blank lines to separate paragraphs.
Lines are truncated at 80 bytes. Don't forget to terminate the
command with a dot at the end of a line. See also ADD
DOCUMENT.
DISPLAY DOCUMENTS
DISPLAY DOCUMENTS.
DISPLAY DOCUMENTS displays the documents in the active dataset.
Each document is preceded by a line giving the time and date that it
was added. See also DOCUMENT.
DISPLAY FILE LABEL
DISPLAY FILE LABEL.
DISPLAY FILE LABEL displays the file label contained in the active
dataset, if any. See also FILE LABEL.
This command is a PSPP extension.
DROP DOCUMENTS
DROP DOCUMENTS.
DROP DOCUMENTS removes all documents from the active dataset. New
documents can be added with DOCUMENT.
DROP DOCUMENTS changes only the active dataset. It does not modify
any system files stored on disk.
ECHO
ECHO 'arbitrary text' .
Use ECHO to write arbitrary text to the output stream. The text
should be enclosed in quotation marks following the normal rules for
string tokens.
ERASE
ERASE FILE "FILE_NAME".
ERASE FILE deletes a file from the local file system. The file's
name must be quoted. This command cannot be used if the
SAFER setting is active.
EXECUTE
EXECUTE.
EXECUTE causes the active dataset to be read and all pending
transformations to be executed.
FILE LABEL
FILE LABEL file label.
FILE LABEL provides a title for the active dataset. This title is
saved into system files and portable files that are created during
this PSPP run.
The file label should not be quoted. If quotes are included, they become part of the file label.
FINISH
FINISH.
FINISH terminates the current PSPP session and returns control to
the operating system.
HOST
In the syntax below, the square brackets must be included in the command syntax; they do not indicate that their contents are optional.
HOST COMMAND=['COMMAND'...]
TIMELIMIT=SECS.
HOST executes one or more commands, each provided as a string in
the required COMMAND subcommand, in the shell of the underlying
operating system. PSPP runs each command in a separate shell process
and waits for it to finish before running the next one. If a command
fails (with a nonzero exit status, or because it is killed by a signal),
then PSPP does not run any remaining commands.
PSPP provides /dev/null as the shell's standard input. If a
process needs to read from stdin, redirect from a file or device, or use
a pipe.
PSPP displays the shell's standard output and standard error as PSPP
output. Redirect to a file or /dev/null or another device if this is
not desired.
By default, PSPP waits as long as necessary for the series of
commands to complete. Use the optional TIMELIMIT subcommand to limit
the execution time to the specified number of seconds.
PSPP built for mingw does not support all the features of HOST.
PSPP rejects this command if the SAFER setting is
active.
Example
The following example runs rsync to copy a file from a remote
server to the local file data.txt, writing rsync's own output to
rsync-log.txt. PSPP displays the command's error output, if any. If
rsync needs to prompt the user (e.g. to obtain a password), the
command fails. Only if rsync succeeds does PSPP run the sha512sum
command.
HOST COMMAND=['rsync remote:data.txt data.txt > rsync-log.txt'
'sha512sum -c data.txt.sha512sum'].
INCLUDE
INCLUDE [FILE=]'FILE_NAME' [ENCODING='ENCODING'].
INCLUDE causes the PSPP command processor to read an additional
command file as if it were included bodily in the current command file.
If errors are encountered in the included file, then command processing
stops and no more commands are processed. Include files may be nested
to any depth, up to the limit of available memory.
The INSERT command is a more flexible alternative to
INCLUDE. An INCLUDE command acts the same as INSERT with
ERROR=STOP CD=NO SYNTAX=BATCH specified.
The optional ENCODING subcommand has the same meaning as with
INSERT.
INSERT
INSERT [FILE=]'FILE_NAME'
[CD={NO,YES}]
[ERROR={CONTINUE,STOP}]
[SYNTAX={BATCH,INTERACTIVE}]
[ENCODING={LOCALE, 'CHARSET_NAME'}].
INSERT is similar to INCLUDE but more flexible. It
causes the command processor to read a file as if it were embedded in
the current command file.
If CD=YES is specified, then before including the file, the current
directory becomes the directory of the included file. The default
setting is CD=NO. This directory remains current until it is
changed explicitly (with the CD command, or a subsequent INSERT
command with the CD=YES option). It does not revert to its original
setting even after the included file is finished processing.
If ERROR=STOP is specified, errors encountered in the inserted file
cause processing to cease immediately. Otherwise processing continues
at the next command. The default setting is ERROR=CONTINUE.
If SYNTAX=INTERACTIVE is specified then the syntax contained in the
included file must conform to [interactive syntax
conventions](../language/basics/syntax-variants.md). The default
setting is SYNTAX=BATCH.
ENCODING optionally specifies the character set used by the
included file. Its argument, which is not case-sensitive, must be in
one of the following forms:
- `LOCALE`
  The encoding used by the system locale, or as overridden by `SET
  LOCALE`. On GNU/Linux and other Unix-like systems, environment
  variables, e.g. `LANG` or `LC_ALL`, determine the system locale.

- `'CHARSET_NAME'`
  An IANA character set name. Some examples are `ASCII` (United
  States), `ISO-8859-1` (western Europe), `EUC-JP` (Japan), and
  `windows-1252` (Windows). Not all systems support all character
  sets.

- `Auto,ENCODING`
  Automatically detects whether a syntax file is encoded in a Unicode
  encoding such as UTF-8, UTF-16, or UTF-32. If it is not, then PSPP
  generally assumes that the file is encoded in `ENCODING` (an IANA
  character set name). However, if `ENCODING` is UTF-8, and the
  syntax file is not valid UTF-8, PSPP instead assumes that the file
  is encoded in `windows-1252`.

  For best results, `ENCODING` should be an ASCII-compatible encoding
  (the most common locale encodings are all ASCII-compatible), because
  encodings that are not ASCII compatible cannot be automatically
  distinguished from UTF-8.

- `Auto` or `Auto,Locale`
  Automatic detection, as above, with the default encoding taken from
  the system locale or the setting on `SET LOCALE`.
When ENCODING is not specified, the default is taken from the
--syntax-encoding command option, if it was specified, and otherwise
it is Auto.
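Putting the subcommands together, a sketch that reads a hypothetical
syntax file `definitions.sps`, makes its directory current while it
runs, stops on any error, and auto-detects a UTF-8 or windows-1252
encoding:

```
INSERT FILE='definitions.sps'
   CD=YES
   ERROR=STOP
   SYNTAX=INTERACTIVE
   ENCODING='Auto,UTF-8'.
```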
OUTPUT
In the syntax below, the characters `[` and `]` are literals; they
must appear in the command syntax as written:
OUTPUT MODIFY
/SELECT TABLES
/TABLECELLS SELECT = [ CLASS... ]
FORMAT = FMT_SPEC.
OUTPUT changes the appearance of the tables in which results are
printed. In particular, it can be used to set the format and precision
to which results are displayed.
Running this command modifies the default table appearance parameters;
each output table generated afterward uses the new parameters.
Following /TABLECELLS SELECT = there must appear a list of cell
classes, enclosed in square brackets. This list determines which
classes of values are selected for modification. Each class can be:
- `RESIDUAL`: Residual values. Default: `F40.2`.
- `CORRELATION`: Correlations. Default: `F40.3`.
- `PERCENT`: Percentages. Default: `PCT40.1`.
- `SIGNIFICANCE`: Significance of tests (p-values). Default: `F40.3`.
- `COUNT`: Counts or sums of weights. For a weighted data set, the
  default is the weight variable's print format. For an unweighted
  data set, the default is `F40.0`.
For most other numeric values that appear in tables, SET FORMAT may be used to specify the format.
FMT_SPEC must be a valid [output
format](../language/datasets/formats/index.md). Not all possible
formats are meaningful for all classes.
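For example, to display significance values with five decimal places
instead of the default three:

```
OUTPUT MODIFY
    /SELECT TABLES
    /TABLECELLS SELECT = [ SIGNIFICANCE ]
    FORMAT = F40.5.
```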
PERMISSIONS
PERMISSIONS
FILE='FILE_NAME'
/PERMISSIONS = {READONLY,WRITEABLE}.
PERMISSIONS changes the permissions of a file. There is one
mandatory subcommand, which specifies the permissions to which the
file should be changed. If you set a file's permission to READONLY,
then the file becomes unwritable by you or anyone else on the
system. If you set the permission to WRITEABLE, then the file
becomes writeable by you; the permissions afforded to others are
unchanged. This command cannot be used if the SAFER
setting is active.
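For instance, to protect a hypothetical system file from accidental
modification:

```
PERMISSIONS FILE='mydata.sav'
    /PERMISSIONS=READONLY.
```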
PRESERVE…RESTORE
PRESERVE.
...
RESTORE.
PRESERVE saves all of the settings that SET can adjust.
A later RESTORE command restores those settings.
PRESERVE can be nested up to five levels deep.
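A typical pattern temporarily changes a setting for a few commands and
then restores the previous state (the format chosen here is
illustrative):

```
PRESERVE.
SET FORMAT=F10.4.
* ... commands that should use the temporary format ...
RESTORE.
```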
SET
SET
(data input)
/BLANKS={SYSMIS,'.',number}
/DECIMAL={DOT,COMMA}
/FORMAT=FMT_SPEC
/EPOCH={AUTOMATIC,YEAR}
/RIB={NATIVE,MSBFIRST,LSBFIRST}
(interaction)
/MXERRS=MAX_ERRS
/MXWARNS=MAX_WARNINGS
/WORKSPACE=WORKSPACE_SIZE
(syntax execution)
/LOCALE='LOCALE'
/MXLOOPS=MAX_LOOPS
/SEED={RANDOM,SEED_VALUE}
/UNDEFINED={WARN,NOWARN}
/FUZZBITS=FUZZBITS
/SCALEMIN=COUNT
(data output)
/CC{A,B,C,D,E}='STRING'
/DECIMAL={DOT,COMMA}
/FORMAT=FMT_SPEC
/LEADZERO={ON,OFF}
/MDISPLAY={TEXT,TABLES}
/SMALL=NUMBER
/WIB={NATIVE,MSBFIRST,LSBFIRST}
(output routing)
/ERRORS={ON,OFF,TERMINAL,LISTING,BOTH,NONE}
/MESSAGES={ON,OFF,TERMINAL,LISTING,BOTH,NONE}
/PRINTBACK={ON,OFF,TERMINAL,LISTING,BOTH,NONE}
/RESULTS={ON,OFF,TERMINAL,LISTING,BOTH,NONE}
(output driver options)
/HEADERS={NO,YES,BLANK}
/LENGTH={NONE,N_LINES}
/WIDTH={NARROW,WIDTH,N_CHARACTERS}
/TNUMBERS={VALUES,LABELS,BOTH}
/TVARS={NAMES,LABELS,BOTH}
/TLOOK={NONE,FILE}
(journal)
/JOURNAL={ON,OFF} ['FILE_NAME']
(system files)
/SCOMPRESSION={ON,OFF}
(security)
/SAFER=ON
/LOCALE='STRING'
(macros)
/MEXPAND={ON,OFF}
/MPRINT={ON,OFF}
/MITERATE=NUMBER
/MNEST=NUMBER
(not yet implemented)
/BASETEXTDIRECTION={AUTOMATIC,RIGHTTOLEFT,LEFTTORIGHT}
/BLOCK='C'
/BOX={'XXX','XXXXXXXXXXX'}
/CACHE={ON,OFF}
/CELLSBREAK=NUMBER
/COMPRESSION={ON,OFF}
/CMPTRANS={ON,OFF}
/HEADER={NO,YES,BLANK}
SET allows the user to adjust several parameters relating to PSPP's
execution. Since there are many subcommands to this command, its
subcommands are examined in groups.
For subcommands that take boolean values, ON and YES are
synonymous, as are OFF and NO.
- Data Input
- Interaction
- Syntax Execution
- Data Output
- Output Routing
- Output Driver
- Journal
- System Files
- Security
- Macros
- Not Yet Implemented
Data Input
SET
/BLANKS={SYSMIS,'.',number}
/DECIMAL={DOT,COMMA}
/FORMAT=FMT_SPEC
/EPOCH={AUTOMATIC,YEAR}
/RIB={NATIVE,MSBFIRST,LSBFIRST}
The data input subcommands affect the way that data is read from data files. The data input subcommands are:
- `BLANKS`
  This is the value assigned to a data item that is empty or contains
  only white space. An argument of `SYSMIS` or `'.'` causes the
  system-missing value to be assigned to null items. This is the
  default. Any real value may be assigned.

- `DECIMAL`
  This value may be set to `DOT` or `COMMA`. Setting it to `DOT`
  causes the decimal point character to be `.` and the grouping
  character to be `,`. Setting it to `COMMA` causes the decimal point
  character to be `,` and the grouping character to be `.`. If the
  setting is `COMMA`, then `,` is not treated as a field separator in
  the `DATA LIST` command. The default value is determined from the
  system locale.

- `FORMAT`
  Changes the default numeric input/output format. The default is
  initially `F8.2`.

- `EPOCH`
  Specifies the range of years used when a 2-digit year is read from a
  data file or used in a date construction expression. If a 4-digit
  year is specified for the epoch, then 2-digit years are interpreted
  starting from that year, known as the epoch. If `AUTOMATIC` (the
  default) is specified, then the epoch begins 69 years before the
  current date.

- `RIB`
  PSPP extension to set the byte ordering (endianness) used for
  reading data in `IB` or `PIB` format. In `MSBFIRST` ordering, the
  most-significant byte appears at the left end of an IB or PIB field.
  In `LSBFIRST` ordering, the least-significant byte appears at the
  left end. `NATIVE`, the default, is equivalent to `MSBFIRST` or
  `LSBFIRST` depending on the native format of the machine running
  PSPP.
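Several of these can be combined in one command. For example, with
`EPOCH=1940` two-digit years fall in the range 1940-2039, so `99` is
read as 1999 and `05` as 2005:

```
SET /BLANKS=SYSMIS
    /DECIMAL=DOT
    /FORMAT=F10.2
    /EPOCH=1940
    /RIB=MSBFIRST.
```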
Interaction
SET
/MXERRS=MAX_ERRS
/MXWARNS=MAX_WARNINGS
/WORKSPACE=WORKSPACE_SIZE
Interaction subcommands affect the way that PSPP interacts with an online user. The interaction subcommands are:

- `MXERRS`
  The maximum number of errors before PSPP halts processing of the
  current command file. The default is 50.

- `MXWARNS`
  The maximum number of warnings + errors before PSPP halts processing
  the current command file. The special value of zero means that all
  warning situations should be ignored. No warnings are issued, except
  a single initial warning advising you that warnings will not be
  given. The default value is 100.
Syntax Execution
SET
/LOCALE='LOCALE'
/MXLOOPS=MAX_LOOPS
/SEED={RANDOM,SEED_VALUE}
/UNDEFINED={WARN,NOWARN}
/FUZZBITS=FUZZBITS
/SCALEMIN=COUNT
Syntax execution subcommands control the way that PSPP commands execute. The syntax execution subcommands are:

- `LOCALE`
  Overrides the system locale for the purpose of reading and writing
  syntax and data files. The argument should be a locale name in the
  general form `LANGUAGE_COUNTRY.ENCODING`, where `LANGUAGE` and
  `COUNTRY` are 2-character language and country abbreviations,
  respectively, and `ENCODING` is an IANA character set name. Example
  locales are `en_US.UTF-8` (UTF-8 encoded English as spoken in the
  United States) and `ja_JP.EUC-JP` (EUC-JP encoded Japanese as spoken
  in Japan).

- `MXLOOPS`
  The maximum number of iterations for an uncontrolled `LOOP`, and for
  any loop in the matrix language. The default `MXLOOPS` is 40.

- `SEED`
  The initial pseudo-random number seed. Set it to a real number or
  to `RANDOM`, to obtain an initial seed from the current time of day.

- `UNDEFINED`
  Currently not used.

- `FUZZBITS`
  The maximum number of bits of errors in the least-significant places
  to accept for rounding up a value that is almost halfway between two
  possibilities for rounding with the `RND` operation. The default
  `FUZZBITS` is 6.

- `SCALEMIN`
  The minimum number of distinct valid values for PSPP to assume that
  a variable has a scale measurement level.

- `WORKSPACE`
  The maximum amount of memory (in kilobytes) that PSPP uses to store
  data being processed. If memory in excess of the workspace size is
  required, then PSPP starts to use temporary files to store the data.
  Setting a higher value means that procedures run faster, but may
  cause other applications to run slower. On platforms without
  virtual memory management, setting a very large workspace may cause
  PSPP to abort.
Data Output
SET
/CC{A,B,C,D,E}='STRING'
/DECIMAL={DOT,COMMA}
/FORMAT=FMT_SPEC
/LEADZERO={ON,OFF}
/MDISPLAY={TEXT,TABLES}
/SMALL=NUMBER
/WIB={NATIVE,MSBFIRST,LSBFIRST}
Data output subcommands affect the format of output data. These subcommands are:

- `CCA`, `CCB`, `CCC`, `CCD`, `CCE`
  Set up custom currency formats.

- `DECIMAL`
  The default `DOT` setting causes the decimal point character to be
  `.`. A setting of `COMMA` causes the decimal point character to be
  `,`.

- `FORMAT`
  Allows the default numeric input/output format to be specified. The
  default is `F8.2`.

- `LEADZERO`
  Controls whether numbers with magnitude less than one are displayed
  with a zero before the decimal point. For example, with `SET
  LEADZERO=OFF`, which is the default, one-half is shown as .5, and
  with `SET LEADZERO=ON`, it is shown as 0.5. This setting affects
  only the `F`, `COMMA`, and `DOT` formats.

- `MDISPLAY`
  Controls how the `PRINT` command within `MATRIX`...`END MATRIX`
  outputs matrices. With the default `TEXT`, `PRINT` outputs matrices
  as text. Change this setting to `TABLES` to instead output matrices
  as pivot tables.

- `SMALL`
  This controls how PSPP formats small numbers in pivot tables, in
  cases where PSPP does not otherwise have a well-defined format for
  the numbers. When such a number has a magnitude less than the value
  set here, PSPP formats the number in scientific notation; otherwise,
  it formats it in standard notation. The default is 0.0001. Set a
  value of 0 to disable scientific notation.

- `WIB`
  PSPP extension to set the byte ordering (endianness) used for
  writing data in `IB` or `PIB` format. In `MSBFIRST` ordering, the
  most-significant byte appears at the left end of an IB or PIB field.
  In `LSBFIRST` ordering, the least-significant byte appears at the
  left end. `NATIVE`, the default, is equivalent to `MSBFIRST` or
  `LSBFIRST` depending on the native format of the machine running
  PSPP.
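A brief sketch combining a few of these settings:

```
SET /LEADZERO=ON
    /SMALL=0.001
    /MDISPLAY=TABLES.
```

With these settings, one-half prints as 0.5, pivot-table numbers
smaller in magnitude than 0.001 use scientific notation, and matrix
`PRINT` output appears as pivot tables.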
Output Routing
SET
/ERRORS={ON,OFF,TERMINAL,LISTING,BOTH,NONE}
/MESSAGES={ON,OFF,TERMINAL,LISTING,BOTH,NONE}
/PRINTBACK={ON,OFF,TERMINAL,LISTING,BOTH,NONE}
/RESULTS={ON,OFF,TERMINAL,LISTING,BOTH,NONE}
In the PSPP text-based interface, the output routing subcommands affect where output is sent. The following values are allowed for each of these subcommands:
- `OFF` or `NONE`
  Discard this kind of output.

- `TERMINAL`
  Write this output to the terminal, but not to listing files and
  other output devices.

- `LISTING`
  Write this output to listing files and other output devices, but not
  to the terminal.

- `ON` or `BOTH`
  Write this type of output to all output devices.
These output routing subcommands are:

- `ERRORS`
  Applies to error and warning messages. The default is `BOTH`.

- `MESSAGES`
  Applies to notes. The default is `BOTH`.

- `PRINTBACK`
  Determines whether the syntax used for input is printed back as part
  of the output. The default is `NONE`.

- `RESULTS`
  Applies to everything not in one of the above categories, such as
  the results of statistical procedures. The default is `BOTH`.
These subcommands have no effect on output in the PSPP GUI environment.
Output Driver
SET
/HEADERS={NO,YES,BLANK}
/LENGTH={NONE,N_LINES}
/WIDTH={NARROW,WIDTH,N_CHARACTERS}
/TNUMBERS={VALUES,LABELS,BOTH}
/TVARS={NAMES,LABELS,BOTH}
/TLOOK={NONE,FILE}
Output driver option subcommands affect output drivers' settings. These subcommands are:
- `HEADERS`

- `LENGTH`

- `TNUMBERS`
  The `TNUMBERS` option sets the way in which values are displayed in
  output tables. The valid settings are `VALUES`, `LABELS`, and
  `BOTH`. If `TNUMBERS` is set to `VALUES`, then all values are
  displayed with their literal value (which for a numeric value is a
  number and for a string value an alphanumeric string). If
  `TNUMBERS` is set to `LABELS`, then values are displayed using their
  assigned value labels, if any. If the value has no label, then the
  literal value is used for display. If `TNUMBERS` is set to `BOTH`,
  then values are displayed with both their label (if any) and their
  literal value in parentheses.

- `TVARS`
  The `TVARS` option sets the way in which variables are displayed in
  output tables. The valid settings are `NAMES`, `LABELS`, and
  `BOTH`. If `TVARS` is set to `NAMES`, then all variables are
  displayed using their names. If `TVARS` is set to `LABELS`, then
  variables are displayed using their variable label, if one has been
  set. If no label has been set, then the name is used. If `TVARS`
  is set to `BOTH`, then variables are displayed with both their label
  (if any) and their name in parentheses.

- `TLOOK`
  The `TLOOK` option sets the style used for subsequent table output.
  Specifying `NONE` makes PSPP use the default built-in style.
  Otherwise, specifying `FILE` makes PSPP search for an `.stt` or
  `.tlo` file in the same way as specifying `--table-look=FILE` on the
  PSPP command line.
Journal
SET
/JOURNAL={ON,OFF} ['FILE_NAME']
Journal subcommands affect logging of commands executed to external files. These subcommands are:

- `JOURNAL` or `LOG`
  These subcommands, which are synonyms, control the journal. The
  default is `ON`, which causes commands entered interactively to be
  written to the journal file. Commands included from syntax files
  that are included interactively and error messages printed by PSPP
  are also written to the journal file, prefixed by `>`. `OFF`
  disables use of the journal.

  The journal is named `pspp.jnl` by default. A different name may be
  specified.
System Files
SET
/SCOMPRESSION={ON,OFF}
System file subcommands affect the default format of system files produced by PSPP. These subcommands are:

- `SCOMPRESSION`
  Whether system files created by `SAVE` or `XSAVE` are compressed by
  default. The default is `ON`.
Security
SET
/SAFER=ON
/LOCALE='STRING'
Security subcommands affect the operations that commands are allowed to perform. The security subcommands are:

- `SAFER`
  Setting this option disables the following operations:

  - The `ERASE` command.
  - The `HOST` command.
  - The `PERMISSIONS` command.
  - Pipes (file names beginning or ending with `|`).

  Be aware that this setting does not guarantee safety (commands can
  still overwrite files, for instance) but it is an improvement. When
  set, this setting cannot be reset during the same session, for
  obvious security reasons.

- `LOCALE`
  This item is used to set the default character encoding. The
  encoding may be specified either as an IANA encoding name or alias,
  or as a locale name. If given as a locale name, only the character
  encoding of the locale is relevant.

  System files written by PSPP use this encoding. System files read
  by PSPP, for which the encoding is unknown, are interpreted using
  this encoding.

  The full list of valid encodings and locale names/aliases is
  operating system dependent. The following are all examples of
  acceptable syntax on common GNU/Linux systems:

      SET LOCALE='iso-8859-1'.
      SET LOCALE='ru_RU.cp1251'.
      SET LOCALE='japanese'.

  Contrary to intuition, this command does not affect any aspect of
  the system's locale.
Macros
SET
/MEXPAND={ON,OFF}
/MPRINT={ON,OFF}
/MITERATE=NUMBER
/MNEST=NUMBER
The following subcommands affect the interpretation of macros. For more information, see Macro Settings.
- `MEXPAND`
  Controls whether macros are expanded. The default is `ON`.

- `MPRINT`
  Controls whether the expansion of macros is included in output.
  This is separate from whether command syntax in general is included
  in output. The default is `OFF`.

- `MITERATE`
  Limits the number of iterations executed in `!DO` loops within
  macros. This does not affect other language constructs such as
  `LOOP`...`END LOOP`. This must be set to a positive integer. The
  default is 1000.

- `MNEST`
  Limits the number of levels of nested macro expansions. This must
  be set to a positive integer. The default is 50.
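A sketch raising the limits for a heavily nested macro library (the
particular values are illustrative):

```
SET /MEXPAND=ON
    /MPRINT=ON
    /MITERATE=5000
    /MNEST=100.
```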
Not Yet Implemented
SET
/BASETEXTDIRECTION={AUTOMATIC,RIGHTTOLEFT,LEFTTORIGHT}
/BLOCK='C'
/BOX={'XXX','XXXXXXXXXXX'}
/CACHE={ON,OFF}
/CELLSBREAK=NUMBER
/COMPRESSION={ON,OFF}
/CMPTRANS={ON,OFF}
/HEADER={NO,YES,BLANK}
The following subcommands are not yet implemented, but PSPP accepts them and ignores the settings:
`BASETEXTDIRECTION`, `BLOCK`, `BOX`, `CACHE`, `CELLSBREAK`, `COMPRESSION`, `CMPTRANS`, and `HEADER`.
SHOW
SHOW
[ALL]
[BLANKS]
[CC]
[CCA]
[CCB]
[CCC]
[CCD]
[CCE]
[COPYING]
[DECIMAL]
[DIRECTORY]
[ENVIRONMENT]
[FORMAT]
[FUZZBITS]
[LENGTH]
[MEXPAND]
[MPRINT]
[MITERATE]
[MNEST]
[MXERRS]
[MXLOOPS]
[MXWARNS]
[N]
[SCOMPRESSION]
[SYSTEM]
[TEMPDIR]
[UNDEFINED]
[VERSION]
[WARRANTY]
[WEIGHT]
[WIDTH]
SHOW displays PSPP's settings and status. Parameters that can be
changed using SET can be examined with the SHOW subcommand of the
same name. SHOW supports the following additional subcommands:
- `ALL`
  Show all settings.

- `CC`
  Show all custom currency settings (`CCA` through `CCE`).

- `DIRECTORY`
  Shows the current working directory.

- `ENVIRONMENT`
  Shows the operating system details.

- `N`
  Reports the number of cases in the active dataset. The reported
  number is not weighted. If no dataset is defined, then `Unknown` is
  reported.

- `SYSTEM`
  Shows information about how PSPP was built. This information is
  useful in bug reports.

- `TEMPDIR`
  Shows the path of the directory where temporary files are stored.

- `VERSION`
  Shows the version of this installation of PSPP.

- `WARRANTY`
  Show details of the lack of warranty for PSPP.

- `COPYING` or `LICENSE`
  Display the terms of PSPP's copyright licence.
Specifying SHOW without any subcommands is equivalent to SHOW ALL.
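For example:

```
SHOW DECIMAL.    /* The current decimal-point setting.
SHOW N.          /* The unweighted number of cases.
SHOW ALL.        /* Every setting; same as plain SHOW.
```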
SUBTITLE
SUBTITLE 'SUBTITLE_STRING'.
or
SUBTITLE SUBTITLE_STRING.
SUBTITLE provides a subtitle to a particular PSPP run. This
subtitle appears at the top of each output page below the title, if
headers are enabled on the output device.
Specify a subtitle as a string in quotes. The alternate syntax that did not require quotes is now obsolete. If it is used then the subtitle is converted to all uppercase.
TITLE
TITLE 'TITLE_STRING'.
or
TITLE TITLE_STRING.
TITLE provides a title to a particular PSPP run. This title
appears at the top of each output page, if headers are enabled on the
output device.
Specify a title as a string in quotes. The alternate syntax that did not require quotes is now obsolete. If it is used then the title is converted to all uppercase.
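For example (the titles are of course illustrative):

```
TITLE 'National Survey Analysis'.
SUBTITLE 'Wave 1 respondents only'.
```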
System File Format
An SPSS system file holds a set of cases and dictionary information that describes how they may be interpreted. The system file format dates back 40+ years and has evolved greatly over that time to support new features, but in a way that facilitates interchange even between the oldest and newest versions of software. This chapter describes the system file format.
- Introduction
- System File Record Structure
- File Header Record
- Variable Record
- Value Labels Records
- Document Record
- Machine Integer Info Record
- Machine Floating-Point Info Record
- Multiple Response Sets Records
- Extra Product Info Record
- Variable Display Parameter Record
- Variable Sets Record
- Long Variable Names Record
- Very Long String Record
- Character Encoding Record
- Long String Value Labels Record
- Long String Missing Values Record
- Data File and Variable Attributes Records
- Extended Number of Cases Record
- Other Informational Records
- Dictionary Termination Record
- Data Record
Introduction
System files use four data types: 8-bit characters, 32-bit integers,
64-bit integers, and 64-bit floating points, called here `char`,
`int32`, `int64`, and `flt64`, respectively. Data is not necessarily
aligned on a word or double-word boundary: the long variable name
record and very long string
record have arbitrary byte length and can
therefore cause all data coming after them in the file to be
misaligned.
Integer data in system files may be big-endian or little-endian. A
reader may detect the endianness of a system file by examining
layout_code in the file header record.
Floating-point data in system files may nominally be in IEEE 754, IBM,
or VAX formats. A reader may detect the floating-point format in use
by examining bias in the file header record.
Only files with IEEE 754 floating point data have actually been
encountered.
PSPP detects big-endian and little-endian integer formats in system files and translates as necessary. PSPP also detects the floating-point format in use, as well as the endianness of IEEE 754 floating-point numbers, and translates as needed. However, only IEEE 754 numbers with the same endianness as integer data in the same file have actually been observed in system files, and it is likely that other formats are obsolete or were never used.
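As a sketch of how a reader might apply this, the following Python fragment tries both byte orders on the raw `layout_code` field (the 4 bytes at offset 64 of the file header) and accepts whichever yields the documented value of 2 or 3. The helper name is illustrative, not part of any API:

```python
import struct

def detect_endianness(layout_code_bytes: bytes) -> str:
    """Guess integer endianness from the raw 4-byte layout_code field.

    layout_code is normally 2 (occasionally 3), so whichever byte order
    yields one of those values is taken as the file's endianness.
    """
    for fmt, name in (("<i", "little"), (">i", "big")):
        (value,) = struct.unpack(fmt, layout_code_bytes)
        if value in (2, 3):
            return name
    raise ValueError("layout_code is not 2 or 3 in either byte order")
```

For example, `detect_endianness(b"\x02\x00\x00\x00")` returns `"little"`.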
System files use a few floating point values for special purposes:

- SYSMIS: The system-missing value is represented by the largest possible negative number in the floating point format (`-DBL_MAX` or `f64::MIN`).
- HIGHEST: `HIGHEST` is used as the high end of a missing value range with an unbounded maximum. It is represented by the largest possible positive number (`DBL_MAX` or `f64::MAX`).
- LOWEST: `LOWEST` is used as the low end of a missing value range with an unbounded minimum. It was originally represented by the second-largest negative number (in IEEE 754 format, `0xffeffffffffffffe`). System files written by SPSS 21 and later instead use the largest negative number (`-DBL_MAX` or `f64::MIN`), the same value as `SYSMIS`. This does not lead to ambiguity because `LOWEST` appears in system files only in missing value ranges, which never contain `SYSMIS`.
System files may use most character encodings based on an 8-bit unit.
UTF-16 and UTF-32, based on wider units, appear to be unacceptable.
rec_type in the file header record is sufficient to distinguish
between ASCII and EBCDIC based encodings. The best way to determine
the specific encoding in use is to consult the character encoding
record, if present, and failing that
character_code in the machine integer info
record. The same encoding should be
used for the dictionary and the data in the file, although it is
possible to artificially synthesize files that use different
encodings.
System File Record Structure
System files are divided into records with the following format:
int32 type;
char data[];
This header does not identify the length of the data or any
information about what it contains, so the system file reader must
understand the format of data based on type. However, records with
type 7, called “extension records”, have a stricter format:
int32 type;
int32 subtype;
int32 size;
int32 count;
char data[size * count];
- `int32 rec_type;`: Record type. Always set to 7.
- `int32 subtype;`: Record subtype. This value identifies a particular kind of extension record.
- `int32 size;`: The size of each piece of data that follows the header, in bytes. Known extension records use 1, 4, or 8, for `char`, `int32`, and `flt64` format data, respectively.
- `int32 count;`: The number of pieces of data that follow the header.
- `char data[size * count];`: Data, whose format and interpretation depend on the subtype.
An extension record contains exactly size * count bytes of data,
which allows a reader that does not understand an extension record to
skip it. Extension records provide only nonessential information, so
this allows for files written by newer software to preserve backward
compatibility with older or less capable readers.
Records in a system file must appear in the following order:

1. File header record.
2. Variable records.
3. All pairs of value labels records and value label variables records, if present.
4. Document record, if present.
5. Extension (type 7) records, in ascending numerical order of their subtypes.
   System files written by SPSS include at most one of each kind of extension record. This is generally true of system files written by other software as well, with known exceptions noted below in the individual sections about each type of record.
6. Dictionary termination record.
7. Data record.
We advise authors of programs that read system files to tolerate format variations. Various kinds of misformatting and corruption have been observed in system files written by SPSS and other software alike. In particular, because extension records provide nonessential information, it is generally better to ignore an extension record entirely than to refuse to read a system file.
The following sections describe the known kinds of records.
File Header Record
A system file begins with the file header, with the following format:
char rec_type[4];
char prod_name[60];
int32 layout_code;
int32 nominal_case_size;
int32 compression;
int32 weight_index;
int32 ncases;
flt64 bias;
char creation_date[9];
char creation_time[8];
char file_label[64];
char padding[3];
- `char rec_type[4];`: Record type code, either `$FL2` for system files with uncompressed data or data compressed with simple bytecode compression, or `$FL3` for system files with ZLIB compressed data.
  This is truly a character field that uses the same character encoding as other strings. Thus, in a file with an ASCII-based character encoding this field contains `24 46 4c 32` or `24 46 4c 33`, and in a file with an EBCDIC-based encoding this field contains `5b c6 d3 f2`. (No EBCDIC-based ZLIB-compressed files have been observed.)
- `char prod_name[60];`: Product identification string. This always begins with the characters `@(#) SPSS DATA FILE`. PSPP uses the remaining characters to give its version and the operating system name; for example, `GNU pspp 0.1.4 - sparc-sun-solaris2.5.2`. The string is truncated if it would be longer than 60 characters; otherwise it is padded on the right with spaces.
  The product name field allows readers to behave differently based on quirks in the way that particular software writes system files. See Value Labels Records for details of the quirk that the PSPP system file reader tolerates in files written by ReadStat, which has `https://github.com/WizardMac/ReadStat` in `prod_name`.
- `int32 layout_code;`: Normally set to 2, although a few system files have been spotted in the wild with a value of 3 here. PSPP uses this value to determine the file's integer endianness.
- `int32 nominal_case_size;`: Number of data elements per case. This is the number of variables, except that long string variables add extra data elements (one for every 8 characters after the first 8). However, string variables do not contribute to this value beyond the first 255 bytes. Further, some software always writes -1 or 0 in this field. In general, it is unsafe for systems reading system files to rely upon this value.
- `int32 compression;`: Set to 0 if the data in the file is not compressed, 1 if the data is compressed with simple bytecode compression, 2 if the data is ZLIB compressed. This field has value 2 if and only if `rec_type` is `$FL3`.
- `int32 weight_index;`: If one of the variables in the data set is used as a weighting variable, set to the dictionary index of that variable. Otherwise, set to 0.
- `int32 ncases;`: Set to the number of cases in the file if it is known, or -1 otherwise.
  In the general case it is not possible to determine the number of cases that will be output to a system file at the time that the header is written. Writers deal with this by writing the entire system file, including the header, then seeking back to the beginning of the file and rewriting just the `ncases` field. For files in which seeking is not possible, the seek operation fails and `ncases` remains -1.
- `flt64 bias;`: Compression bias, usually 100. Only integers between `1 - bias` and `251 - bias` can be compressed.
  By assuming that its value is 100, PSPP uses `bias` to determine the file's floating-point format and endianness. If the compression bias is not 100, PSPP cannot auto-detect the floating-point format and assumes that it is IEEE 754 format with the same endianness as the system file's integers, which is correct for all known system files.
- `char creation_date[9];`: Date of creation of the system file, in `dd mmm yy` format, with the month as a standard English abbreviation, using an initial capital letter followed by lowercase.
  Some files in the corpus have the date in `dd-mmm-yy` format.
- `char creation_time[8];`: Time of creation of the system file, in `hh:mm:ss` format using 24-hour time.
- `char file_label[64];`: File label declared by the user, if any. Padded on the right with spaces.
  A product that identifies itself as `VOXCO INTERVIEWER 4.3` uses CR-only line ends in this field, rather than the more usual LF-only or CR LF line ends.
- `char padding[3];`: Ignored padding bytes to make the structure a multiple of 32 bits in length. Set to zeros.
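To make the layout concrete, here is a minimal Python sketch that unpacks these 176 bytes with the `struct` module, assuming little-endian integers; a real reader would first detect endianness from `layout_code` and re-parse with the opposite byte order if needed. The helper name and returned fields are illustrative, not part of any API:

```python
import struct

HEADER_FMT = "<4s60siiiiid9s8s64s3s"  # little-endian, no padding; 176 bytes

def parse_header(raw: bytes) -> dict:
    """Unpack the 176-byte file header (little-endian assumed)."""
    (rec_type, prod_name, layout_code, nominal_case_size, compression,
     weight_index, ncases, bias, creation_date, creation_time,
     file_label, _padding) = struct.unpack(HEADER_FMT, raw)
    if rec_type not in (b"$FL2", b"$FL3"):
        raise ValueError(f"bad record type {rec_type!r}")
    return {
        "rec_type": rec_type,
        "prod_name": prod_name.rstrip(b" ").decode("ascii", "replace"),
        "layout_code": layout_code,
        "nominal_case_size": nominal_case_size,
        "compression": compression,
        "weight_index": weight_index,
        "ncases": ncases,
        "bias": bias,
        "creation_date": creation_date.decode("ascii", "replace"),
        "creation_time": creation_time.decode("ascii", "replace"),
        "file_label": file_label.rstrip(b" ").decode("ascii", "replace"),
    }
```

Decoding the string fields as ASCII is a simplification; as described above, the actual character encoding should be taken from the character encoding record or the machine integer info record.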
Variable Record
There must be one variable record for each numeric variable and each string variable with width 8 bytes or less. String variables wider than 8 bytes have one variable record for each 8 bytes, rounding up. The first variable record for a long string specifies the variable's correct dictionary information. Subsequent variable records for a long string are filled with dummy information: a type of -1, no variable label or missing values, print and write formats that are ignored, and an empty string as its name. A few system files have been encountered that include a variable label on dummy variable records, so readers should take care to parse dummy variable records in the same way as other variable records.
The "dictionary index" of a variable is a 1-based offset in the set of variable records, including dummy variable records for long string variables. The first variable record has a dictionary index of 1, the second has a dictionary index of 2, and so on.
The system file format does not directly support string variables wider than 255 bytes. Such very long string variables are represented by a number of narrower string variables. See very long string record for details.
A system file should contain at least one variable and thus at least one variable record, but system files have been observed in the wild without any variables (thus, no data either).
int32 rec_type;
int32 type;
int32 has_var_label;
int32 n_missing_values;
int32 print;
int32 write;
char name[8];
/* Present only if `has_var_label` is 1. */
int32 label_len;
char label[];
/* Present only if `n_missing_values` is nonzero. */
flt64 missing_values[];
- `int32 rec_type;`: Record type code. Always set to 2.
- `int32 type;`: Variable type code. Set to 0 for a numeric variable. For a short string variable or the first part of a long string variable, this is set to the width of the string. For the second and subsequent parts of a long string variable, set to -1, and the remaining fields in the structure are ignored.
- `int32 has_var_label;`: If this variable has a variable label, set to 1; otherwise, set to 0.
- `int32 n_missing_values;`: If the variable has no missing values, set to 0. If the variable has one, two, or three discrete missing values, set to 1, 2, or 3, respectively. If the variable has a range of missing values, set to -2; if the variable has a range of missing values plus a single discrete value, set to -3.
  A long string variable always has the value 0 here. A separate record indicates missing values for long string variables.
- `int32 print;`: Print format for this variable. See below.
- `int32 write;`: Write format for this variable. See below.
- `char name[8];`: Variable name. The variable name must begin with a capital letter or the at-sign (`@`). Subsequent characters may also be digits, octothorpes (`#`), dollar signs (`$`), underscores (`_`), or full stops (`.`). The variable name is padded on the right with spaces.
  The `name` fields should be unique within a system file. System files written by SPSS that contain very long string variables with similar names sometimes contain duplicate names that are later eliminated by resolving the very long string names. PSPP handles duplicates by assigning them new, unique names.
- `int32 label_len;`: This field is present only if `has_var_label` is set to 1. It is set to the length, in characters, of the variable label. The documented maximum length varies from 120 to 255 based on SPSS version, but some files have been seen with longer labels. PSPP accepts labels of any length.
- `char label[];`: This field is present only if `has_var_label` is set to 1. It has length `label_len`, rounded up to the nearest multiple of 32 bits. The first `label_len` characters are the variable's variable label.
- `flt64 missing_values[];`: This field is present only if `n_missing_values` is nonzero. It has the same number of 8-byte elements as the absolute value of `n_missing_values`. Each element is interpreted as a number for numeric variables (with `HIGHEST` and `LOWEST` indicated as described in the introduction). For string variables of width less than 8 bytes, elements are right-padded with spaces.
  For discrete missing values, each element represents one missing value. When a range is present, the first element denotes the minimum value in the range, and the second element denotes the maximum value in the range. When a range plus a value are present, the third element denotes the additional discrete missing value.
Format Types
The print and write members of sysfile_variable are output
formats coded into int32 types. The least-significant byte of the
int32 represents the number of decimal places, and the next two bytes
in order of increasing significance represent field width and format
type, respectively. The most-significant byte is not used and should be
set to zero.
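This bit layout can be decoded with simple shifts and masks. The following sketch (hypothetical helper name; the `FORMAT_NAMES` dict is just an excerpt of the format type table in this section) splits a packed format value into its parts:

```python
FORMAT_NAMES = {1: "A", 2: "AHEX", 3: "COMMA", 4: "DOLLAR", 5: "F"}  # excerpt

def decode_format(fmt: int):
    """Split a packed output format int32 into (type, width, decimals).

    The least-significant byte holds the decimal count, the next byte
    the field width, and the byte above that the format type.
    """
    decimals = fmt & 0xFF
    width = (fmt >> 8) & 0xFF
    ftype = (fmt >> 16) & 0xFF
    return ftype, width, decimals
```

For example, an F8.2 format is packed as `0x00050802`: format type 5 (F), width 8, 2 decimal places, so `decode_format(0x00050802)` returns `(5, 8, 2)`.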
Format types are defined as follows:
| Value | Meaning |
|---|---|
| 0 | Not used. |
| 1 | A |
| 2 | AHEX |
| 3 | COMMA |
| 4 | DOLLAR |
| 5 | F |
| 6 | IB |
| 7 | PIBHEX |
| 8 | P |
| 9 | PIB |
| 10 | PK |
| 11 | RB |
| 12 | RBHEX |
| 13 | Not used. |
| 14 | Not used. |
| 15 | Z |
| 16 | N |
| 17 | E |
| 18 | Not used. |
| 19 | Not used. |
| 20 | DATE |
| 21 | TIME |
| 22 | DATETIME |
| 23 | ADATE |
| 24 | JDATE |
| 25 | DTIME |
| 26 | WKDAY |
| 27 | MONTH |
| 28 | MOYR |
| 29 | QYR |
| 30 | WKYR |
| 31 | PCT |
| 32 | DOT |
| 33 | CCA |
| 34 | CCB |
| 35 | CCC |
| 36 | CCD |
| 37 | CCE |
| 38 | EDATE |
| 39 | SDATE |
| 40 | MTIME |
| 41 | YMDHMS |
A few system files have been observed in the wild with invalid
write fields, in particular with value 0. Readers should probably
treat invalid print or write fields as some default format.
Obsolete Treatment of Long String Missing Values
SPSS and most versions of PSPP write missing values for string variables wider than 8 bytes with a Long String Missing Values Record. Very old versions of PSPP instead wrote these missing values on the variables record, writing only the first 8 bytes of each missing value, with the remainder implicitly all spaces. Any new software should use the Long String Missing Values Record, but it might possibly be worthwhile also to accept the old format used by PSPP.
Value Labels Records
The value label records documented in this section are used for numeric and short string variables only. Long string variables may have value labels, but their value labels are recorded using a different record type.
ReadStat writes value labels that label a single value more than once. In more detail, it emits value labels whose values are longer than the string variables' widths but that are identical within the actual width of the variable, e.g. labels for values `ABC123` and `ABC456` for a string variable with width 3. For files written by this software, PSPP ignores such labels.
Value Label Record for Labels
The value label record has the following format:
int32 rec_type;
int32 label_count;
/* Repeated `label_count` times. */
char value[8];
char label_len;
char label[];
- `int32 rec_type;`: Record type. Always set to 3.
- `int32 label_count;`: Number of value labels present in this record.

The remaining fields are repeated `label_count` times. Each repetition specifies one value label.

- `char value[8];`: A numeric value or a short string value padded as necessary to 8 bytes in length. Its type and width cannot be determined until the following value label variables record (see below) is read.
- `char label_len;`: The label's length, in bytes. The documented maximum length varies from 60 to 120 based on SPSS version. PSPP supports value labels up to 255 bytes long.
- `char label[];`: `label_len` bytes of the actual label, followed by up to 7 bytes of padding to bring `label` and `label_len` together to a multiple of 8 bytes in length.
Value Label Record for Variables
The value label record is always immediately followed by a value label variables record with the following format:
int32 rec_type;
int32 var_count;
int32 vars[];
- `int32 rec_type;`: Record type. Always set to 4.
- `int32 var_count;`: Number of variables to which the associated value labels from the value label record are to be applied.
- `int32 vars[];`: A list of 1-based dictionary indexes of variables to which to apply the value labels. There are `var_count` elements.
  String variables wider than 8 bytes may not be specified in this list.
Document Record
The document record, if present, has the following format:
int32 rec_type;
int32 n_lines;
char lines[][80];
- `int32 rec_type;`: Record type. Always set to 6.
- `int32 n_lines;`: Number of lines of documents present. This should be greater than zero, but ReadStat writes system files with zero `n_lines`.
- `char lines[][80];`: Document lines. The number of elements is defined by `n_lines`. Lines shorter than 80 characters are padded on the right with spaces.
Machine Integer Info Record
The integer info record, if present, has the following format:
/* Header. */
int32 rec_type;
int32 subtype;
int32 size;
int32 count;
/* Data. */
int32 version_major;
int32 version_minor;
int32 version_revision;
int32 machine_code;
int32 floating_point_rep;
int32 compression_code;
int32 endianness;
int32 character_code;
- `int32 rec_type;`: Record type. Always set to 7.
- `int32 subtype;`: Record subtype. Always set to 3.
- `int32 size;`: Size of each piece of data in the data part, in bytes. Always set to 4.
- `int32 count;`: Number of pieces of data in the data part. Always set to 8.
- `int32 version_major;`: PSPP major version number. In version X.Y.Z, this is X.
- `int32 version_minor;`: PSPP minor version number. In version X.Y.Z, this is Y.
- `int32 version_revision;`: PSPP version revision number. In version X.Y.Z, this is Z.
- `int32 machine_code;`: Machine code. PSPP always sets this field to -1, but other values may appear.
- `int32 floating_point_rep;`: Floating point representation code. For IEEE 754 systems (the most common) this is 1. IBM 370 is supposed to set this to 2, and DEC VAX E to 3, but neither of these has been observed.
- `int32 compression_code;`: Compression code. Always set to 1, regardless of whether or how the file is compressed.
- `int32 endianness;`: Machine endianness. 1 indicates big-endian, 2 indicates little-endian.
- `int32 character_code;`: Character code. The following values have been actually observed in system files:

  | Value | Meaning |
  |---|---|
  | 1 | EBCDIC. Only one example has been observed. |
  | 2 | 7-bit ASCII. Old versions of SPSS for Unix and Windows always wrote value 2 in this field, regardless of the encoding in use, so it is not reliable and should be ignored. |
  | 3 | 8-bit "ASCII". |
  | 819 | ISO 8859-1 (IBM AIX code page number). |
  | 874, 9066 | The `windows-874` code page for Thai. |
  | 932 | The `windows-932` code page for Japanese (aka `Shift_JIS`). |
  | 936 | The `windows-936` code page for simplified Chinese (aka `GBK`). |
  | 949 | Probably `ks_c_5601-1987`, Unified Hangul Code. |
  | 950 | The `big5` code page for traditional Chinese. |
  | 1250 | The `windows-1250` code page for Central European and Eastern European languages. |
  | 1251 | The `windows-1251` code page for Cyrillic languages. |
  | 1252 | The `windows-1252` code page for Western European languages. |
  | 1253 | The `windows-1253` code page for modern Greek. |
  | 1254 | The `windows-1254` code page for Turkish. |
  | 1255 | The `windows-1255` code page for Hebrew. |
  | 1256 | The `windows-1256` code page for Arabic script. |
  | 1257 | The `windows-1257` code page for Estonian, Latvian, and Lithuanian. |
  | 1258 | The `windows-1258` code page for Vietnamese. |
  | 20127 | US-ASCII. |
  | 28591 | ISO 8859-1 (Latin-1). |
  | 28592 | ISO 8859-2 (Central European). |
  | 28605 | ISO 8859-15 (Latin-9). |
  | 51949 | The `euc-kr` code page for Korean. |
  | 65001 | UTF-8. |

  The following additional values are known to be defined:

  | Value | Meaning |
  |---|---|
  | 3 | 8-bit "ASCII". |
  | 4 | DEC Kanji. |

  The most common values observed, from most to least common, are 1252, 65001, 2, and 28591. Other Windows code page numbers are known to be generally valid.

  Newer versions also write the character encoding as a string.
Machine Floating-Point Info Record
The floating-point info record, if present, has the following format:
/* Header. */
int32 rec_type;
int32 subtype;
int32 size;
int32 count;
/* Data. */
flt64 sysmis;
flt64 highest;
flt64 lowest;
- `int32 rec_type;`: Record type. Always set to 7.
- `int32 subtype;`: Record subtype. Always set to 4.
- `int32 size;`: Size of each piece of data in the data part, in bytes. Always set to 8.
- `int32 count;`: Number of pieces of data in the data part. Always set to 3.
- `flt64 sysmis;`, `flt64 highest;`, `flt64 lowest;`: The system missing value, the value used for `HIGHEST` in missing values, and the value used for `LOWEST` in missing values, respectively. See the introduction for more information.
  The SPSSWriter library in PHP, which identifies itself as `FOM SPSS 1.0.0` in the file header record `prod_name` field, writes unexpected values to these fields, but it uses the same values consistently throughout the rest of the file.
Multiple Response Sets Records
The system file format has two different types of records that
represent multiple response sets. The first type of record describes
multiple response sets that can be understood by SPSS before
version 14. The second type of record, with a closely related format,
is used for multiple dichotomy sets that use the
CATEGORYLABELS=COUNTEDVALUES feature added in version 14.
/* Header. */
int32 rec_type;
int32 subtype;
int32 size;
int32 count;
/* Exactly `count` bytes of data. */
char mrsets[];
- `int32 rec_type;`: Record type. Always set to 7.
- `int32 subtype;`: Record subtype. Set to 7 for records that describe multiple response sets understood by SPSS before version 14, or to 19 for records that describe dichotomy sets that use the `CATEGORYLABELS=COUNTEDVALUES` feature added in version 14.
- `int32 size;`: The size of each element in the `mrsets` member. Always set to 1.
- `int32 count;`: The total number of bytes in `mrsets`.
- `char mrsets[];`: Zero or more line feeds (byte 0x0a), followed by a series of multiple response sets, each of which consists of the following:
  - The set's name (an identifier that begins with `$`), in mixed upper and lower case.
  - An equals sign (`=`).
  - `C` for a multiple category set, `D` for a multiple dichotomy set with `CATEGORYLABELS=VARLABELS`, or `E` for a multiple dichotomy set with `CATEGORYLABELS=COUNTEDVALUES`.
  - For a multiple dichotomy set with `CATEGORYLABELS=COUNTEDVALUES`, a space, followed by a number expressed as decimal digits, followed by a space. If `LABELSOURCE=VARLABEL` was specified on MRSETS, then the number is 11; otherwise it is 1.
  - For either kind of multiple dichotomy set, the counted value, as a positive integer count specified as decimal digits, followed by a space, followed by as many string bytes as specified in the count. If the set contains numeric variables, the string consists of the counted integer value expressed as decimal digits. If the set contains string variables, the string contains the counted string value. Either way, the string may be padded on the right with spaces (older versions of SPSS seem to always pad to a width of 8 bytes; newer versions don't).
  - A space.
  - The multiple response set's label, using the same format as for the counted value for multiple dichotomy sets. A string of length 0 means that the set does not have a label. A string of length 0 is also written if `LABELSOURCE=VARLABEL` was specified.
  - The short names of the variables in the set, converted to lowercase, each preceded by a single space.
    Even though a multiple response set must have at least two variables, some system files contain multiple response sets with no variables or one variable. The source and meaning of these multiple response sets is unknown. (Perhaps they arise from creating a multiple response set then deleting all the variables that it contains?)
  - One line feed (byte 0x0a). Sometimes multiple line feeds, even hundreds, are present.

Example: Given appropriate variable definitions, consider the following MRSETS command:

MRSETS /MCGROUP NAME=$a LABEL='my mcgroup' VARIABLES=a b c
       /MDGROUP NAME=$b VARIABLES=g e f d VALUE=55
       /MDGROUP NAME=$c LABEL='mdgroup #2' VARIABLES=h i j VALUE='Yes'
       /MDGROUP NAME=$d LABEL='third mdgroup' CATEGORYLABELS=COUNTEDVALUES
        VARIABLES=k l m VALUE=34
       /MDGROUP NAME=$e CATEGORYLABELS=COUNTEDVALUES LABELSOURCE=VARLABEL
        VARIABLES=n o p VALUE='choice'.

The above would generate the following multiple response set record of subtype 7:

$a=C 10 my mcgroup a b c
$b=D2 55 0 g e f d
$c=D3 Yes 10 mdgroup #2 h i j

It would also generate the following multiple response set record with subtype 19:

$d=E 1 2 34 13 third mdgroup k l m
$e=E 11 6 choice 0 n o p
Extra Product Info Record
This optional record appears to contain a text string that describes the program that wrote the file and the source of the data. (This is redundant with the file label and product info found in the file header record.)
/* Header. */
int32 rec_type;
int32 subtype;
int32 size;
int32 count;
/* Exactly `count` bytes of data. */
char info[];
- `int32 rec_type;`: Record type. Always set to 7.
- `int32 subtype;`: Record subtype. Always set to 10.
- `int32 size;`: The size of each element in the `info` member. Always set to 1.
- `int32 count;`: The total number of bytes in `info`.
- `char info[];`: A text string. A product that identifies itself as `VOXCO INTERVIEWER 4.3` uses CR-only line ends in this field, rather than the more usual LF-only or CR LF line ends.
Variable Display Parameter Record
The variable display parameter record, if present, has the following format:
/* Header. */
int32 rec_type;
int32 subtype;
int32 size;
int32 count;
/* Repeated `count` times. */
int32 measure;
int32 width; /* Not always present. */
int32 alignment;
- `int32 rec_type;`: Record type. Always set to 7.
- `int32 subtype;`: Record subtype. Always set to 11.
- `int32 size;`: The size of `int32`. Always set to 4.
- `int32 count;`: The number of sets of variable display parameters (ordinarily the number of variables in the dictionary), times 2 or 3.

The remaining members are repeated `count` times, in the same order as the variable records. No element corresponds to variable records that continue long string variables. The meanings of these members are as follows:

- `int32 measure;`: The measurement level of the variable:

  | Value | Level |
  |---|---|
  | 0 | Unknown |
  | 1 | Nominal |
  | 2 | Ordinal |
  | 3 | Scale |

  An "unknown" `measure` of 0 means that the variable was created in some way that doesn't make the measurement level clear, e.g. with a `COMPUTE` transformation. PSPP sets the measurement level the first time it reads the data, so this should rarely appear.
- `int32 width;`: The width of the display column for the variable in characters.
  This field is present if `count` is 3 times the number of variables in the dictionary. It is omitted if `count` is 2 times the number of variables.
- `int32 alignment;`: The alignment of the variable for display purposes:

  | Value | Alignment |
  |---|---|
  | 0 | Left aligned |
  | 1 | Right aligned |
  | 2 | Centre aligned |
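A reader must decide whether the per-variable group includes the width field before parsing the record body. A sketch of that test (hypothetical helper; `n_vars` means the number of variable records that do not continue long strings):

```python
def display_params_have_width(count: int, n_vars: int) -> bool:
    """Decide whether each per-variable group includes the width field.

    `count` comes from the extension record header; the record carries
    either (measure, alignment) or (measure, width, alignment) per
    variable, so count is 2 or 3 times the variable count.
    """
    if count == 3 * n_vars:
        return True
    if count == 2 * n_vars:
        return False
    raise ValueError("count is neither 2 nor 3 times the variable count")
```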
Variable Sets Record
The SPSS GUI offers users the ability to arrange variables in sets. Users may enable and disable sets individually, and the data editor and analysis dialog boxes only show enabled sets. Syntax does not use variable sets.
The variable sets record, if present, has the following format:
/* Header. */
int32 rec_type;
int32 subtype;
int32 size;
int32 count;
/* Exactly `count` bytes of text. */
char text[];
- `int32 rec_type;`: Record type. Always set to 7.
- `int32 subtype;`: Record subtype. Always set to 5.
- `int32 size;`: Always set to 1.
- `int32 count;`: The total number of bytes in `text`.
- `char text[];`: The variable sets, in a text-based format.
  Each variable set occupies one line of text, each of which ends with a line feed (byte 0x0a), optionally preceded by a carriage return (byte 0x0d).
  Each line begins with the name of the variable set, followed by an equals sign (`=`) and a space (byte 0x20), followed by the long variable names of the members of the set, separated by spaces. A variable set may be empty, in which case the equals sign and the space following it are still present.
Long Variable Names Record
If present, the long variable names record has the following format:
/* Header. */
int32 rec_type;
int32 subtype;
int32 size;
int32 count;
/* Exactly `count` bytes of data. */
char var_name_pairs[];
- `int32 rec_type;`: Record type. Always set to 7.
- `int32 subtype;`: Record subtype. Always set to 13.
- `int32 size;`: The size of each element in the `var_name_pairs` member. Always set to 1.
- `int32 count;`: The total number of bytes in `var_name_pairs`.
- `char var_name_pairs[];`: A list of key-value tuples, where each key is the name of a variable, and the value is its long variable name. The key field is at most 8 bytes long and must match the name of a variable which appears in the variable record. The value field is at most 64 bytes long. The key and value fields are separated by a `=` byte. Each tuple is separated by a byte whose value is 09. There is no trailing separator following the last tuple. The total length is `count` bytes.
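Parsing the payload is a matter of splitting on the separator bytes. A sketch (hypothetical helper; a real reader would decode with the file's actual character encoding rather than ASCII):

```python
def parse_long_names(var_name_pairs: bytes, encoding="ascii") -> dict:
    """Split the var_name_pairs payload into {short name: long name}.

    Tuples are SHORT=LONG, separated by byte 0x09 with no trailing
    separator after the last tuple.
    """
    mapping = {}
    for pair in var_name_pairs.split(b"\x09"):
        short, _, long_name = pair.partition(b"=")
        mapping[short.decode(encoding)] = long_name.decode(encoding)
    return mapping
```

For example, `parse_long_names(b"V1=interview_date\x09V2=region")` maps the short names `V1` and `V2` to their long forms.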
Very Long String Record
Old versions of SPSS limited string variables to a width of 255 bytes. For backward compatibility with these older versions, the system file format represents a string longer than 255 bytes, called a “very long string”, as a collection of strings no longer than 255 bytes each. The strings concatenated to make a very long string are called its “segments”; for consistency, variables other than very long strings are considered to have a single segment.
A very long string with a width of w has n = (w + 251) / 252
segments, that is, one segment for every 252 bytes of width, rounding
up. It would be logical, then, for each of the segments except the
last to have a width of 252 and the last segment to have the
remainder, but this is not the case. In fact, each segment except the
last has a width of 255 bytes. The last has width w - (n - 1) * 252; some versions of SPSS make it slightly wider, but not wide
enough to make the last segment require another 8 bytes of data.
Data is packed tightly into segments of a very long string, 255 bytes per segment. Because 255 bytes of segment data are allocated for every 252 bytes of the very long string's width (approximately), some unused space is left over at the end of the allocated segments. Data in unused space is ignored.
Example: Consider a very long string of width 20,000. Such a very long string has 20,000 / 252 = 80 (rounding up) segments. The first 79 segments have width 255; the last segment has width 20,000 - 79 * 252 = 92 or slightly wider (up to 96 bytes, the next multiple of 8). The very long string's data is actually stored in the 19,890 bytes in the first 78 segments, plus the first 110 bytes of the 79th segment (19,890 + 110 = 20,000). The remaining 145 bytes of the 79th segment and all 92 bytes of the 80th segment are unused.
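The segment arithmetic above can be sketched in a few lines of Python (hypothetical helper name; widths follow the rules in this section, ignoring the slight widening some SPSS versions apply to the last segment):

```python
def vls_segments(width: int):
    """Return the nominal widths of the segments backing a very long string.

    A width-`width` very long string uses one segment per 252 bytes of
    width, rounding up; every segment but the last is 255 bytes wide,
    and the last holds the remainder.
    """
    n = (width + 251) // 252       # number of segments, rounding up
    last = width - (n - 1) * 252   # nominal width of the final segment
    return [255] * (n - 1) + [last]
```

For the width-20,000 example above, this yields 80 segments: 79 of width 255 and a final segment of width 92.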
The very long string record explains how to stitch together segments to obtain very long string data. For each of the very long string variables in the dictionary, it specifies the name of its first segment's variable and the very long string variable's actual width. The remaining segments immediately follow the named variable in the system file's dictionary.
The very long string record, which is present only if the system file contains very long string variables, has the following format:
/* Header. */
int32 rec_type;
int32 subtype;
int32 size;
int32 count;
/* Exactly `count` bytes of data. */
char string_lengths[];
-
int32 rec_type;Record type. Always set to 7.
-
int32 subtype;Record subtype. Always set to 14.
-
int32 size;The size of each element in the
string_lengthsmember. Always set to 1. -
int32 count;The total number of bytes in
string_lengths. -
char string_lengths[];A list of key-value tuples, where key is the name of a variable, and value is its length. The key field is at most 8 bytes long and must match the name of a variable which appears in the variable record. The value field is exactly 5 bytes long. It is a zero-padded, ASCII-encoded string that is the length of the variable. The key and value fields are separated by a `=` byte. Tuples are delimited by a two-byte sequence {00, 09}. After the last tuple, there may be a single byte 00, or {00, 09}. The total length is `count` bytes.
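A minimal sketch of decoding this record's payload, assuming the tuple layout described above (the function name is invented for illustration):

```python
def parse_very_long_strings(payload):
    """Parse a subtype-14 payload into {variable name: declared width}.

    Tuples look like b'NAME=LLLLL' and are delimited by b'\\x00\\x09';
    a trailing b'\\x00' or b'\\x00\\x09' may follow the last tuple."""
    widths = {}
    for tup in payload.split(b'\x00\x09'):
        tup = tup.rstrip(b'\x00')     # tolerate a trailing 00 byte
        if not tup:
            continue
        name, _, length = tup.partition(b'=')
        widths[name.decode('ascii')] = int(length.decode('ascii'))
    return widths
```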
Character Encoding Record
This record, if present, indicates the character encoding for string data, long variable names, variable labels, value labels and other strings in the file.
/* Header. */
int32 rec_type;
int32 subtype;
int32 size;
int32 count;
/* Exactly `count` bytes of data. */
char encoding[];
-
int32 rec_type;Record type. Always set to 7.
-
int32 subtype;Record subtype. Always set to 20.
-
int32 size;The size of each element in the `encoding` member. Always set to 1. -
int32 count;The total number of bytes in `encoding`. -
char encoding[];The name of the character encoding. Normally this will be an official IANA character set name or alias. Character set names are not case-sensitive, and SPSS is not consistent, e.g. `windows-1251` and `WINDOWS-1252` have both been observed, as have `Big5` and `BIG5`.
This record is not present in files generated by older software. See
also character_code in the machine integer info
record.
The following character encoding names have been observed. The names
are shown in lowercase, even though they were not always in lowercase in
the file. Alternative names for the same encoding are, when known,
listed together. For each encoding, the character_code values that
they were observed paired with are also listed. First, the following
are strictly single-byte, ASCII-compatible encodings:
-
(encoding record missing)
0, 2, 3, 874, 1250, 1251, 1252, 1253, 1254, 1255, 1256, 20127, 28591, 28592, 28605
-
ansi_x3.4-1968, ascii
1252
-
cp28605
2
-
cp874
9066
-
iso-8859-1
819
-
windows-874
874
-
windows-1250
2, 1250, 1252
-
windows-1251
2, 1251
-
cp1252, windows-1252
2, 1250, 1252, 1253
-
cp1253, windows-1253
1253
-
windows-1254
2, 1254
-
windows-1255
2, 1255
-
windows-1256
2, 1252, 1256
-
windows-1257
2, 1257
-
windows-1258
1258
The others are multibyte encodings, in which some code points occupy a single byte and others multiple bytes. The following multibyte encodings are "ASCII compatible," that is, they use ASCII values only to indicate ASCII:
-
(encoding record missing)
65001, 949
-
euc-kr
2, 51949
-
utf-8
0, 2, 1250, 1251, 1252, 1256, 65001
The following multibyte encodings are not ASCII compatible, that is, while they encode ASCII characters as their native values, they also use ASCII values as second or later bytes in multibyte sequences:
-
(encoding record missing)
932, 936, 950
-
big5, cp950
2, 950
-
gbk
936
-
cp932, windows-31j
932
As the tables above show, when the character encoding record and the machine integer info record are both present, they can contradict each other. Observations show that, in this case, the character encoding record should be honored.
If, for testing purposes, a file is crafted with different
character_code and encoding, it seems that character_code
controls the encoding for all strings in the system file before the
dictionary termination record, including strings in data (e.g. string
missing values), and encoding controls the encoding for strings
following the dictionary termination record.
Long String Value Labels Record
This record, if present, specifies value labels for long string variables.
/* Header. */
int32 rec_type;
int32 subtype;
int32 size;
int32 count;
/* Repeated up to exactly `count` bytes. */
int32 var_name_len;
char var_name[];
int32 var_width;
int32 n_labels;
long_string_label labels[];
-
int32 rec_type;Record type. Always set to 7.
-
int32 subtype;Record subtype. Always set to 21.
-
int32 size;Always set to 1.
-
int32 count;The number of bytes following the header until the next header.
-
int32 var_name_len;
char var_name[];The number of bytes in the name of the variable that has long string value labels, plus the variable name itself, which consists of exactly `var_name_len` bytes. The variable name is not padded to any particular boundary, nor is it null-terminated. -
int32 var_width;The width of the variable, in bytes, which will be between 9 and 32767.
-
int32 n_labels;
long_string_label labels[];The long string labels themselves. The `labels` array contains exactly `n_labels` elements, each of which has the following substructure:
int32 value_len;
char value[];
int32 label_len;
char label[];
-
int32 value_len;
char value[];The string value being labeled. `value_len` is the number of bytes in `value`; it is equal to `var_width`. The `value` array is not padded or null-terminated. -
int32 label_len;
char label[];The label for the string value. `label_len`, which must be between 0 and 120, is the number of bytes in `label`. The `label` array is not padded or null-terminated.
-
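Under the layout above, a reader might proceed as in this sketch (illustrative Python; little-endian int32s are assumed, and a real reader must honor the endianness declared by the file header):

```python
import struct

def parse_long_string_value_labels(payload, encoding='utf-8'):
    """Parse a subtype-21 payload into {variable name: {value: label}}."""
    pos = 0

    def read_int():
        nonlocal pos
        (v,) = struct.unpack_from('<i', payload, pos)  # little-endian assumed
        pos += 4
        return v

    def read_bytes(n):
        nonlocal pos
        b = payload[pos:pos + n]
        pos += n
        return b

    result = {}
    while pos < len(payload):
        var_name = read_bytes(read_int()).decode(encoding)
        read_int()                        # var_width (unused in this sketch)
        labels = {}
        for _ in range(read_int()):       # n_labels
            value = read_bytes(read_int()).decode(encoding)
            label = read_bytes(read_int()).decode(encoding)
            labels[value] = label
        result[var_name] = labels
    return result
```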
Long String Missing Values Record
This record, if present, specifies missing values for long string variables.
/* Header. */
int32 rec_type;
int32 subtype;
int32 size;
int32 count;
/* Repeated up to exactly `count` bytes. */
int32 var_name_len;
char var_name[];
char n_missing_values;
int32 value_len;
char values[value_len * n_missing_values];
-
int32 rec_type;Record type. Always set to 7.
-
int32 subtype;Record subtype. Always set to 22.
-
int32 size;Always set to 1.
-
int32 count;The number of bytes following the header until the next header.
-
int32 var_name_len;
char var_name[];The number of bytes in the name of the long string variable that has missing values, plus the variable name itself, which consists of exactly `var_name_len` bytes. The variable name is not padded to any particular boundary, nor is it null-terminated. -
char n_missing_values;The number of missing values, either 1, 2, or 3. (This is, unusually, a single byte instead of a 32-bit number.)
-
int32 value_len;The length of each missing value string, in bytes. This value should be 8, because long string variables are at least 8 bytes wide (by definition), only the first 8 bytes of a long string variable's missing values are allowed to be non-spaces, and any spaces within the first 8 bytes are included in the missing value here.
-
char values[value_len * n_missing_values];The missing values themselves, without any padding or null terminators.
An earlier version of this document stated that value_len was
repeated before each of the missing values, so that there was an extra
int32 value of 8 before each missing value after the first. Old
versions of PSPP wrote data files in this format. Readers can tolerate
this mistake, if they wish, by noticing and skipping the extra int32
values, which wouldn't ordinarily occur in strings.
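A reader for this record, including tolerance for the old PSPP mistake just described, might look like this sketch (illustrative Python, little-endian assumed):

```python
import struct

def parse_long_string_missing_values(payload):
    """Parse a subtype-22 payload into {variable name: [missing values]}."""
    out = {}
    pos = 0
    while pos < len(payload):
        (name_len,) = struct.unpack_from('<i', payload, pos); pos += 4
        name = payload[pos:pos + name_len].decode('ascii'); pos += name_len
        n_missing = payload[pos]; pos += 1        # a single byte, per the spec
        (value_len,) = struct.unpack_from('<i', payload, pos); pos += 4
        values = []
        for i in range(n_missing):
            # Tolerance for old PSPP output: skip a repeated value_len
            # before each value after the first (such an int32 would not
            # ordinarily occur inside string data).
            if i > 0 and payload[pos:pos + 4] == struct.pack('<i', value_len):
                pos += 4
            values.append(payload[pos:pos + value_len]); pos += value_len
        out[name] = values
    return out
```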
Data File and Variable Attributes Records
The data file and variable attributes records represent custom
attributes for the system file or for individual variables in the
system file, as defined on the DATAFILE ATTRIBUTE and VARIABLE ATTRIBUTE commands, respectively.
/* Header. */
int32 rec_type;
int32 subtype;
int32 size;
int32 count;
/* Exactly `count` bytes of data. */
char attributes[];
-
int32 rec_type;Record type. Always set to 7.
-
int32 subtype;Record subtype. Always set to 17 for a data file attribute record or to 18 for a variable attributes record.
-
int32 size;The size of each element in the `attributes` member. Always set to 1. -
int32 count;The total number of bytes in `attributes`. -
char attributes[];The attributes, in a text-based format.
In record subtype 17, this field contains a single attribute set. An attribute set is a sequence of one or more attributes concatenated together. Each attribute consists of a name, which has the same syntax as a variable name, followed by, inside parentheses, a sequence of one or more values. Each value consists of a string enclosed in single quotes (`'`) followed by a line feed (byte 0x0a). A value may contain single quote characters, which are not themselves escaped or quoted or required to be present in pairs. There is no apparent way to embed a line feed in a value. There is no distinction between an attribute with a single value and an attribute array with one element.
In record subtype 18, this field contains a sequence of one or more variable attribute sets. If more than one variable attribute set is present, each one after the first is delimited from the previous by `/`. Each variable attribute set consists of a long variable name, followed by `:`, followed by an attribute set with the same syntax as on record subtype 17.
System files written by `Stata 14.1/-savespss- 1.77 by S.Radyakin` may include multiple records with subtype 18, one per variable that has variable attributes.
The total length is `count` bytes.
Example: A system file produced with the following `VARIABLE ATTRIBUTE` commands in effect:
VARIABLE ATTRIBUTE VARIABLES=dummy ATTRIBUTE=fred[1]('23') fred[2]('34').
VARIABLE ATTRIBUTE VARIABLES=dummy ATTRIBUTE=bert('123').
will contain a variable attribute record with the following contents:
0000 07 00 00 00 12 00 00 00 01 00 00 00 22 00 00 00 |............"...|
0010 64 75 6d 6d 79 3a 66 72 65 64 28 27 32 33 27 0a |dummy:fred('23'.|
0020 27 33 34 27 0a 29 62 65 72 74 28 27 31 32 33 27 |'34'.)bert('123'|
0030 0a 29                                           |.)              |
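The attribute-set syntax for subtype 17 can be parsed along these lines (a sketch that relies on each value ending at a quote followed by a line feed, as described above; the function name is invented):

```python
import re

def parse_attribute_set(text):
    """Parse an attribute set: one or more NAME(...) groups, where each
    group holds values of the form 'VALUE' followed by a line feed."""
    attrs = {}
    pos = 0
    while pos < len(text):
        m = re.match(r"([^(]+)\(", text[pos:])
        name = m.group(1)
        pos += m.end()
        values = []
        while text[pos] != ')':
            # A value runs from the opening quote to the first '<LF>;
            # embedded single quotes are legal and unescaped.
            end = text.index("'\n", pos + 1)
            values.append(text[pos + 1:end])
            pos = end + 2
        pos += 1                            # skip ')'
        attrs[name] = values
    return attrs
```

Applied to the decoded payload from the example above, this yields `{'dummy:fred': ['23', '34'], 'bert': ['123']}`; splitting the subtype-18 variable name off at the `:` is left to the caller.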
Variable Roles
A variable's role is represented as an attribute named $@Role. This
attribute has a single element whose values and their meanings are:
| Value | Role |
|---|---|
| 0 | Input |
| 1 | Target |
| 2 | Both |
| 3 | None |
| 4 | Partition |
| 5 | Split |
The default and most common role is 0 (input).
Extended Number of Cases Record
ncases in the file header record expresses
the number of cases in the system file as an int32. This record
allows the number of cases in the system file to be expressed as a
64-bit number.
int32 rec_type;
int32 subtype;
int32 size;
int32 count;
int64 unknown;
int64 ncases64;
-
int32 rec_type;Record type. Always set to 7.
-
int32 subtype;Record subtype. Always set to 16.
-
int32 size;Size of each element. Always set to 8.
-
int32 count;Number of pieces of data in the data part. Always set to 2.
-
int64 unknown;Meaning unknown. Always set to 1.
-
int64 ncases64;Number of cases in the file as a 64-bit integer. Presumably this could be -1 to indicate that the number of cases is unknown, for the same reason as `ncases` in the file header record, but this has not been observed in the wild.
Other Informational Records
Many specific types of extension records are documented in this chapter, but others are known to exist. PSPP ignores unknown extension records when reading system files.
The following extension record subtypes have also been observed, with the following believed meanings:
-
6
Date info, probably related to USE (according to Aapi Hämäläinen).
-
12
A UUID in the format described in RFC 4122. Only two examples observed, both written by SPSS 13, and in each case the UUID contained both upper and lower case.
-
24
XML that describes how data in the file should be displayed on-screen.
Dictionary Termination Record
The dictionary termination record separates all other records from the data records.
int32 rec_type;
int32 filler;
-
int32 rec_type;Record type. Always set to 999.
-
int32 filler;Ignored padding. Should be set to 0.
Data Record
The data record must follow all other records in the system file. Every
system file must have a data record that specifies data for at least one
case. The format of the data record varies depending on the value of
compression in the file header record:
-
0: no compression
Data is arranged as a series of 8-byte elements. Each element corresponds to the variable declared in the respective variable record. Numeric values are given in `flt64` format; string values are literal character strings, padded on the right when necessary to fill out 8-byte units. -
1: bytecode compression
The first 8 bytes of the data record is divided into a series of 1-byte command codes. These codes have meanings as described below:
-
0
Ignored. If the program writing the system file accumulates compressed data in blocks of fixed length, 0 bytes can be used to pad out extra bytes remaining at the end of a fixed-size block.
-
1 through 251
A number with value `code - bias`, where `code` is the value of the compression code and `bias` comes from the file header. Example: code 105 with bias 100.0 (the normal value) indicates a numeric value of 5.
A code of 0 (after subtracting the bias) in a string field encodes null bytes. This is unusual, since a string field normally encodes text data, but it exists in real system files.
-
252
End of file. This code may or may not appear at the end of the data stream. PSPP always outputs this code but its use is not required.
-
253
A numeric or string value that is not compressible. The value is stored in the 8 bytes following the current block of command bytes. If this value appears twice in a block of command bytes, then it indicates the second group of 8 bytes following the command bytes, and so on.
-
254
An 8-byte string value that is all spaces.
-
255
The system-missing value.
The end of the 8-byte group of bytecodes is followed by any 8-byte blocks of non-compressible values indicated by code 253. After that follows another 8-byte group of bytecodes, then those bytecodes' non-compressible values. The pattern repeats to the end of the file or a code with value 252.
-
-
2: ZLIB compression
The data record consists of the following, in order:
-
ZLIB data header, 24 bytes long.
-
One or more variable-length blocks of ZLIB compressed data.
-
ZLIB data trailer, with a 24-byte fixed header plus an additional 24 bytes for each preceding ZLIB compressed data block.
The ZLIB data header has the following format:
int64 zheader_ofs;
int64 ztrailer_ofs;
int64 ztrailer_len;
-
int64 zheader_ofs;The offset, in bytes, of the beginning of this structure within the system file. A reader does not need to use this, so it can ignore it (PSPP issues a warning if it does not match its own offset).
-
int64 ztrailer_ofs;The offset, in bytes, of the first byte of the ZLIB data trailer.
-
int64 ztrailer_len;The number of bytes in the ZLIB data trailer. This and the previous field sum to the size of the system file in bytes.
The data header is followed by `(ztrailer_len - 24) / 24` ZLIB compressed data blocks. Each ZLIB compressed data block begins with a ZLIB header as specified in RFC 1950, e.g. hex bytes `78 01` (the only header yet observed in practice). Each block decompresses to a fixed number of bytes (in practice only `0x3ff000`-byte blocks have been observed), except that the last block of data may be shorter. The last ZLIB compressed data block ends just before offset `ztrailer_ofs`.
The result of ZLIB decompression is bytecode compressed data as described above for compression format 1.
The ZLIB data trailer begins with the following 24-byte fixed header:
int64 int_bias;
int64 zero;
int32 block_size;
int32 n_blocks;
-
int64 int_bias;The compression bias as a negative integer, e.g. if `bias` in the file header record is 100.0, then `int_bias` is -100 (this is the only value yet observed in practice). -
int64 zero;Always observed to be zero.
-
int32 block_size;The number of bytes in each ZLIB compressed data block, except possibly the last, following decompression. Only `0x3ff000` has been observed so far. -
int32 n_blocks;The number of ZLIB compressed data blocks, always exactly `(ztrailer_len - 24) / 24`.
The fixed header is followed by `n_blocks` 24-byte ZLIB data block descriptors, each of which describes the compressed data block corresponding to its offset. Each block descriptor has the following format:
int64 uncompressed_ofs;
int64 compressed_ofs;
int32 uncompressed_size;
int32 compressed_size;
-
int64 uncompressed_ofs;The offset, in bytes, that this block of data would have in a similar system file that uses compression format 1. This is `zheader_ofs` in the first block descriptor, and in each succeeding block descriptor it is the sum of the previous descriptor's `uncompressed_ofs` and `uncompressed_size`. -
int64 compressed_ofs;The offset, in bytes, of the actual beginning of this compressed data block. This is `zheader_ofs + 24` in the first block descriptor, and in each succeeding block descriptor it is the sum of the previous descriptor's `compressed_ofs` and `compressed_size`. The final block descriptor's `compressed_ofs` and `compressed_size` sum to `ztrailer_ofs`. -
int32 uncompressed_size;The number of bytes in this data block, after decompression. This is `block_size` in every data block except the last, which may be smaller. -
int32 compressed_size;The number of bytes in this data block, as stored compressed in this system file.
-
-
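The bytecode scheme above can be sketched as a decoder (illustrative Python, not PSPP code; system-missing is represented by `float('nan')` as a stand-in, and mapping each 8-byte element to a numeric or string variable is left to the caller, who knows the dictionary):

```python
def decompress_bytecodes(data, bias=100.0):
    """Expand bytecode-compressed data into a flat list of 8-byte elements:
    floats for compressible numbers, bytes for strings and literals."""
    out = []
    pos = 0
    while pos < len(data):
        codes = data[pos:pos + 8]
        pos += 8
        block = []
        ended = False
        for code in codes:
            if code == 0:
                continue                     # padding
            elif code == 252:
                ended = True                 # end of data
                break
            elif code == 253:
                block.append(None)           # literal follows the codes
            elif code == 254:
                block.append(b' ' * 8)       # all-spaces string
            elif code == 255:
                block.append(float('nan'))   # system-missing stand-in
            else:                            # 1...251: biased number
                block.append(float(code) - bias)
        for i, v in enumerate(block):        # pull in the literals, in order
            if v is None:
                block[i] = data[pos:pos + 8]
                pos += 8
        out.extend(block)
        if ended:
            break
    return out
```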
This part of the format may not be fully understood, because only a single example of each possibility has been examined. ↩
SPSS Viewer File Format
SPSS Viewer or .spv files, here called SPV files, are written by SPSS
16 and later to represent the contents of its output editor. This
chapter documents the format, based on examination of a corpus of about
8,000 files from a variety of sources. This description is detailed
enough to both read and write SPV files.
SPSS 15 and earlier versions instead use .spo files, which have a
completely different output format based on the Microsoft Compound
Document Format. This format is not documented here.
An SPV file is a Zip archive that can be read with zipinfo and
unzip and similar programs. The final member in the Zip archive is
the "manifest", a file named META-INF/MANIFEST.MF. This structure
makes SPV files resemble Java "JAR" files (and ODF files), but whereas a
JAR manifest contains a sequence of colon-delimited key/value pairs, an
SPV manifest contains the string allowPivoting=true, without a
new-line. PSPP uses this string to identify an SPV file; it is
invariant across the corpus.
SPV files always begin with the 7-byte sequence 50 4b 03 04 14 00 08, but this is not a useful magic number because most Zip archives start the same way.
Checking only for the presence of `META-INF/MANIFEST.MF` is also not a useful magic number, because this file name also appears in every Java JAR archive.
SPSS writes `META-INF/MANIFEST.MF` to every SPV file, but it does not read it or even require it to exist, so using different contents, e.g. `allowPivoting=false`, has no effect.
The rest of the members in an SPV file's Zip archive fall into two
categories: "structure" and "detail" members. Structure member names
take the form outputViewerNUMBER.xml or
outputViewerNUMBER_heading.xml, where NUMBER is a 10-digit decimal
number. Each of these members represents some kind of output item (a
table, a heading, a block of text, etc.) or a group of them. The
member whose output goes at the beginning of the document is numbered
0, the next member in the output is numbered 1, and so on.
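The structure-member naming convention can be matched with a simple pattern (a sketch; the helper name is invented here):

```python
import re

STRUCTURE_MEMBER_RE = re.compile(r'outputViewer(\d{10})(_heading)?\.xml$')

def structure_number(member_name):
    """Return the ordering number of a structure member, or None if the
    Zip member is not a structure member."""
    m = STRUCTURE_MEMBER_RE.search(member_name)
    return int(m.group(1)) if m else None
```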
Structure members contain XML. This XML is sometimes self-contained, but it often references detail members in the Zip archive, which are named as follows:
-
`PREFIX_table.xml` and `PREFIX_tableData.bin`
`PREFIX_lightTableData.bin`
The structure of a table plus its data. Older SPV files pair a `PREFIX_table.xml` legacy detail XML member that describes the table's structure with a `PREFIX_tableData.bin` legacy detail binary member that gives its data. Newer SPV files (the majority of those in the corpus) instead include a single `PREFIX_lightTableData.bin` light detail binary member that incorporates both into a single binary format. -
`PREFIX_warning.xml` and `PREFIX_warningData.bin`
`PREFIX_lightWarningData.bin`
Same format used for tables, with a different name. -
`PREFIX_notes.xml` and `PREFIX_notesData.bin`
`PREFIX_lightNotesData.bin`
Same format used for tables, with a different name. -
`PREFIX_chartData.bin` and `PREFIX_chart.xml`
The structure of a chart plus its data. Charts do not have a "light" format. -
PREFIX_Imagegeneric.png
PREFIX_PastedObjectgeneric.png
PREFIX_imageData.bin
A PNG image referenced by an `object` element (in the first two cases) or an `image` element (in the final case). See The `object` and `image` Elements, for details. -
PREFIX_pmml.scf
PREFIX_stats.scf
PREFIX_model.xml
Not yet investigated. The corpus contains few examples.
The PREFIX in the names of the detail members is typically an
11-digit decimal number that increases for each item, tending to skip
values. Older SPV files use different naming conventions for detail
members. Structure members refer to detail members by name, and so
their exact names do not matter to readers as long as they are unique.
SPSS tolerates corrupted Zip archives that Zip reader libraries tend
to reject. These can be fixed up with zip -FF.
Structure Member Format
A structure member lays out the high-level structure for a group of output items such as heading, tables, and charts. Structure members do not include the details of tables and charts but instead refer to them by their member names.
Structure members' XML files claim conformance with a collection of XML Schemas. These schemas are distributed, under a nonfree license, with SPSS binaries. Fortunately, the schemas are not necessary to understand the structure members. The schemas can even be deceptive because they document elements and attributes that are not in the corpus and do not document elements and attributes that are commonly found in the corpus.
Structure members use a different XML namespace for each schema, but
these namespaces are not entirely consistent. In some SPV files, for
example, the viewer-tree schema is associated with namespace
http://xml.spss.com/spss/viewer-tree and in others with
http://xml.spss.com/spss/viewer/viewer-tree (note the additional
viewer/). Under either name, the schema URIs are not resolvable to
obtain the schemas themselves.
One may ignore all of the above in interpreting a structure member.
The actual XML has a simple and straightforward form that does not
require a reader to take schemas or namespaces into account. A
structure member's root is a heading element, which contains heading
or container elements (or a mix), forming a tree. In turn,
container holds a label and one more child, usually text or
table.
- Grammar
- The `heading` Element
- The `label` Element
- The `container` Element
- The `text` Element (Inside `container`)
- The `html` Element
- The `table` Element
- The `graph` Element
- The `model` Element
- The `object` and `image` Elements
- The `tree` Element
- Path Elements
- The `pageSetup` Element
- The `text` Element (Inside `pageParagraph`)
Grammar
The following sections document the elements found in structure
members in a context-free grammar-like fashion. Consider the following
example, which specifies the attributes and content for the container
element:
container
:visibility=(visible | hidden)
:page-break-before=(always)?
:text-align=(left | center)?
:width=dimension
=> label (table | container_text | graph | model | object | image | tree)
Each attribute specification begins with : followed by the
attribute's name. If the attribute's value has an easily specified
form, then = and its description follows the name. Finally, if the
attribute is optional, the specification ends with ?. The following
value specifications are defined:
-
(A | B | ...)
One of the listed literal strings. If only one string is listed, it is the only acceptable value. If `OTHER` is listed, then any string not explicitly listed is also accepted. -
bool
Either `true` or `false`. -
dimension
A floating-point number followed by a unit, e.g. `10pt`. Units in the corpus include `in` (inch), `pt` (points, 72/inch), `px` ("device-independent pixels", 96/inch), and `cm` (centimeters). If the unit is omitted then points should be assumed. The number and unit may be separated by white space. The corpus also includes localized names for units. A reader must understand these to properly interpret the dimension:
- inch: `인치`, `pol.`, `cala`, `cali`
- point: `пт`
- centimeter: `см`
-
real
A floating-point number. -
int
An integer. -
color
A color in one of the forms `#RRGGBB` or `RRGGBB`, or the string `transparent`, or one of the standard Web color names. -
ref
ref ELEMENT
ref(ELEM1 | ELEM2 | ...)
The name from the `id` attribute in some element. If one or more elements are named, the name must refer to one of those elements; otherwise, any element is acceptable.
All elements have an optional id attribute. If present, its value
must be unique. In practice many elements are assigned id attributes
that are never referenced.
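A dimension value as specified above might be converted to points along these lines (a sketch; the unit table combines the documented conversions with the localized unit names listed above):

```python
import re

# Points per unit; includes localized unit names observed in the corpus.
UNITS = {
    '': 1.0, 'pt': 1.0, 'пт': 1.0,            # points (also the default)
    'in': 72.0, '인치': 72.0, 'pol.': 72.0, 'cala': 72.0, 'cali': 72.0,
    'px': 72.0 / 96.0,                         # device-independent pixels
    'cm': 72.0 / 2.54, 'см': 72.0 / 2.54,
}

def parse_dimension(s):
    """Convert a dimension string such as '10pt' or '1097px' to points."""
    m = re.match(r"\s*([-+]?[0-9.]+)\s*(\S*)\s*$", s)
    number, unit = float(m.group(1)), m.group(2)
    return number * UNITS.get(unit, 1.0)
```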
The content specification for an element supports the following syntax:
-
ELEMENT
An element. -
A B
A followed by B. -
A | B | C
One of A or B or C. -
A?
Zero or one instances of A. -
A*
Zero or more instances of A. -
A+
One or more instances of A. -
(SUBEXPRESSION)
Grouping for a subexpression. -
EMPTY
No content. -
TEXT
Text and CDATA.
Element and attribute names are sometimes suffixed by another name in
square brackets to distinguish different uses of the same name. For
example, structure XML has two text elements, one inside container,
the other inside pageParagraph. The former is defined as
text[container_text] and referenced as container_text, the latter
defined as text[pageParagraph_text] and referenced as
pageParagraph_text.
This language is used in the PSPP source code for parsing structure
and detail XML members. Refer to src/output/spv/structure-xml.grammar
and src/output/spv/detail-xml.grammar for the full grammars.
The following example shows the contents of a typical structure member for a DESCRIPTIVES procedure. A real structure member is not indented. This example also omits most attributes, all XML namespace information, and the CSS from the embedded HTML:
<?xml version="1.0" encoding="utf-8"?>
<heading>
<label>Output</label>
<heading commandName="Descriptives">
<label>Descriptives</label>
<container>
<label>Title</label>
<text commandName="Descriptives" type="title">
<html lang="en">
<![CDATA[<head><style type="text/css">...</style></head><BR>Descriptives]]>
</html>
</text>
</container>
<container visibility="hidden">
<label>Notes</label>
<table commandName="Descriptives" subType="Notes" type="note">
<tableStructure>
<dataPath>00000000001_lightNotesData.bin</dataPath>
</tableStructure>
</table>
</container>
<container>
<label>Descriptive Statistics</label>
<table commandName="Descriptives" subType="Descriptive Statistics"
type="table">
<tableStructure>
<dataPath>00000000002_lightTableData.bin</dataPath>
</tableStructure>
</table>
</container>
</heading>
</heading>
The heading Element
heading[root_heading]
:creator-version?
:creator?
:creation-date-time?
:lockReader=bool?
:schemaLocation?
=> label pageSetup? (container | heading)*
heading
:creator-version?
:commandName?
:visibility[heading_visibility]=(collapsed)?
:locale?
:olang?
=> label (container | heading)*
A heading represents a tree of content that appears in an output
viewer window. It contains a label text string that is shown in the
outline view ordinarily followed by content containers or further nested
(sub)-sections of output. Unlike heading elements in HTML and other
common document formats, which precede the content that they head,
heading contains the elements that appear below the heading.
The root of a structure member is a special heading. The direct
children of the root heading elements in all structure members in an
SPV file are siblings. That is, the root headings in all of the
structure members conceptually represent the same node. The root
heading's label is ignored (see the label
element). The root heading in the first
structure member in the Zip file may contain a pageSetup element.
The schema implies that any heading may contain a sequence of any
number of heading and container elements. This does not work for
the root heading in practice, which must actually contain exactly one
container or heading child element. Furthermore, if the root
heading's child is a heading, then the structure member's name must
end in _heading.xml; if it is a container child, then it must not.
The following attributes have been observed on both document root and
nested heading elements.
creator-version
The version of the software that created this SPV file. A string of the form `xxyyzzww` represents software version xx.yy.zz.ww, e.g. `21000001` is version 21.0.0.1. Trailing pairs of zeros are sometimes omitted, so that `21`, `210000`, and `21000000` are all version 21.0.0.0 (and the corpus contains all three of those forms).
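The `creator-version` encoding can be decoded as in this sketch (the helper name is invented):

```python
def parse_creator_version(s):
    """Decode 'xxyyzzww' into a version tuple; trailing pairs of zeros
    may be omitted in the input, e.g. '21' and '210000' both mean 21.0.0.0."""
    s = s + '0' * (8 - len(s))                 # restore omitted zero pairs
    return tuple(int(s[i:i + 2]) for i in range(0, 8, 2))
```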
The following attributes have been observed on document root heading
elements only:
-
creator
The directory in the file system of the software that created this SPV file. -
creation-date-time
The date and time at which the SPV file was written, in a locale-specific format, e.g. `Friday, May 16, 2014 6:47:37 PM PDT` or `lunedì 17 marzo 2014 3.15.48 CET` or even `Friday, December 5, 2014 5:00:19 o'clock PM EST`. -
lockReader
Whether a reader should be allowed to edit the output. The possible values are `true` and `false`. The value `false` is by far the most common. -
schemaLocation
This is actually an XML Namespace attribute. A reader may ignore it.
The following attributes have been observed only on nested heading
elements:
-
commandName
A locale-invariant identifier for the command that produced the output, e.g. `Frequencies`, `T-Test`, `Non Par Corr`. -
visibility
If this attribute is absent, the heading's content is expanded in the outline view. If it is set to `collapsed`, it is collapsed. (This attribute is never present in a root `heading` because the root node is always expanded when a file is loaded, even though the UI can be used to collapse it interactively.) -
locale
The locale used for output, in Windows format, which is similar to the format used in Unix with the underscore replaced by a hyphen, e.g. `en-US`, `en-GB`, `el-GR`, `sr-Cryl-RS`. -
olang
The output language, e.g. `en`, `it`, `es`, `de`, `pt-BR`.
The label Element
label => TEXT
Every heading and container holds a label as its first child.
The label text is what appears in the outline pane of the GUI's viewer
window. PSPP also puts it into the outline of PDF output. The label
text doesn't appear in the output itself.
The text in label describes what it labels, often by naming the
statistical procedure that was executed, e.g. "Frequencies" or "T-Test".
Labels are often very generic, especially within a container, e.g.
"Title" or "Warnings" or "Notes". Label text is localized according to
the output language, e.g. in Italian a frequency table procedure is
labeled "Frequenze".
The user can edit labels to be anything they want. The corpus contains a few examples of empty labels, ones that contain no text, probably as a result of user editing.
The root heading in an SPV file has a label, like every
heading. It normally contains "Output" but its content is disregarded
anyway. The user cannot edit it.
The container Element
container
:visibility=(visible | hidden)
:page-break-before=(always)?
:text-align=(left | center)?
:width=dimension
=> label (table | container_text | graph | model | object | image | tree)
A container serves to contain and label a table, text, or other
kind of item.
This element has the following attributes.
-
visibility
Whether the container's content is displayed. "Notes" tables are often hidden; other data is usually visible. -
text-align
Alignment of text within the container. Observed with nestedtableandtextelements. -
width
The width of the container, e.g.1097px.
All of the elements that nest inside container (except the label)
have the following optional attribute.
commandName
As on the `heading` element. The corpus contains one example where `commandName` is present but set to the empty string.
The text Element (Inside container)
text[container_text]
:type[text_type]=(title | log | text | page-title)
:commandName?
:creator-version?
=> html
This text element is nested inside a container. There is a
different text element that is nested inside a pageParagraph.
This element has the following attributes.
-
commandName
See the `container` element. For output not specific to a command, this is simply `log`. -
type
The semantics of the text. -
creator-version
As on theheadingelement.
The html Element
html :lang=(en) => TEXT
The element contains an HTML document as text (or, in practice, as
CDATA). In some cases, the document starts with <html> and ends with
</html>; in others the html element is implied. Generally the HTML
includes a head element with a CSS stylesheet. The HTML body often
begins with <BR>.
The HTML document uses only the following elements:
-
html
Sometimes, the document is enclosed with `<html>`...`</html>`. -
br
The HTML body often begins with<BR>and may contain it as well. -
b
i
u
Styling. -
font
The attributesface,color, andsizeare observed. The value ofcolortakes one of the forms#RRGGBBorrgb (R, G, B). The value ofsizeis a number between 1 and 7, inclusive.
The CSS in the corpus is simple. To understand it, a parser only
needs to be able to skip white space, `<!--`, and `-->`, and parse
`style` only for `p` elements. Only the following properties matter:
- `color`
  In the form `RRGGBB`, e.g. `000000`, with no leading `#`.
- `font-weight`
  Either `bold` or `normal`.
- `font-style`
  Either `italic` or `normal`.
- `text-decoration`
  Either `underline` or `normal`.
- `font-family`
  A font name, commonly `Monospaced` or `SansSerif`.
- `font-size`
  Values claim to be in points, e.g. `14pt`, but the values are actually in "device-independent pixels" (px), at 96/inch.
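The minimal CSS handling described above can be sketched as follows. This is an illustrative parser written for this document, not PSPP's actual implementation; the function name is my own.

```python
import re

def parse_pspp_css(css):
    """Parse the minimal CSS subset described above: skip white space
    and the <!-- / --> markers, and collect declarations from 'p'
    rules only, returning them as a property->value dict."""
    # The comment markers carry no meaning here; treat them as blanks.
    css = css.replace("<!--", " ").replace("-->", " ")
    styles = {}
    # Find each "p { ... }" rule and split its declarations.
    for body in re.findall(r'\bp\s*\{([^}]*)\}', css):
        for decl in body.split(";"):
            if ":" in decl:
                prop, value = decl.split(":", 1)
                styles[prop.strip()] = value.strip()
    return styles
```

Feeding it a stylesheet like `<!-- p {color:000000; font-weight:bold} -->` yields the properties for `p` elements as a dictionary.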
This element has the following attributes.
- `lang`
  This always contains `en` in the corpus.
The table Element
table
:VDPId?
:ViZmlSource?
:activePageId=int?
:commandName
:creator-version?
:displayFiltering=bool?
:maxNumCells=int?
:orphanTolerance=int?
:rowBreakNumber=int?
:subType
:tableId
:tableLookId?
:type[table_type]=(table | note | warning)
=> tableProperties? tableStructure
tableStructure => path? dataPath csvPath?
This element has the following attributes.
- `commandName`
  See the `container` element.
- `type`
  One of `table`, `note`, or `warning`.
- `subType`
  The locale-invariant command ID for the particular kind of output that this table represents in the procedure. This can be the same as `commandName`, e.g. `Frequencies`, or different, e.g. `Case Processing Summary`. Generic subtypes `Notes` and `Warnings` are often used.
- `tableId`
  A number that uniquely identifies the table within the SPV file, typically a large negative number such as `-4147135649387905023`.
- `creator-version`
  As on the `heading` element. In the corpus, this is only present for version 21 and up and always includes all 8 digits.
This element contains the following:
- `tableProperties`: See Legacy Properties for details.
- `tableStructure`, which in turn contains:
  - Both `path` and `dataPath`, for legacy members.
  - `dataPath` but not `path`, for light detail binary members.
  - `csvPath`, whose usage is rare and not yet understood.

See SPSS Viewer File Format for more information on how structure members refer to tables.
The graph Element
graph
:VDPId?
:ViZmlSource?
:commandName?
:creator-version?
:dataMapId?
:dataMapURI?
:editor?
:refMapId?
:refMapURI?
:csvFileIds?
:csvFileNames?
=> dataPath? path csvPath?
This element represents a graph. The dataPath and path elements
name the Zip members that give the details of the graph. Normally, both
elements are present; there is only one counterexample in the corpus.
csvPath only appears in one SPV file in the corpus, for two graphs.
In these two cases, dataPath, path, and csvPath all appear. These
csvPath elements name Zip members with names of the form
NUMBER_csv.bin, where NUMBER is a many-digit number that is the same
as the csvFileIds. The named Zip members are CSV text files (despite
the .bin extension). The CSV files are encoded in UTF-8 and begin with
a U+FEFF byte-order mark.
The model Element
model
:PMMLContainerId?
:PMMLId
:StatXMLContainerId
:VDPId
:auxiliaryViewName
:commandName
:creator-version
:mainViewName
=> ViZml? dataPath? path | pmmlContainerPath statsContainerPath
pmmlContainerPath => TEXT
statsContainerPath => TEXT
ViZml :viewName? => TEXT
This element represents a model. The dataPath and path elements
name the Zip members that give the details of the model. Normally, both
elements are present; there is only one counterexample in the corpus.
The details are unexplored. The ViZml element contains
base-64-encoded text that decodes to a binary format with some
embedded text strings, and path names a Zip member that contains XML.
Alternatively, pmmlContainerPath and statsContainerPath name Zip
members with a .scf extension.
The object and image Elements
object
:commandName?
:type[object_type]=(unknown)?
:uri
=> EMPTY
image
:commandName?
:VDPId
=> dataPath
These two elements represent an image in PNG format. They are
equivalent and the corpus contains examples of both. The only
difference is the syntax: for object, the uri attribute names the
Zip member that contains a PNG file; for image, the text of the inner
dataPath element names the Zip member.
PSPP writes object in output but there is no strong reason to
choose this form.
The corpus only contains PNG image files.
The tree Element
tree
:commandName
:creator-version
:name
:type
=> dataPath path
This element represents a tree. The dataPath and path elements
name the Zip members that give the details of the tree. The details are
unexplored.
Path Elements
dataPath => TEXT
path => TEXT
csvPath => TEXT
These elements contain the names of the Zip members that hold details for a container. For tables:
- When the "light" format is used, only `dataPath` is present. It names a `.bin` member of the Zip file that has `light` in its name, e.g. `0000000001437_lightTableData.bin`. See Light Detail Member Format for light format details.
- When the legacy format is used, both are present. In this case, `dataPath` names a Zip member with a legacy binary format that contains relevant data (see Legacy Detail Member Binary Format), and `path` names a Zip member that uses an XML format (see Legacy Detail Member XML Member Format).
Graphs normally follow the legacy approach described above. The
corpus contains one example of a graph with path but not dataPath.
The reason is unexplored.
Models use path but not dataPath. See the model
element, for more information.
These elements have no attributes.
The pageSetup Element
pageSetup
:initial-page-number=int?
:chart-size=(as-is | full-height | half-height | quarter-height | OTHER)?
:margin-left=dimension?
:margin-right=dimension?
:margin-top=dimension?
:margin-bottom=dimension?
:paper-height=dimension?
:paper-width=dimension?
:reference-orientation?
:space-after=dimension?
=> pageHeader pageFooter
pageHeader => pageParagraph?
pageFooter => pageParagraph?
pageParagraph => pageParagraph_text
The pageSetup element has the following attributes.
- `initial-page-number`
  The page number to put on the first page of printed output. Usually `1`.
- `chart-size`
  One of the listed, self-explanatory chart sizes (e.g. `quarter-height`), or a localization (!) of one of these (e.g. `dimensione attuale`, `Wie vorgegeben`).
- `margin-left`, `margin-right`, `margin-top`, `margin-bottom`
  Margin sizes, e.g. `0.25in`.
- `paper-height`, `paper-width`
  Paper sizes.
- `reference-orientation`
  Indicates the orientation of the output page. Either `0deg` (portrait) or `90deg` (landscape).
- `space-after`
  The amount of space between printed objects, typically `12pt`.
The text Element (Inside pageParagraph)
text[pageParagraph_text] :type=(title | text) => TEXT
This text element is nested inside a pageParagraph. There is a
different text element that is nested inside a container.
The element is either empty, or contains CDATA that holds almost-XHTML
text: in the corpus, either an html or p element. It is
almost-XHTML because the html element designates the default
namespace as http://xml.spss.com/spss/viewer/viewer-tree instead of
an XHTML namespace, and because the CDATA can contain substitution
variables. The following variables are supported:
- `&[Date]`, `&[Time]`
  The current date or time in the preferred format for the locale.
- `&[Head1]`, `&[Head2]`, `&[Head3]`, `&[Head4]`
  First-, second-, third-, or fourth-level heading.
- `&[PageTitle]`
  The page title.
- `&[Filename]`
  Name of the output file.
- `&[Page]`
  The page number.
Typical contents (indented for clarity):
<html xmlns="http://xml.spss.com/spss/viewer/viewer-tree">
<head></head>
<body>
<p style="text-align:right; margin-top: 0">Page &[Page]</p>
</body>
</html>
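A reader that renders page headers and footers needs to expand these substitution variables. A minimal sketch, assuming the caller supplies the current values in a dictionary (the function name is illustrative, not part of the format):

```python
import re

def expand_page_text(template, values):
    """Replace &[Name] substitution variables with entries from a
    dict. Variables with no supplied value are left untouched."""
    def sub(match):
        name = match.group(1)
        return values.get(name, match.group(0))
    return re.sub(r'&\[(\w+)\]', sub, template)
```

For example, expanding `Page &[Page]` with `{"Page": "7"}` produces `Page 7`, while an unknown variable such as `&[Head1]` passes through unchanged.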
This element has the following attributes.
- `type`
  Always `text`.
Light Detail Member Format
This section describes the format of "light" detail .bin members.
- Binary Format Conventions
- Top-Level Structure
- Header
- Titles
- Footnotes
- Areas
- Borders
- Print Settings
- Table Settings
- Formats
- Dimensions
- Axes
- Cells
- Value
- ValueMod
Binary Format Conventions
These members have a binary format which we describe here in terms of a context-free grammar using the following conventions:
- `NonTerminal ⇒ ...`
  Nonterminals have CamelCaps names, and ⇒ indicates a production. The right-hand side of a production is often broken across multiple lines. Break points are chosen for aesthetics only and have no semantic significance.
- `00`, `01`, ..., `ff`
  A byte with a fixed value, written as a pair of hexadecimal digits.
- `i0`, `i1`, ..., `i9`, `i10`, `i11`, ...; `ib0`, `ib1`, ..., `ib9`, `ib10`, `ib11`, ...
  A 32-bit integer with a fixed value, written in decimal. Prefixed by `i` for little-endian byte order or `ib` for big-endian.
- `byte`
  A byte.
- `bool`
  A byte with value 0 or 1.
- `int16`, `be16`
  A 16-bit unsigned integer in little-endian or big-endian byte order, respectively.
- `int32`, `be32`
  A 32-bit unsigned integer in little-endian or big-endian byte order, respectively.
- `int64`, `be64`
  A 64-bit unsigned integer in little-endian or big-endian byte order, respectively.
- `double`
  A 64-bit IEEE floating-point number.
- `float`
  A 32-bit IEEE floating-point number.
- `string`, `bestring`
  A 32-bit unsigned integer, in little-endian or big-endian byte order, respectively, followed by the specified number of bytes of character data. (The encoding is indicated by the `Formats` nonterminal.)
- `X?`
  X is optional, e.g. `00?` is an optional zero byte.
- `X*N`
  X is repeated N times, e.g. `byte*10` for ten arbitrary bytes.
- `X[NAME]`
  Gives X the specified NAME. Names are used in textual explanations. They are also used, also bracketed, to indicate counts, e.g. `int32[n] byte*[n]` for a 32-bit integer followed by the specified number of arbitrary bytes.
- `A | B`
  Either A or B.
- `(X)`
  Parentheses are used for grouping to make precedence clear, especially in the presence of `|`, e.g. in `00 (01 | 02 | 03) 00`.
- `count(X)`, `becount(X)`
  A 32-bit unsigned integer, in little-endian or big-endian byte order, respectively, that indicates the number of bytes in X, followed by X itself.
- `v1(X)`
  In a version 1 `.bin` member, X; in version 3, nothing. (The `.bin` header indicates the version.)
- `v3(X)`
  In a version 3 `.bin` member, X; in version 1, nothing.
PSPP uses this grammar to parse light detail members. See
src/output/spv/light-binary.grammar in the PSPP source tree for the
full grammar.
Little-endian byte order is far more common in this format, but a few pieces of the format use big-endian byte order.
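As an illustration of the conventions, a `string` or `bestring` can be read with a helper like the following. This is a sketch written for this document, not code from PSPP:

```python
import struct

def read_string(data, offset, big_endian=False):
    """Read a 'string' (little-endian) or 'bestring' (big-endian) per
    the grammar conventions: a 32-bit byte count followed by that many
    bytes of character data. Returns (raw_bytes, new_offset)."""
    fmt = '>I' if big_endian else '<I'
    (n,) = struct.unpack_from(fmt, data, offset)
    start = offset + 4
    return data[start:start + n], start + n
```

Note that the bytes are returned raw; decoding them to text requires the character encoding indicated by the Formats nonterminal, discussed later under Encoding.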
Light detail members express linear units in two ways: points (pt), at 72/inch, and "device-independent pixels" (px), at 96/inch. To convert from pt to px, multiply by 1.33 and round up. To convert from px to pt, divide by 1.33 and round down.
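The two unit conversions stated above can be written directly (helper names are my own):

```python
import math

def pt_to_px(pt):
    # Points (72/inch) to device-independent pixels (96/inch):
    # multiply by 1.33 and round up, per the text above.
    return math.ceil(pt * 1.33)

def px_to_pt(px):
    # Pixels back to points: divide by 1.33 and round down.
    return math.floor(px / 1.33)
```

For example, a 12 pt measurement becomes 16 px, and converting 16 px back yields 12 pt, so the two directions round-trip for typical values.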
Top-Level Structure
A "light" detail member .bin consists of a number of sections
concatenated together, terminated by an optional byte 01:
Table =>
Header Titles Footnotes
Areas Borders PrintSettings TableSettings Formats
Dimensions Axes Cells
01?
Header
An SPV light member begins with a 39-byte header:
Header =>
01 00
(i1 | i3)[version]
bool[x0]
bool[x1]
bool[rotate-inner-column-labels]
bool[rotate-outer-row-labels]
bool[x2]
int32[x3]
int32[min-col-heading-width] int32[max-col-heading-width]
int32[min-row-heading-width] int32[max-row-heading-width]
int64[table-id]
version is a version number that affects the interpretation of some
of the other data in the member. We will refer to "version 1" and
"version 3" later on and use v1(...) and v3(...) for
version-specific formatting (as described previously).
If rotate-inner-column-labels is 1, then column labels closest to
the data are rotated 90° counterclockwise; otherwise, they are shown in
the normal way.
If rotate-outer-row-labels is 1, then row labels farthest from the
data are rotated 90° counterclockwise; otherwise, they are shown in the
normal way.
min-col-heading-width, max-col-heading-width,
min-row-heading-width, and max-row-heading-width are measurements in
1/96 inch units (called "device independent pixel" units in Windows)
whose values influence column widths. For the purpose of interpreting
these values, a table is divided into the three regions shown below:
┌──────────────────┬─────────────────────────────────────────────────┐
│ │ column headings │
│ ├─────────────────────────────────────────────────┤
│ corner │ │
│ and │ │
│ row headings │ data │
│ │ │
│ │ │
└──────────────────┴─────────────────────────────────────────────────┘
min-col-heading-width and max-col-heading-width apply to the
columns in the column headings region. min-col-heading-width is the
minimum width that any of these columns will be given automatically. In
addition, max-col-heading-width is the maximum width that a column
will be assigned to accommodate a long label in the column headings
cells. These columns will still be made wider to accommodate wide data
values in the data region.
min-row-heading-width is the minimum width that a column in the
corner and row headings region will be given automatically.
max-row-heading-width is the maximum width that a column in this
region will be assigned to accommodate a long label. This region doesn't
include data, so data values don't affect column widths.
table-id is a binary version of the tableId attribute in the
structure member that refers to the detail member. For example, if
tableId is -4122591256483201023, then table-id would be
0xc6c99d183b300001.
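The signed-to-unsigned reinterpretation can be sketched as a one-liner (the function name is illustrative):

```python
def table_id_to_binary(table_id):
    """Reinterpret the signed decimal tableId attribute from the
    structure member as the unsigned 64-bit table-id stored in the
    light member header (two's-complement reinterpretation)."""
    return table_id & 0xFFFFFFFFFFFFFFFF
```

Applied to the example above, `table_id_to_binary(-4122591256483201023)` yields `0xc6c99d183b300001`.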
The meaning of the other variable parts of the header is not known.
A writer may safely use version 3, true for x0, false for x1, true
for x2, and 0x15 for x3.
Titles
Titles =>
Value[title] 01?
Value[subtype] 01? 31
Value[user-title] 01?
(31 Value[corner-text] | 58)
(31 Value[caption] | 58)
The Titles follow the Header and specify the table's title, caption,
and corner text.
The user-title reflects any user editing of the title text or
style. The title is the title originally generated by the procedure.
Both of these are appropriate for presentation and localized to the
user's language. For example, for a frequency table, title and
user-title normally name the variable, and subtype is simply
"Frequencies".
subtype is the same as the subType attribute in the table
structure XML element that referred
to this member.
The corner-text, if present, is shown in the upper-left corner of
the table, above the row headings and to the left of the column
headings. It is usually absent. When row dimension labels are
displayed in the corner (see show-row-labels-in-corner), corner text
is hidden.
The caption, if present, is shown below the table. caption
reflects user editing of the caption.
Footnotes
Footnotes => int32[n-footnotes] Footnote*[n-footnotes]
Footnote => Value[text] (58 | 31 Value[marker]) int32[show]
Each footnote has text and an optional custom marker (such as
*).
The syntax for Value would allow footnotes (and their markers) to
reference other footnotes, but in practice this doesn't work.
show is a 32-bit signed integer. It is positive to show the
footnote or negative to hide it. Its magnitude is often 1, and in other
cases tends to be the number of references to the footnote. It is safe
to write 1 to show a footnote and -1 to hide it.
Areas
Areas => 00? Area*8
Area =>
byte[index] 31
string[typeface] float[size] int32[style] bool[underline]
int32[halign] int32[valign]
string[fg-color] string[bg-color]
bool[alternate] string[alt-fg-color] string[alt-bg-color]
v3(int32[left-margin] int32[right-margin] int32[top-margin] int32[bottom-margin])
Each Area represents the style for a different area of the table.
index is the 1-based index of the Area, i.e. 1 for the first
Area, through 8 for the final Area. The following table shows the
index values and the areas that they represent:
| index | Area |
|---|---|
| 1 | Title |
| 2 | Caption |
| 3 | Footer |
| 4 | Corner |
| 5 | Column labels |
| 6 | Row labels |
| 7 | Data |
| 8 | Layers |
typeface is the string name of the font used in the area. In the
corpus, this is SansSerif in over 99% of instances and Times New Roman in the rest.
size is the size of the font, in px. The most common size in
the corpus is 12 px. Even though size has a floating-point type, in
the corpus its values are always integers.
style is a bit mask. Bit 0 (with value 1) is set for bold, bit 1
(with value 2) is set for italic.
underline is 1 if the font is underlined, 0 otherwise.
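Decoding the style bit mask is straightforward (the helper name is my own):

```python
def decode_style(style):
    """Decode the Area style bit mask described above:
    bit 0 (value 1) means bold, bit 1 (value 2) means italic."""
    bold = bool(style & 1)
    italic = bool(style & 2)
    return bold, italic
```

So a style of 3 means bold italic, 2 means italic only, and 0 means neither.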
halign specifies horizontal alignment:
| halign | Alignment |
|---|---|
| 0 | Center |
| 2 | Left |
| 4 | Right |
| 64173 | Mixed |
Mixed alignment varies according to type: string data is left-justified, numbers and most other formats are right-justified.
valign specifies vertical alignment:
| valign | Alignment |
|---|---|
| 0 | Center |
| 1 | Top |
| 3 | Bottom |
fg-color and bg-color are the foreground color and background
color, respectively. In the corpus, these are always #000000 and
#ffffff, respectively.
alternate is 1 if rows should alternate colors, 0 if all rows
should be the same color. When alternate is 1, alt-fg-color and
alt-bg-color specify the colors for the alternate rows; otherwise they
are empty strings.
left-margin, right-margin, top-margin, and bottom-margin are
measured in px.
Borders
Borders =>
count(
ib1[endian]
be32[n-borders] Border*[n-borders]
bool[show-grid-lines]
00 00 00)
Border =>
be32[index]
be32[stroke-type]
be32[color]
Borders reflects how borders between regions are drawn.
The fixed value of endian can be used to validate the endianness.
show-grid-lines is 1 to draw grid lines, otherwise 0.
Each Border describes one kind of border. n-borders seems to
always be 19. Each index appears once (although in an
unpredictable order) and corresponds to one of the following borders:
| index | Borders |
|---|---|
| 0 | Title. |
| 1...4 | Left, top, right, and bottom outer frame. |
| 5...8 | Left, top, right, and bottom inner frame. |
| 9, 10 | Left and top of data area. |
| 11, 12 | Horizontal and vertical dimension rows. |
| 13, 14 | Horizontal and vertical dimension columns. |
| 15, 16 | Horizontal and vertical category rows. |
| 17, 18 | Horizontal and vertical category columns. |
stroke-type describes how a border is drawn, as one of:
| stroke-type | Border style |
|---|---|
| 0 | No line. |
| 1 | Solid line. |
| 2 | Dashed line. |
| 3 | Thick line. |
| 4 | Thin line. |
| 5 | Double line. |
color is an RGB color. Bits 24-31 are alpha, bits 16-23 are red,
8-15 are green, 0-7 are blue. An alpha of 255 indicates an opaque
color, therefore opaque black is 0xff000000.
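Unpacking the four channels from the 32-bit color is simple bit shifting (the helper name is my own):

```python
def decode_border_color(color):
    """Split a 32-bit border color into (alpha, red, green, blue),
    with alpha in bits 24-31, red in 16-23, green in 8-15, blue in 0-7."""
    return ((color >> 24) & 0xff,   # alpha: 255 = opaque
            (color >> 16) & 0xff,   # red
            (color >> 8) & 0xff,    # green
            color & 0xff)           # blue
```

For instance, opaque black `0xff000000` decodes to alpha 255 with all color channels zero.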
Print Settings
PrintSettings =>
count(
ib1[endian]
bool[all-layers]
bool[paginate-layers]
bool[fit-width]
bool[fit-length]
bool[top-continuation]
bool[bottom-continuation]
be32[n-orphan-lines]
bestring[continuation-string])
PrintSettings reflects settings for printing. The fixed value of
endian can be used to validate the endianness.
all-layers is 1 to print all layers, 0 to print only the layer
designated by current-layer in TableSettings.
paginate-layers is 1 to print each layer at the start of a new
page, 0 otherwise. (This setting is honored only if all-layers is 1,
since otherwise only one layer is printed.)
fit-width and fit-length control whether the table is shrunk to
fit within a page's width or length, respectively.
n-orphan-lines is the minimum number of rows or columns to put in
one part of a table that is broken across pages.
If top-continuation is 1, then continuation-string is printed at
the top of a page when a table is broken across pages for printing;
similarly for bottom-continuation and the bottom of a page. Usually,
continuation-string is empty.
Table Settings
TableSettings =>
count(
v3(
ib1[endian]
be32[x5]
be32[current-layer]
bool[omit-empty]
bool[show-row-labels-in-corner]
bool[show-alphabetic-markers]
bool[footnote-marker-superscripts]
byte[x6]
becount(
Breakpoints[row-breaks] Breakpoints[column-breaks]
Keeps[row-keeps] Keeps[column-keeps]
PointKeeps[row-point-keeps] PointKeeps[column-point-keeps]
)
bestring[notes]
bestring[table-look]
)...)
Breakpoints => be32[n-breaks] be32*[n-breaks]
Keeps => be32[n-keeps] Keep*[n-keeps]
Keep => be32[offset] be32[n]
PointKeeps => be32[n-point-keeps] PointKeep*[n-point-keeps]
PointKeep => be32[offset] be32 be32
TableSettings reflects display settings. The fixed value of
endian can be used to validate the endianness.
current-layer is the displayed layer. Suppose there are \(d\)
layers, numbered 1 through \(d\) in the order given in the
Dimensions, and that the displayed value of dimension
\(i\) is \(x_i\), where \(0 \le x_i < n_i\) and \(n_i\) is the number
of categories in dimension \(i\). Then current-layer is the
\(k\) calculated by the following algorithm:
let \(k = 0\).
for each \(i\) from \(d\) downto 1:
\(\quad k = (n_i \times k) + x_i\).
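The algorithm above is an ordinary mixed-radix index computation. A direct translation (names are my own; `x` and `n` hold \(x_i\) and \(n_i\) in Dimensions order):

```python
def current_layer(x, n):
    """Compute current-layer from per-dimension displayed values x[i]
    and category counts n[i], iterating from dimension d down to 1
    as in the algorithm above."""
    k = 0
    for xi, ni in zip(reversed(x), reversed(n)):
        k = ni * k + xi
    return k
```

For example, with two layer dimensions of 2 and 3 categories showing values 1 and 2 respectively, the computed current-layer is 5.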
If omit-empty is 1, empty rows or columns (ones with nothing in any
cell) are hidden; otherwise, they are shown.
If show-row-labels-in-corner is 1, then row labels are shown in the
upper left corner; otherwise, they are shown nested.
If show-alphabetic-markers is 1, markers are shown as letters (e.g.
a, b, c, ...); otherwise, they are shown as numbers starting from
1.
When footnote-marker-superscripts is 1, footnote markers are shown
as superscripts, otherwise as subscripts.
The Breakpoints are rows or columns after which there is a page
break; for example, a row break of 1 requests a page break after the
second row. Usually no breakpoints are specified, indicating that page
breaks should be selected automatically.
The Keeps are ranges of rows or columns to be kept together without
a page break; for example, a row Keep with offset 1 and n 10
requests that the 10 rows starting with the second row be kept
together. Usually no Keeps are specified.
The PointKeeps seem to be generated automatically based on
user-specified Keeps. They seem to indicate a conversion from rows or
columns to pixel or point offsets.
notes is a text string that contains user-specified notes. It is
displayed when the user hovers the cursor over the table, like text in
the title attribute in HTML. It is not printed. It is usually empty.
table-look is the name of an SPSS "TableLook" table style, such as
"Default" or "Academic"; it is often empty.
TableSettings ends with an arbitrary number of null bytes. A writer
may safely write 82 null bytes.
A writer may safely use 4 for x5 and 0 for x6.
Formats
Formats =>
int32[n-widths] int32*[n-widths]
string[locale]
int32[current-layer]
bool[x7] bool[x8] bool[x9]
Y0
CustomCurrency
count(
v1(X0?)
v3(count(X1 count(X2)) count(X3)))
Y0 => int32[epoch] byte[decimal] byte[grouping]
CustomCurrency => int32[n-ccs] string*[n-ccs]
If n-widths is nonzero, then the accompanying integers are column
widths as manually adjusted by the user.
locale is a locale including an encoding, such as
en_US.windows-1252 or it_IT.windows-1252. (locale is often
duplicated in Y1, described below).
epoch is the year that starts the epoch. A 2-digit year is
interpreted as belonging to the 100 years beginning at the epoch. The
default epoch year is 69 years prior to the current year; thus, in 2017
this field by default contains 1948. In the corpus, epoch ranges from
1943 to 1948, though some files instead contain -1.
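The 2-digit-year rule can be sketched as follows (the function name is my own; the default epoch matches the 2017 example above):

```python
def expand_two_digit_year(yy, epoch=1948):
    """Map a 2-digit year into the 100-year window starting at epoch,
    e.g. with epoch 1948, years 48..99 map to 1948..1999 and
    years 00..47 map to 2000..2047."""
    century = epoch - epoch % 100
    year = century + yy
    if year < epoch:
        year += 100
    return year
```

So with the default epoch, 48 expands to 1948 and 30 expands to 2030.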
decimal is the decimal point character. The observed values are
. and ,.
grouping is the grouping character. Usually, it is `,` if
decimal is `.`, and vice versa. Other observed values are `'`
(apostrophe), a space, and zero (presumably indicating that digits
should not be grouped).
n-ccs is observed as either 0 or 5. When it is 5, the following
strings are CCA through CCE format strings.
Most commonly these are all `-,,,`, but other strings occur.
A writer may safely use false for x7, x8, and x9.
X0
X0 only appears, optionally, in version 1 members.
X0 => byte*14 Y1 Y2
Y1 =>
string[command] string[command-local]
string[language] string[charset] string[locale]
bool[x10] bool[include-leading-zero] bool[x12] bool[x13]
Y0
Y2 => CustomCurrency byte[missing] bool[x17]
command describes the statistical procedure that generated the
output, in English. It is not necessarily the literal syntax name of
the procedure: for example, NPAR TESTS becomes "Nonparametric Tests."
command-local is the procedure's name, translated into the output
language; it is often empty and, when it is not, sometimes the same as
command.
include-leading-zero is the
LEADZERO setting for the table, where
false is OFF (the default) and true is ON.
missing is the character used to indicate that a cell contains a
missing value. It is always observed as ..
A writer may safely use false for x10 and x17 and true for x12
and x13.
X1
X1 only appears in version 3 members.
X1 =>
bool[x14]
byte[show-title]
bool[x16]
byte[lang]
byte[show-variables]
byte[show-values]
int32[x18] int32[x19]
00*17
bool[x20]
bool[show-caption]
lang may indicate the language in use. Some values and their
apparent meanings are:
| Value | Language |
|---|---|
| 0 | en |
| 1 | de |
| 2 | es |
| 3 | it |
| 5 | ko |
| 6 | pl |
| 8 | zh-tw |
| 10 | pt_BR |
| 11 | fr |
show-variables determines how variables are displayed by default:
| Value | Meaning |
|---|---|
| 0 | Use global default (the most common value) |
| 1 | Variable name only |
| 2 | Variable label only (when available) |
| 3 | Both (name followed by label, separated by a space) |
show-values is a similar setting for values:
| Value | Meaning |
|---|---|
| 0 | Use global default (the most common value) |
| 1 | Value only |
| 2 | Value label only (when available) |
| 3 | Both |
show-title is 1 to show the title, 10 to hide it.
show-caption is true to show the caption, false to hide it.
A writer may safely use false for x14, false for x16, 0 for
lang, -1 for x18 and x19, and false for x20.
X2
X2 only appears in version 3 members.
X2 =>
int32[n-row-heights] int32*[n-row-heights]
int32[n-style-map] StyleMap*[n-style-map]
int32[n-styles] StylePair*[n-styles]
count((i0 i0)?)
StyleMap => int64[cell-index] int16[style-index]
If present, n-row-heights and the accompanying integers are row
heights as manually adjusted by the user.
The rest of X2 specifies styles for data cells. At first glance
this is odd, because each data cell can have its own style embedded as
part of the data, but in practice X2 specifies a style for a cell
only if that cell is empty (and thus does not appear in the data at
all). Each StyleMap specifies the index of a blank cell, calculated
the same way as in the Cells, along with a 0-based index
into the accompanying StylePair array.
A writer may safely omit the optional i0 i0 inside the
count(...).
X3
X3 only appears in version 3 members.
X3 =>
01 00 byte[x21] 00 00 00
Y1
double[small] 01
(string[dataset] string[datafile] i0 int32[date] i0)?
Y2
(int32[x22] i0 01?)?
small is a small real number. In the corpus, it overwhelmingly
takes the value 0.0001, with zero occasionally seen. Nonzero numbers
with format 40 (see Value) whose magnitudes are smaller than small are
displayed in scientific notation. (Thus, a small of zero prevents
scientific notation from being chosen.)
dataset is the name of the dataset analyzed to produce the output,
e.g. DataSet1, and datafile the name of the file it was read from,
e.g. C:\Users\foo\bar.sav. The latter is sometimes the empty string.
date is a date, as seconds since the epoch, i.e. since January 1,
1970. Pivot tables within an SPV file often have dates a few minutes
apart, so this is probably a creation date for the table rather than for
the file.
Sometimes dataset, datafile, and date are present and other
times they are absent. The reader can distinguish by assuming that they
are present and then checking whether the presumptive dataset contains
a null byte (a valid string never will).
x22 is usually 0 or 2000000.
A writer may safely use 4 for x21 and omit x22 and the other
optional bytes at the end.
Encoding
Formats contains several indications of character encoding:
- `locale` in `Formats` itself.
- `locale` in `Y1` (in version 1, `Y1` is optionally nested inside `X0`; in version 3, `Y1` is nested inside `X3`).
- `charset`, in version 3, in `Y1`.
- `lang` in `X1`, in version 3.
charset, if present, is a good indication of character encoding, and
in its absence the encoding suffix on locale in Formats will work.
A reader may disregard locale in Y1, because it is normally the
same as locale in Formats, and it is only present if charset is
also.
lang is not helpful and should be ignored for character encoding
purposes.
However, the corpus contains many examples of light members whose strings are encoded in UTF-8 despite declaring some other character set. Furthermore, the corpus contains several examples of light members in which some strings are encoded in UTF-8 (and contain multibyte characters) and other strings are encoded in another character set (and contain non-ASCII characters). PSPP treats any valid UTF-8 string as UTF-8 and only falls back to the declared encoding for strings that are not valid UTF-8.
The pspp-output program's strings command can help analyze the
encoding in an SPV light member. Use pspp-output --help-dev to see
its usage.
Dimensions
A pivot table presents multidimensional data. A Dimension identifies the categories associated with each dimension.
Dimensions => int32[n-dims] Dimension*[n-dims]
Dimension =>
Value[name] DimProperties
int32[n-categories] Category*[n-categories]
DimProperties =>
byte[x1]
byte[x2]
int32[x3]
bool[hide-dim-label]
bool[hide-all-labels]
01 int32[dim-index]
name is the name of the dimension, e.g. Variables, Statistics,
or a variable name.
The meanings of x1 and x3 are unknown. x1 is usually 0 but
many other values have been observed. A writer may safely use 0 for
x1 and 2 for x3.
x2 is 0, 1, or 2. For a pivot table with L layer dimensions, R row
dimensions, and C column dimensions, x2 is 2 for the first L
dimensions, 0 for the next R dimensions, and 1 for the remaining C
dimensions. This does not mean that the layer dimensions must be
presented first, followed by the row dimensions, followed by the
column dimensions. On the contrary, they are frequently in a different
order, but x2 must follow this pattern to prevent the pivot table
from being misinterpreted.
If hide-dim-label is 00, the pivot table displays a label for the
dimension itself. Because usually the group and category labels are
enough explanation, it is usually 01.
If hide-all-labels is 01, the pivot table omits all labels for the
dimension, including group and category labels. It is usually 00. When
hide-all-labels is 01, hide-dim-label is ignored.
dim-index is usually the 0-based index of the dimension, e.g. 0 for
the first dimension, 1 for the second, and so on. Sometimes it is -1.
There is no visible difference. A writer may safely use the 0-based
index.
Categories
Categories are arranged in a tree. Only the leaf nodes in the tree are really categories; the others just serve as grouping constructs.
Category => Value[name] (Leaf | Group)
Leaf => 00 00 bool[x24] i2 int32[leaf-index] i0
Group =>
bool[merge] 00 01 int32[x23]
i-1 int32[n-subcategories] Category*[n-subcategories]
name is the name of the category (or group).
A Leaf represents a leaf category. The Leaf's leaf-index is a
nonnegative integer unique within the Dimension and less than
n-categories in the Dimension. If the user does not sort or
rearrange the categories, then leaf-index starts at 0 for the first
Leaf in the dimension and increments by 1 with each successive
Leaf. If the user does sort or rearrange the categories, then the
order of categories in the file reflects that change and leaf-index
reflects the original order.
A dimension can have no leaf categories at all. A table that contains such a dimension necessarily has no data at all.
A Group is a group of nested categories. Usually a Group contains
at least one Category, so that n-subcategories is positive, but
Groups with zero subcategories have been observed.
If a Group's merge is 00, the most common value, then the group is
really a distinct group that should be represented as such in the visual
representation and user interface. If merge is 01, the categories in
this group should be shown and treated as if they were direct children
of the group's containing group (or if it has no parent group, then
direct children of the dimension), and this group's name is irrelevant
and should not be displayed. (Merged groups can be nested!)
Writers need not use merged groups.
A Group's x23 appears to be i2 when all of the categories within
a group are leaf categories that directly represent data values for a
variable (e.g. in a frequency table or crosstabulation, a group of
values in a variable being tabulated) and i0 otherwise. A writer may
safely write a constant 0 in this field.
x24 is usually 0. Its meaning is unexplored.
Axes
After the dimensions come assignment of each dimension to one of the axes: layers, rows, and columns.
Axes =>
int32[n-layers] int32[n-rows] int32[n-columns]
int32*[n-layers] int32*[n-rows] int32*[n-columns]
The values of n-layers, n-rows, and n-columns each specifies
the number of dimensions displayed in layers, rows, and columns,
respectively. Any of them may be zero. Their values sum to
n-dimensions from Dimensions.
The following n-dimensions integers, in three groups, are a
permutation of the 0-based dimension numbers. The first n-layers
integers specify each of the dimensions represented by layers, the next
n-rows integers specify the dimensions represented by rows, and the
final n-columns integers specify the dimensions represented by
columns. When there is more than one dimension of a given kind, the
inner dimensions are given first. (For the layer axis, this means that
the first dimension is at the bottom of the list and the last dimension
is at the top when the current layer is displayed.)
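The layout of Axes is straightforward to read. The following sketch (function name illustrative) assumes the little-endian int32 encoding used elsewhere in the light member format and a file-like object positioned at the start of Axes:

```python
import struct

def read_axes(f, n_dimensions):
    """Read the Axes nonterminal (a sketch): three counts followed by a
    permutation of the 0-based dimension numbers, in three groups."""
    n_layers, n_rows, n_columns = struct.unpack('<3i', f.read(12))
    assert n_layers + n_rows + n_columns == n_dimensions
    dims = struct.unpack('<%di' % n_dimensions, f.read(4 * n_dimensions))
    return (dims[:n_layers],                   # layer dimensions
            dims[n_layers:n_layers + n_rows],  # row dimensions
            dims[n_layers + n_rows:])          # column dimensions
```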
Cells
The final part of an SPV light member contains the actual data.
Cells => int32[n-cells] Cell*[n-cells]
Cell => int64[index] v1(00?) Value
A Cell consists of an index and a Value. Suppose there are
\(d\) dimensions, numbered 1 through \(d\) in the order given in
the Dimensions previously, and that dimension \(i\)
has \(n_i\) categories. Consider the cell at coordinates \(x_i, 1
\le i \le d\), and note that \(0 \le x_i < n_i\). Then the index
\(k\) is calculated by the following algorithm:
let \(k = 0\).
for each \(i\) from 1 to \(d\):
\(\quad k = (n_i \times k) + x_i\)
For example, suppose there are 3 dimensions with 3, 4, and 5
categories, respectively. The cell at coordinates (1, 2, 3) has index
\(k = 5 \times (4 \times (3 \times 0 + 1) + 2) + 3 = 33\). Within a
given dimension, the index is the leaf-index in a Leaf.
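The index algorithm above is mixed-radix positional notation, with each dimension's category count as one radix. A minimal sketch (function name illustrative):

```python
def cell_index(coords, n_categories):
    """Map per-dimension coordinates to a Cell index, following the
    algorithm above: k = (n_i * k) + x_i for each dimension in order."""
    k = 0
    for n, x in zip(n_categories, coords):
        assert 0 <= x < n
        k = n * k + x
    return k
```

With the worked example above, `cell_index((1, 2, 3), (3, 4, 5))` yields 33.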
Value
Value is used throughout the SPV light member format. It boils down to
a number or a string.
Value => 00? 00? 00? 00? RawValue
RawValue =>
01 ValueMod int32[format] double[x]
| 02 ValueMod int32[format] double[x]
string[var-name] string[value-label] byte[show]
| 03 string[local] ValueMod string[id] string[c] bool[fixed]
| 04 ValueMod int32[format] string[value-label] string[var-name]
byte[show] string[s]
| 05 ValueMod string[var-name] string[var-label] byte[show]
| 06 string[local] ValueMod string[id] string[c]
| ValueMod string[template] int32[n-args] Argument*[n-args]
Argument =>
i0 Value
| int32[x] i0 Value*[x] /* x > 0 */
There are several possible encodings, which one can distinguish by the first nonzero byte in the encoding.
- 01

  The numeric value x, intended to be presented to the user formatted
  according to format, which is about the same as the format described
  for system files. The exception is that format 40 is not MTIME but
  instead approximately a synonym for F format with a different rule for
  whether a value is shown in scientific notation: a value in format 40
  is shown in scientific notation if and only if it is nonzero and its
  magnitude is less than small.

  Values of 0 or 1 or 0x10000 are sometimes seen as format. PSPP
  interprets these as F40.2. Most commonly, format has width 40 (the
  maximum).

  An x with the maximum negative double -DBL_MAX represents the
  system-missing value SYSMIS. (HIGHEST and LOWEST have not been
  observed.) See System File Format for more about these special values.
- 02

  Similar to 01, with the additional information that x is a value of
  variable var-name and has value label value-label. Both var-name and
  value-label can be the empty string, the latter very commonly.

  show determines whether to show the numeric value or the value label:

  | show | Meaning |
  |---|---|
  | 0 | Use default specified in show-values |
  | 1 | Value only |
  | 2 | Label only |
  | 3 | Both value and label |
- 03

  A text string, in two forms: c is in English, and is sometimes
  abbreviated or obscure, and local is localized to the user's locale.
  In an English-language locale, the two strings are often the same, and
  in the cases where they differ, local is more appropriate for a user
  interface, e.g. c of "Not a PxP table for MCN..." versus local of
  "Computed only for a PxP table, where P must be greater than 1."
  c and local are always either both empty or both nonempty.

  id is a brief identifying string whose form seems to resemble a
  programming language identifier, e.g. cumulative_percent or
  factor_14. It is not unique.

  fixed is:

  - 00 for text taken from user input, such as syntax fragments,
    expressions, file names, and data set names. id is always the empty
    string.

  - 01 for fixed text strings such as names of procedures or
    statistics. id is sometimes empty.
- 04

  The string value s, intended to be presented to the user formatted
  according to format. The format for a string is not too interesting,
  and the corpus contains many clearly invalid formats like A16.39 or
  A255.127 or A134.1, so readers should probably entirely disregard the
  format. PSPP only checks format to distinguish AHEX format.

  s is a value of variable var-name and has value label value-label.
  var-name is never empty but value-label is commonly empty. show has
  the same meaning as in the encoding for 02.
- 05

  Variable var-name with variable label var-label. In the corpus,
  var-name is rarely empty and var-label is often empty. show
  determines whether to show the variable name or the variable label.
  A value of 1 means to show the name, 2 to show the label, 3 to show
  both, and 0 means to use the default specified in show-variables.
- 06

  Similar to type 03, with fixed assumed to be true.
- otherwise

  When the first byte of a RawValue is not one of the above, the
  RawValue starts with a ValueMod, whose syntax is described in the next
  section. (A ValueMod always begins with byte 31 or 58.)

  This case is a template string, analogous to printf, followed by one
  or more Arguments, each of which has one or more values. The template
  string is copied directly into the output except for the following
  special syntax:

  - \%
  - \:
  - \[
  - \]

    Each of these expands to the character following \, to escape
    characters that have special meaning in template strings. These are
    effective inside and outside the [...] syntax forms described below.

  - \n

    Expands to a new-line, inside or outside the [...] forms described
    below.

  - ^I

    Expands to a formatted version of argument I, which must have only a
    single value. For example, ^1 expands to the first argument's value.

  - [:A:]I

    Expands A for each of the values in I. A should contain one or more
    ^J conversions, which are drawn from the values for argument I in
    order. Some examples from the corpus:

    - [:^1:]1

      All of the values for the first argument, concatenated.

    - [:^1\n:]1

      Expands to the values for the first argument, each followed by a
      new-line.

    - [:^1 = ^2:]2

      Expands to X = Y where X is the second argument's first value and
      Y is its second value. (This would be used only if the argument
      has two values. If there were more values, the second and third
      values would be directly concatenated, which would look funny.)

  - [A:B:]I

    This extends the previous form so that the first values are expanded
    using A and later values are expanded using B. For an unknown
    reason, within A the ^J conversions are instead written as %J. Some
    examples from the corpus:

    - [%1:*^1:]1

      Expands to all of the values for the first argument, separated by
      *.

    - [%1 = %2:, ^1 = ^2:]1

      Given appropriate values for the first argument, expands to
      X = 1, Y = 2, Z = 3.

    - [%1:, ^1:]1

      Given appropriate values, expands to 1, 2, 3.

  The template string is localized to the user's locale.
A writer may safely omit all of the optional 00 bytes at the beginning of a Value, except that it should write a single 00 byte before a templated Value.
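To make the template syntax concrete, here is a minimal expander for the conversions described above. It is a sketch, not a definitive implementation: it takes already-formatted argument values as lists of strings, and it does not handle an escaped ] inside a [...] form.

```python
def _fill(sub, it):
    """Expand one [...] body: each ^J or %J conversion draws the next
    value from `it`; backslash escapes the following character."""
    out, i = [], 0
    while i < len(sub):
        if sub[i] == '\\':
            out.append('\n' if sub[i + 1] == 'n' else sub[i + 1])
            i += 2
        elif sub[i] in '^%' and i + 1 < len(sub) and sub[i + 1].isdigit():
            out.append(next(it))   # raises StopIteration when exhausted
            i += 2
        else:
            out.append(sub[i])
            i += 1
    return ''.join(out)

def expand_template(template, args):
    """Expand a template string; `args` is a list of lists of strings,
    one inner list per Argument."""
    out, i = [], 0
    while i < len(template):
        c = template[i]
        if c == '\\':
            out.append('\n' if template[i + 1] == 'n' else template[i + 1])
            i += 2
        elif c == '^' and template[i + 1].isdigit():
            out.append(args[int(template[i + 1]) - 1][0])
            i += 2
        elif c == '[':
            j = template.index(']', i)
            # Bracket content looks like ":A:" or "A:B:"; an empty first
            # part means there is no special form for the first values.
            first, rest = template[i + 1:j - 1].split(':', 1)
            values = iter(args[int(template[j + 1]) - 1])
            k = 0
            while True:
                sub = (first or rest) if k == 0 else rest
                try:
                    out.append(_fill(sub, values))
                except StopIteration:
                    break          # all of the argument's values used
                k += 1
                if not any(ch in '^%' for ch in sub):
                    break          # no conversions: avoid looping
            i = j + 2
        else:
            out.append(c)
            i += 1
    return ''.join(out)
```

For example, `expand_template('[%1:*^1:]1', [['a', 'b', 'c']])` produces `a*b*c`.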
ValueMod
A ValueMod can specify special modifications to a Value.
ValueMod =>
58
| 31
int32[n-refs] int16*[n-refs]
int32[n-subscripts] string*[n-subscripts]
v1(00 (i1 | i2) 00? 00? int32 00? 00?)
v3(count(TemplateString StylePair))
TemplateString => count((count((i0 (58 | 31 55))?) (58 | 31 string[id]))?)
StylePair =>
(31 FontStyle | 58)
(31 CellStyle | 58)
FontStyle =>
bool[bold] bool[italic] bool[underline] bool[show]
string[fg-color] string[bg-color]
string[typeface] byte[size]
CellStyle =>
int32[halign] int32[valign] double[decimal-offset]
int16[left-margin] int16[right-margin]
int16[top-margin] int16[bottom-margin]
A ValueMod that begins with 31 specifies special modifications to
a Value.
Each of the n-refs integers is a reference to a
Footnote by a 0-based index. Footnote markers are
shown appended to the main text of the Value, as superscripts or
subscripts.
The subscripts, if present, are strings to append to the main text
of the Value, as subscripts. Each subscript text is a brief indicator,
e.g. a or b, with its meaning indicated by the table caption. When
multiple subscripts are present, they are displayed separated by commas.
The id inside the TemplateString, if present, is a template string
for substitutions using the syntax explained previously. It appears
to be an English-language version of the localized template string in
the Value in which the Template is nested. A writer may safely omit
the optional fixed data in TemplateString.
FontStyle and CellStyle, if present, change the style for this
individual Value. In FontStyle, bold, italic, and underline
control the particular style. show is ordinarily 1; if it is 0,
then the cell data is not shown. fg-color and bg-color are
strings in the format #rrggbb, e.g. #ff0000 for red or #ffffff
for white. The empty string is occasionally observed also. The
size is a font size in units of 1/128 inch.
In CellStyle, halign specifies horizontal alignment:
halign | Meaning |
|---|---|
| 0 | Center |
| 2 | Left |
| 4 | Right |
| 6 | Decimal |
| 0xffffffad | Mixed |
For decimal alignment, decimal-offset is the decimal point's offset
from the right side of the cell, in pt.
valign specifies vertical alignment:
valign | Meaning |
|---|---|
| 0 | Center |
| 1 | Top |
| 3 | Bottom |
left-margin, right-margin, top-margin, and bottom-margin are
in pt.
Legacy Detail Member Binary Format
Whereas the light binary format represents everything about a given pivot table, the legacy binary format conceptually consists of a number of named sources, each of which consists of a number of named variables, each of which is a 1-dimensional array of numbers or strings or a mix. Thus, the legacy binary member format is quite simple.
This section uses the same context-free grammar notation as in the previous section, with the following additions:
- vAF(X)

  In a version 0xaf legacy member, X; in other versions, nothing. (The
  legacy member header indicates the version; see below.)

- vB0(X)

  In a version 0xb0 legacy member, X; in other versions, nothing.
A legacy detail member .bin has the following overall format:
LegacyBinary =>
00 byte[version] int16[n-sources] int32[member-size]
Metadata*[n-sources]
#Data*[n-sources]
#Strings?
version is a version number that affects the interpretation of some
of the other data in the member. Versions 0xaf and 0xb0 are known. We
will refer to "version 0xaf" and "version 0xb0" members later on.
A legacy member consists of n-sources data sources, each of which
has Metadata and Data.
member-size is the size of the legacy binary member, in bytes.
The Data and Strings above are commented out because the Metadata has some oddities that mean that the Data sometimes seems to start at an unexpected place. The following section goes into detail.
Metadata
Metadata =>
int32[n-values] int32[n-variables] int32[data-offset]
vAF(byte*28[source-name])
vB0(byte*64[source-name] int32[x])
A data source has n-variables variables, each with n-values data
values.
source-name is a 28- or 64-byte string padded on the right with
0-bytes. The names that appear in the corpus are very generic: usually
tableData for pivot table data or source0 for chart data.
A given Metadata's data-offset is the offset, in bytes, from the
beginning of the member to the start of the corresponding Data. This
allows programs to skip to the beginning of the data for a particular
source. In every case in the corpus, the Data follow the Metadata in
the same order, but it is important to use data-offset instead of
reading sequentially through the file because of the exception described
below.
One SPV file in the corpus has legacy binary members with version
0xb0 but a 28-byte source-name field (and only a single source). In
practice, this means that the 64-byte source-name used in version 0xb0
has a lot of 0-bytes in the middle followed by the variable-name of
the following Data. As long as a reader treats the first 0-byte in the
source-name as terminating the string, it can properly interpret these
members.
The meaning of x in version 0xb0 is unknown.
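A reading sketch for Metadata (function name illustrative; assumes little-endian fields). The source-name handling follows the advice above, so it also copes with the exceptional corpus file:

```python
import struct

def read_metadata(f, version):
    """Read one Metadata record (a sketch); `version` is the byte from
    the LegacyBinary header, 0xaf or 0xb0."""
    n_values, n_variables, data_offset = struct.unpack('<3i', f.read(12))
    raw = f.read(28 if version == 0xaf else 64)
    # Treat the first 0-byte as terminating the name, which also copes
    # with the corpus file that pads a 28-byte name to 64 bytes.
    source_name = raw.split(b'\0', 1)[0].decode('ascii', 'replace')
    if version == 0xb0:
        struct.unpack('<i', f.read(4))  # x: meaning unknown
    return n_values, n_variables, data_offset, source_name
```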
Numeric Data
Data => Variable*[n-variables]
Variable => byte*288[variable-name] double*[n-values]
Data follow the Metadata in the legacy binary format, with sources
in the same order (but readers should use the data-offset in
Metadata records, rather than reading sequentially). Each Variable
begins with a variable-name that generally indicates its role in the
pivot table, e.g. "cell", "cellFormat", "dimension0categories",
"dimension0group0", followed by the numeric data, one double per
datum. A double with the maximum negative double -DBL_MAX
represents the system-missing value SYSMIS.
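The steps above can be sketched as follows (function name illustrative; assumes little-endian doubles and a stream positioned at the Variable, with n-values taken from the corresponding Metadata):

```python
import struct
import sys

SYSMIS = -sys.float_info.max   # -DBL_MAX marks system-missing values

def read_numeric_variable(f, n_values):
    """Read one Variable from a Data section (a sketch): a 288-byte
    0-padded name followed by one double per datum."""
    name = f.read(288).split(b'\0', 1)[0].decode('ascii', 'replace')
    data = struct.unpack('<%dd' % n_values, f.read(8 * n_values))
    return name, [None if x == SYSMIS else x for x in data]
```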
String Data
Strings => SourceMaps[maps] Labels
SourceMaps => int32[n-maps] SourceMap*[n-maps]
SourceMap => string[source-name] int32[n-variables] VariableMap*[n-variables]
VariableMap => string[variable-name] int32[n-data] DatumMap*[n-data]
DatumMap => int32[value-idx] int32[label-idx]
Labels => int32[n-labels] Label*[n-labels]
Label => int32[frequency] string[label]
Each variable may include a mix of numeric and string data values.
If a legacy binary member contains any string data, Strings is present;
otherwise, it ends just after the last Data element.
The string data overlays the numeric data. When a variable includes
any string data, its Variable represents the string values with a
SYSMIS or NaN placeholder. (Not all such values need be
placeholders.)
Each SourceMap provides a mapping between SYSMIS or NaN values in
source source-name and the string data that they represent.
n-variables is the number of variables in the source that include
string data. More precisely, it is the 1-based index of the last
variable in the source that includes any string data; thus, it would
be 4 if there are 5 variables and only the fourth one includes string
data.
A VariableMap repeats its variable's name, but variables are always
present in the same order as the source, starting from the first
variable, without skipping any even if they have no string values.
Each VariableMap contains DatumMap nonterminals, each of which
maps from a 0-based index within its variable's data to a 0-based
label index, e.g. pair value-idx = 2, label-idx = 3, means that
the third data value (which must be SYSMIS or NaN) is to be replaced
by the string of the fourth Label.
The labels themselves follow the pairs. The valuable part of each
label is the string label. Each label also includes a frequency
that reports the number of DatumMaps that reference it (although
this is not useful).
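Applying the overlay described above is a simple substitution. A minimal sketch (function name illustrative):

```python
def overlay_strings(values, datum_maps, labels):
    """Replace placeholder values with their string data (a sketch).
    `datum_maps` is a list of (value-idx, label-idx) pairs and `labels`
    the list of label strings, both 0-based as in the DatumMap encoding."""
    out = list(values)
    for value_idx, label_idx in datum_maps:
        out[value_idx] = labels[label_idx]
    return out
```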
Legacy Detail XML Member Format
The design of the detail XML format is not what one would end up with for describing pivot tables. This is because it is a special case of a much more general format ("visualization XML" or "VizML") that can describe a wide range of visualizations. Most of this generality is overkill for tables, and so we end up with a funny subset of a general-purpose format.
An XML Schema for VizML is available, distributed with SPSS binaries, under a nonfree license. It contains documentation that is occasionally helpful.
This section describes the detail XML format using the same notation
already used for the structure XML format. See
src/output/spv/detail-xml.grammar in the PSPP source tree for the
full grammar that it uses for parsing.
The important elements of the detail XML format are:
- Assignment of variables to axes. A variable can appear as columns, or
  rows, or layers. The faceting element and its sub-elements describe
  this assignment.

- Styles and other annotations.
This description is not detailed enough to write legacy tables. Instead, write tables in the light binary format.
- The visualization Element
- Variable Elements
- The extension Element
- The graph Element
- The location Element
- The faceting Element
- The facetLayout Element
- The label Element
- The setCellProperties Element
- The setFormat Element
- The interval Element
- The style Element
- The labelFrame Element
- Legacy Properties
The visualization Element
visualization
:creator
:date
:lang
:name
:style[style_ref]=ref style
:type
:version
:schemaLocation?
=> visualization_extension?
userSource
(sourceVariable | derivedVariable)+
categoricalDomain?
graph
labelFrame[lf1]*
container?
labelFrame[lf2]*
style+
layerController?
extension[visualization_extension]
:numRows=int?
:showGridline=bool?
:minWidthSet=(true)?
:maxWidthSet=(true)?
=> EMPTY
userSource :missing=(listwise | pairwise)? => EMPTY
categoricalDomain => variableReference simpleSort
simpleSort :method[sort_method]=(custom) => categoryOrder
container :style=ref style => container_extension? location+ labelFrame*
extension[container_extension] :combinedFootnotes=(true) => EMPTY
layerController
:source=(tableData)
:target=ref label?
=> EMPTY
The visualization element is the root of detail XML member. It has
the following attributes:
- creator

  The version of the software that created this SPV file, as a string
  of the form xxyyzz, which represents software version xx.yy.zz, e.g.
  160001 is version 16.0.1. The corpus includes major versions 16
  through 19.

- date

  The date on which the file was created, as a string of the form
  YYYY-MM-DD.

- lang

  The locale used for output, in Windows format, which is similar to
  the format used in Unix with the underscore replaced by a hyphen,
  e.g. en-US, en-GB, el-GR, sr-Cryl-RS.

- name

  The title of the pivot table, localized to the output language.

- style

  The base style for the pivot table. In every example in the corpus,
  the style element has no attributes other than id.

- type

  A floating-point number. The meaning is unknown.

- version

  The visualization schema version number. In the corpus, the value is
  one of 2.4, 2.5, 2.7, and 2.8.
The userSource element has no visible effect.
The extension element as a child of visualization has the
following attributes.
- numRows

  An integer that presumably defines the number of rows in the
  displayed pivot table.

- showGridline

  Always set to false in the corpus.

- minWidthSet
- maxWidthSet

  Always set to true in the corpus.
The extension element as a child of container has the following
attribute:

- combinedFootnotes

  Meaning unknown.
The categoricalDomain and simpleSort elements have no visible
effect.
The layerController element has no visible effect.
Variable Elements
A "variable" in detail XML is a 1-dimensional array of data. Each element of the array may, independently, have string or numeric content. All of the variables in a given detail XML member either have the same number of elements or have zero elements.
Two different elements define variables and their content:
- sourceVariable

  These variables' data comes from the associated tableData.bin member.

- derivedVariable

  These variables are defined in terms of a mapping function from a
  source variable, or they are empty.
A variable named cell always exists. This variable holds the data
displayed in the table.
Variables in detail XML roughly correspond to the dimensions in a light detail member. Each dimension has the following variables with stylized names, where N is a number for the dimension starting from 0:
- dimensionNcategories

  The dimension's leaf categories.

- dimensionNgroup0

  Present only if the dimension's categories are grouped, this variable
  holds the group labels for the categories. Grouping is inferred
  through adjacent identical labels. Categories that are not part of a
  group have empty-string data in this variable.

- dimensionNgroup1

  Present only if the first-level groups are further grouped, this
  variable holds the labels for the second-level groups. There can be
  additional variables with further levels of grouping.

- dimensionN

  An empty variable.

Determining the data for a (non-empty) variable is a multi-step process:

1. Draw initial data from its source, for a sourceVariable, or from
   another named variable, for a derivedVariable.

2. Apply mappings from valueMapEntry elements within the
   derivedVariable element, if any.

3. Apply mappings from relabel elements within a format or
   stringFormat element in the sourceVariable or derivedVariable
   element, if any.

4. If the variable is a sourceVariable with a labelVariable attribute,
   and there were no mappings to apply in previous steps, then replace
   each element of the variable by the corresponding value in the label
   variable.
A single variable's data can be modified in two of the steps, if both
valueMapEntry and relabel are used. The following example from
the corpus maps several integers to 2, then maps 2 in turn to the
string "Input":
<derivedVariable categorical="true" dependsOn="dimension0categories"
id="dimension0group0map" value="map(dimension0group0)">
<stringFormat>
<relabel from="2" to="Input"/>
<relabel from="10" to="Missing Value Handling"/>
<relabel from="14" to="Resources"/>
<relabel from="0" to=""/>
<relabel from="1" to=""/>
<relabel from="13" to=""/>
</stringFormat>
<valueMapEntry from="2;3;5;6;7;8;9" to="2"/>
<valueMapEntry from="10;11" to="10"/>
<valueMapEntry from="14;15" to="14"/>
<valueMapEntry from="0" to="0"/>
<valueMapEntry from="1" to="1"/>
<valueMapEntry from="13" to="13"/>
</derivedVariable>
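The valueMapEntry and relabel mapping steps described above can be sketched as follows (function and parameter names illustrative; values are handled as strings, matching the XML attributes):

```python
def derive_values(source, value_map_entries, relabels):
    """Apply valueMapEntry mappings, then relabel mappings, to a
    variable's data (a sketch of the two mapping steps)."""
    vmap = {}
    for frm, to in value_map_entries:
        for v in frm.split(';'):        # e.g. from="2;3;5;6;7;8;9"
            vmap[v] = to
    rmap = dict(relabels)
    out = []
    for v in source:
        v = vmap.get(v, v)              # valueMapEntry step
        out.append(rmap.get(v, v))      # relabel step
    return out
```

With the mappings from the example, a source value of 3 first maps to 2 and then to the string "Input".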
The sourceVariable Element
sourceVariable
:id
:categorical=(true)
:source
:domain=ref categoricalDomain?
:sourceName
:dependsOn=ref sourceVariable?
:label?
:labelVariable=ref sourceVariable?
=> variable_extension* (format | stringFormat)?
This element defines a variable whose data comes from the
tableData.bin member that corresponds to this .xml.
This element has the following attributes.
- id

  An id is always present because this element exists to be referenced
  from other elements.

- categorical

  Always set to true.

- source

  Always set to tableData, the source-name in the corresponding
  tableData.bin member (see Metadata).

- sourceName

  The name of a variable within the source, corresponding to the
  variable-name in the tableData.bin member (see Numeric Data).

- label

  The variable label, if any.

- labelVariable

  The variable-name of a variable whose string values correspond
  one-to-one with the values of this variable and are suitable for use
  as value labels.

- dependsOn

  This attribute doesn't affect the display of a table.
The derivedVariable Element
derivedVariable
:id
:categorical=(true)
:value
:dependsOn=ref sourceVariable?
=> variable_extension* (format | stringFormat)? valueMapEntry*
Like sourceVariable, this element defines a variable whose values
can be used elsewhere in the visualization. Instead of being read from
a data source, the variable's data are defined by a mathematical
expression.
This element has the following attributes.
- id

  An id is always present because this element exists to be referenced
  from other elements.

- categorical

  Always set to true.

- value

  An expression that defines the variable's value. In theory this could
  be an arbitrary expression in terms of constants, functions, and
  other variables, e.g. (VAR1 + VAR2) / 2. In practice, the corpus
  contains only the following forms of expressions:

  - constant(0)
  - constant(VARIABLE)

    All zeros. The reason why a variable is sometimes named is unknown.
    Sometimes the "variable name" has spaces in it.

  - map(VARIABLE)

    Transforms the values in the named VARIABLE using the valueMapEntry
    elements contained within the element.

- dependsOn

  This attribute doesn't affect the display of a table.
The valueMapEntry Element
valueMapEntry :from :to => EMPTY
A valueMapEntry element defines a mapping from one or more values
of a source expression to a target value. (In the corpus, the source
expression is always just the name of a variable.) Each target value
requires a separate valueMapEntry. If multiple source values map to
the same target value, they can be combined or separate.
In the corpus, all of the source and target values are integers.
valueMapEntry has the following attributes.
- from

  A source value, or multiple source values separated by semicolons,
  e.g. 0 or 13;14;15;16.

- to

  The target value, e.g. 0.
The extension Element
This is a general-purpose "extension" element. Readers that don't understand a given extension should be able to safely ignore it. The attributes on this element, and their meanings, vary based on the context. Each known usage is described separately below. The current extensions use attributes exclusively, without any nested elements.
container Parent Element
extension[container_extension] :combinedFootnotes=(true) => EMPTY
With container as its parent element, extension has the following
attributes.
combinedFootnotes
Always set to true in the corpus.
sourceVariable and derivedVariable Parent Element
extension[variable_extension] :from :helpId => EMPTY
With sourceVariable or derivedVariable as its parent element,
extension has the following attributes. A given parent element
often contains several extension elements that specify the meaning
of the source data's variables or sources, e.g.
<extension from="0" helpId="corrected_model"/>
<extension from="3" helpId="error"/>
<extension from="4" helpId="total_9"/>
<extension from="5" helpId="corrected_total"/>
More commonly they are less helpful, e.g.
<extension from="0" helpId="notes"/>
<extension from="1" helpId="notes"/>
<extension from="2" helpId="notes"/>
<extension from="5" helpId="notes"/>
<extension from="6" helpId="notes"/>
<extension from="7" helpId="notes"/>
<extension from="8" helpId="notes"/>
<extension from="12" helpId="notes"/>
<extension from="13" helpId="no_help"/>
<extension from="14" helpId="notes"/>
- from

  An integer or a name like "dimension0".

- helpId

  An identifier.
The graph Element
graph
:cellStyle=ref style
:style=ref style
=> location+ coordinates faceting facetLayout interval
coordinates => EMPTY
graph has the following attributes.
cellStyle
style
Each of these is the id of a style element. The former is the default style for individual cells, the latter for the entire table.
The location Element
location
:part=(height | width | top | bottom | left | right)
:method=(sizeToContent | attach | fixed | same)
:min=dimension?
:max=dimension?
:target=ref (labelFrame | graph | container)?
:value?
=> EMPTY
Each instance of this element specifies where some part of the table
frame is located. All the examples in the corpus have four instances
of this element, one for each of the parts height, width, left,
and top. Some examples in the corpus add a fifth for part bottom,
even though it is not clear how all of top, bottom, and height
can be honored at the same time. In any case, location seems to
have little importance in representing tables; a reader can safely
ignore it.
- part

  The part of the table being located.

- method

  How the location is determined:

  - sizeToContent

    Based on the natural size of the table. Observed only for parts
    height and width.

  - attach

    Based on the location specified in target. Observed only for parts
    top and bottom.

  - fixed

    Using the value in value. Observed only for parts top, bottom, and
    left.

  - same

    Same as the specified target. Observed only for part left.

- min

  Minimum size. Only observed with value 100pt. Only observed for part
  width.

- target

  Required when method is attach or same, not observed otherwise. This
  identifies an element to attach to. Observed with the ID of title,
  footnote, graph, and other elements.

- value

  Required when method is fixed, not observed otherwise. Observed
  values are 0%, 0px, 1px, and 3px on parts top and left, and 100% on
  part bottom.
The faceting Element
faceting => layer[layers1]* cross layer[layers2]*
cross => (unity | nest) (unity | nest)
unity => EMPTY
nest => variableReference[vars]+
variableReference :ref=ref (sourceVariable | derivedVariable) => EMPTY
layer
:variable=ref (sourceVariable | derivedVariable)
:value
:visible=bool?
:method[layer_method]=(nest)?
:titleVisible=bool?
=> EMPTY
The faceting element describes the row, column, and layer structure
of the table. Its cross child determines the row and column
structure, and each layer child (if any) represents a layer. Layers
may appear before or after cross.
The cross element describes the row and column structure of the
table. It has exactly two children, the first of which describes the
table's columns and the second the table's rows. Each child is a nest
element if the table has any dimensions along the axis in question,
otherwise a unity element.
A nest element consists of one or more dimensions listed from
innermost to outermost, each represented by variableReference child
elements. Each variable in a dimension is listed in order. See
Variable Elements, for information on the
variables that comprise a dimension.
A nest can contain a single dimension, e.g.:
<nest>
<variableReference ref="dimension0categories"/>
<variableReference ref="dimension0group0"/>
<variableReference ref="dimension0"/>
</nest>
A nest can contain multiple dimensions, e.g.:
<nest>
<variableReference ref="dimension1categories"/>
<variableReference ref="dimension1group0"/>
<variableReference ref="dimension1"/>
<variableReference ref="dimension0categories"/>
<variableReference ref="dimension0"/>
</nest>
A nest may have no dimensions, in which case it still has one
variableReference child, which references a derivedVariable whose
value attribute is constant(0). In the corpus, such a
derivedVariable has row or column, respectively, as its id.
This is equivalent to using a unity element in place of nest.
A variableReference element refers to a variable through its ref
attribute.
Each layer element represents a dimension, e.g.:
<layer value="0" variable="dimension0categories" visible="true"/>
<layer value="dimension0" variable="dimension0" visible="false"/>
layer has the following attributes.
- variable

  Refers to a sourceVariable or derivedVariable element.

- value

  The value to select. For a category variable, this is always 0; for
  a data variable, it is the same as the variable attribute.

- visible

  Whether the layer is visible. Generally, category layers are visible
  and data layers are not, but sometimes this attribute is omitted.

- method

  When present, this is always nest.
The facetLayout Element
facetLayout => tableLayout setCellProperties[scp1]*
facetLevel+ setCellProperties[scp2]*
tableLayout
:verticalTitlesInCorner=bool
:style=ref style?
:fitCells=(ticks both)?
=> EMPTY
The facetLayout element and its descendants control styling for the
table.
Its tableLayout child has the following attributes:

- verticalTitlesInCorner

  If true, in the absence of corner text, row headings will be
  displayed in the corner.

- style

  Refers to a style element.

- fitCells

  Meaning unknown.
The facetLevel Element
facetLevel :level=int :gap=dimension? => axis
axis :style=ref style => label? majorTicks
majorTicks
:labelAngle=int
:length=dimension
:style=ref style
:tickFrameStyle=ref style
:labelFrequency=int?
:stagger=bool?
=> gridline?
gridline
:style=ref style
:zOrder=int
=> EMPTY
Each facetLevel describes a variableReference or layer, and a
table has one facetLevel element for each such element. For example,
an SPV detail member that contains four variableReference elements and
two layer elements will contain six facetLevel elements.
In the corpus, facetLevel elements and the elements that they
describe are always in the same order. The correspondence may also be
observed in two other ways. First, one may use the level attribute,
described below. Second, in the corpus, a facetLevel always has an
id that is the same as the id of the element it describes with
_facetLevel appended. One should not formally rely on this, of
course, but it is usefully indicative.
- level: A 1-based index into the variableReference and layer elements, e.g. a facetLevel with a level of 1 describes the first variableReference in the SPV detail member, and in a member with four variableReference elements, a facetLevel with a level of 5 describes the first layer in the member.
- gap: Always observed as 0pt.
Each facetLevel contains an axis, which in turn may contain a
label for the facetLevel and does contain a
majorTicks element.
- labelAngle: Normally 0. The value -90 causes inner column or outer row labels to be rotated vertically.
- style, tickFrameStyle: Each refers to a style element. style is the style of the tick labels, tickFrameStyle the style for the frames around the labels.
The label Element
label
:style=ref style
:textFrameStyle=ref style?
:purpose=(title | subTitle | subSubTitle | layer | footnote)?
=> text+ | descriptionGroup
descriptionGroup
:target=ref faceting
:separator?
=> (description | text)+
description :name=(variable | value) => EMPTY
text
:usesReference=int?
:definesReference=int?
:position=(subscript | superscript)?
:style=ref style
=> TEXT
This element represents a label on some aspect of the table.
- style, textFrameStyle: Each of these refers to a style element. style is the style of the label text, textFrameStyle the style for the frame around the label.
- purpose: The kind of entity being labeled.
A descriptionGroup concatenates one or more elements to form a
label. Each element can be a text element, which contains literal
text, or a description element that substitutes a value or a variable
name.
- target: The id of an element being described. In the corpus, this is always faceting.
- separator: A string to separate the descriptions of multiple groups, if the target has more than one. In the corpus, this is always a new-line.
Typical contents for a descriptionGroup are a value by itself:
<description name="value"/>
or a variable and its value, separated by a colon:
<description name="variable"/><text>:</text><description name="value"/>
A description is like a macro that expands to some property of the
target of its parent descriptionGroup. The name attribute specifies
the property.
The setCellProperties Element
setCellProperties
:applyToConverse=bool?
=> (setStyle | setFrameStyle | setFormat | setMetaData)* union[union_]?
The setCellProperties element sets style properties of cells or row
or column labels.
Interpreting setCellProperties requires answering two questions:
which cells or labels to style, and what styles to use.
Which Cells?
union => intersect+
intersect => where+ | intersectWhere | alternating | EMPTY
where
:variable=ref (sourceVariable | derivedVariable)
:include
=> EMPTY
intersectWhere
:variable=ref (sourceVariable | derivedVariable)
:variable2=ref (sourceVariable | derivedVariable)
=> EMPTY
alternating => EMPTY
When union is present with intersect children, each of those
children specifies a group of cells that should be styled, and the total
group is all those cells taken together. When union is absent, every
cell is styled. One attribute on setCellProperties affects the choice
of cells:
- applyToConverse: If true, this inverts the meaning of the cell selection: the selected cells are the ones not designated. This is confusing, given the additional restrictions of union, but in the corpus applyToConverse is never present along with union.
An intersect specifies restrictions on the cells to be matched.
Each where child specifies which values of a given variable to
include. The attributes of where are:
- variable: Refers to a variable, e.g. dimension0categories. Only "categories" variables make sense here, but other variables, e.g. dimension0group0map, are sometimes seen. The reader may ignore these.
- include: A value, or multiple values separated by semicolons, e.g. 0 or 13;14;15;16.
PSPP ignores setCellProperties when intersectWhere is present.
What Styles?
setStyle
:target=ref (labeling | graph | interval | majorTicks)
:style=ref style
=> EMPTY
setMetaData :target=ref graph :key :value => EMPTY
setFormat
:target=ref (majorTicks | labeling)
:reset=bool?
=> format | numberFormat | stringFormat+ | dateTimeFormat | elapsedTimeFormat
setFrameStyle
:style=ref style
:target=ref majorTicks
=> EMPTY
The set* children of setCellProperties determine the styles to
set.
When setCellProperties contains a setFormat whose target
references a labeling element, or if it contains a setStyle that
references a labeling or interval element, the setCellProperties
sets the style for table cells. The format from the setFormat, if
present, replaces the cells' format. The style from the setStyle that
references labeling, if present, replaces the label's font and cell
styles, except that the background color is taken instead from the
interval's style, if present.
When setCellProperties contains a setFormat whose target
references a majorTicks element, or if it contains a setStyle whose
target references a majorTicks, or if it contains a setFrameStyle
element, the setCellProperties sets the style for row or column
labels. In this case, the setCellProperties always contains a single
where element whose variable designates the variable whose labels
are to be styled. The format from the setFormat, if present, replaces
the labels' format. The style from the setStyle that references
majorTicks, if present, replaces the labels' font and cell styles,
except that the background color is taken instead from the
setFrameStyle's style, if present.
When setCellProperties contains a setStyle whose target
references a graph element, and one that references a labeling
element, and the union element contains alternating, the
setCellProperties sets the alternate foreground and background colors
for the data area. The foreground color is taken from the style
referenced by the setStyle that targets the graph, the background
color from the setStyle for labeling.
A reader may ignore a setCellProperties that only contains
setMetaData, as well as setMetaData within other
setCellProperties.
A reader may ignore a setCellProperties whose only set* child is
a setStyle that targets the graph element.
The setStyle Element
setStyle
:target=ref (labeling | graph | interval | majorTicks)
:style=ref style
=> EMPTY
This element associates a style with the target.
- target: The id of an element whose style is to be set.
- style: The id of a style element that identifies the style to set on the target.
The setFormat Element
setFormat
:target=ref (majorTicks | labeling)
:reset=bool?
=> format | numberFormat | stringFormat+ | dateTimeFormat | elapsedTimeFormat
This element sets the format of the target, where "format" means the
SPSS print format for a variable.
The details of this element vary depending on the schema version, as
declared in the root visualization
element's version attribute. A reader
can interpret the content without knowing the schema version.
The setFormat element itself has the following attributes.
- target: Refers to an element whose format is to be set.
- reset: If this is true, this format replaces the target's previous format. If it is false, this format modifies the previous format.
The numberFormat Element
numberFormat
:minimumIntegerDigits=int?
:maximumFractionDigits=int?
:minimumFractionDigits=int?
:useGrouping=bool?
:scientific=(onlyForSmall | whenNeeded | true | false)?
:small=real?
:prefix?
:suffix?
=> affix*
Specifies a format for displaying a number. The available options
are a superset of those available from PSPP print formats. PSPP chooses
a print format type for a numberFormat as follows:
- If scientific is true, uses E format.
- If prefix is $, uses DOLLAR format.
- If suffix is %, uses PCT format.
- If useGrouping is true, uses COMMA format.
- Otherwise, uses F format.
For translating to a print format, PSPP uses maximumFractionDigits
as the number of decimals, unless that attribute is missing or out of
the range [0,15], in which case it uses 2 decimals.
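The selection rules above can be sketched as a small function. This is an illustrative reimplementation, not PSPP's actual code; the function name and the dict-of-strings representation of the XML attributes are assumptions:

```python
def choose_print_format(attrs):
    """Map numberFormat attributes (a dict of attribute-name strings to
    string values, as parsed from the XML) to a PSPP print format type
    and decimal count, following the rules described above."""
    if attrs.get("scientific") == "true":
        fmt = "E"
    elif attrs.get("prefix") == "$":
        fmt = "DOLLAR"
    elif attrs.get("suffix") == "%":
        fmt = "PCT"
    elif attrs.get("useGrouping") == "true":
        fmt = "COMMA"
    else:
        fmt = "F"

    # maximumFractionDigits gives the number of decimals, falling back
    # to 2 when it is missing, unparseable, or outside [0, 15].
    try:
        decimals = int(attrs["maximumFractionDigits"])
    except (KeyError, ValueError):
        decimals = 2
    if not 0 <= decimals <= 15:
        decimals = 2
    return fmt, decimals
```

For example, a numberFormat with suffix="%" and maximumFractionDigits="1" would map to PCT with 1 decimal.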
- minimumIntegerDigits: Minimum number of digits to display before the decimal point. Always observed as 0.
- maximumFractionDigits, minimumFractionDigits: Maximum or minimum, respectively, number of digits to display after the decimal point. The observed values of each attribute range from 0 to 9.
- useGrouping: Whether to use the grouping character to group digits in large numbers.
- scientific: Controls when and whether the number is formatted in scientific notation. It takes the following values:
  - onlyForSmall: Use scientific notation only when the number's magnitude is smaller than the value of the small attribute.
  - whenNeeded: Use scientific notation when the number will not otherwise fit in the available space.
  - true: Always use scientific notation. Not observed in the corpus.
  - false: Never use scientific notation. A number that won't otherwise fit will be replaced by an error indication (see the errorCharacter attribute). Not observed in the corpus.
- small: Only present when the scientific attribute is onlyForSmall, this is a numeric magnitude below which the number is formatted in scientific notation. The values 0 and 0.0001 have been observed. The value 0 seems like a pathological choice, since no real number has a magnitude less than 0; perhaps in practice such a choice is equivalent to setting scientific to false.
- prefix, suffix: Specifies a prefix or a suffix to apply to the formatted number. Only suffix has been observed, with value %.
The stringFormat Element
stringFormat => relabel* affix*
relabel :from=real :to => EMPTY
The stringFormat element specifies how to display a string. By
default, a string is displayed verbatim, but relabel can change it.
The relabel element appears as a child of stringFormat (and of
format, when it is used to format strings). It specifies how to
display a given value. It is used to implement value labels and to
display the system-missing value in a human-readable way. It has the
following attributes:
- from: The value to map. In the corpus this is an integer or the system-missing value -1.797693134862316E300.
- to: The string to display in place of the value of from. In the corpus this is a wide variety of value labels; the system-missing value is mapped to ".".
The dateTimeFormat Element
dateTimeFormat
:baseFormat[dt_base_format]=(date | time | dateTime)
:separatorChars?
:mdyOrder=(dayMonthYear | monthDayYear | yearMonthDay)?
:showYear=bool?
:yearAbbreviation=bool?
:showQuarter=bool?
:quarterPrefix?
:quarterSuffix?
:showMonth=bool?
:monthFormat=(long | short | number | paddedNumber)?
:showWeek=bool?
:weekPadding=bool?
:weekSuffix?
:showDayOfWeek=bool?
:dayOfWeekAbbreviation=bool?
:dayPadding=bool?
:dayOfMonthPadding=bool?
:hourPadding=bool?
:minutePadding=bool?
:secondPadding=bool?
:showDay=bool?
:showHour=bool?
:showMinute=bool?
:showSecond=bool?
:showMillis=bool?
:dayType=(month | year)?
:hourFormat=(AMPM | AS_24 | AS_12)?
=> affix*
This element appears only in schema version 2.5 and earlier.
Data to be formatted in date formats is stored as strings in legacy
data, in the format yyyy-mm-ddTHH:MM:SS.SSS and must be parsed and
reformatted by the reader.
The following attribute is required.
- baseFormat: Specifies whether a date and time are both to be displayed, or just one of them.
Many of the attributes' meanings are obvious. The following seem to be worth documenting.
- separatorChars: Exactly four characters. In order, these are used for: decimal point, grouping, date separator, time separator. Always .,-:.
- mdyOrder: Within a date, the order of the days, months, and years. dayMonthYear is the only observed value, but one would expect monthDayYear and yearMonthDay to be reasonable as well.
- showYear, yearAbbreviation: Whether to include the year and, if so, whether the year should be shown abbreviated, that is, with only 2 digits. Each is true or false; only values of true and false, respectively, have been observed.
- showMonth, monthFormat: Whether to include the month (true or false) and, if so, how to format it. monthFormat is one of the following:
  - long: The full name of the month, e.g. in an English locale, September.
  - short: The abbreviated name of the month, e.g. in an English locale, Sep.
  - number: The number representing the month, e.g. 9 for September.
  - paddedNumber: A two-digit number representing the month, e.g. 09 for September.
  Only values of true and short, respectively, have been observed.
- dayType: This attribute is always month in the corpus, specifying that the day of the month is to be displayed; a value of year is supposed to indicate that the day of the year, where 1 is January 1, is to be displayed instead.
- hourFormat: When present, this is one of:
  - AMPM: The time is displayed with an am or pm suffix, e.g. 10:15pm.
  - AS_24: The time is displayed in a 24-hour format, e.g. 22:15. This is the only value observed in the corpus.
  - AS_12: The time is displayed in a 12-hour format, without distinguishing morning or evening, e.g. 10:15.
  hourFormat is sometimes present for elapsedTime formats, which is confusing since a time duration does not have a concept of AM or PM. This might indicate a bug in the code that generated the XML in the corpus, or it might indicate that elapsedTime is sometimes used to format a time of day.
For a baseFormat of date, PSPP chooses a print format type based
on the following rules:
- If showQuarter is true: QYR.
- Otherwise, if showWeek is true: WKYR.
- Otherwise, if mdyOrder is dayMonthYear:
  a. If monthFormat is number or paddedNumber: EDATE.
  b. Otherwise: DATE.
- Otherwise, if mdyOrder is yearMonthDay: SDATE.
- Otherwise, ADATE.
For a baseFormat of dateTime, PSPP uses YMDHMS if mdyOrder is
yearMonthDay and DATETIME otherwise. For a baseFormat of time,
PSPP uses DTIME if showDay is true, otherwise TIME if showHour
is true, otherwise MTIME.
For a baseFormat of date, the chosen width is the minimum for the
format type, adding 2 if yearAbbreviation is false or omitted. For
other base formats, the chosen width is the minimum for its type, plus 3
if showSecond is true, plus 4 more if showMillis is also true.
Decimals are 0 by default, or 3 if showMillis is true.
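The date-type selection rules can be summarized in code. This is a sketch, not PSPP's implementation; the function name and the dict representation of the attributes are assumptions:

```python
def date_format_type(attrs):
    """Choose a PSPP date print format type from the attributes of a
    dateTimeFormat with baseFormat="date", per the rules above.
    attrs maps XML attribute names to their string values."""
    if attrs.get("showQuarter") == "true":
        return "QYR"
    if attrs.get("showWeek") == "true":
        return "WKYR"
    if attrs.get("mdyOrder") == "dayMonthYear":
        if attrs.get("monthFormat") in ("number", "paddedNumber"):
            return "EDATE"
        return "DATE"
    if attrs.get("mdyOrder") == "yearMonthDay":
        return "SDATE"
    return "ADATE"
```

For example, mdyOrder="dayMonthYear" with monthFormat="number" selects EDATE, while an element with none of these attributes falls through to ADATE.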
The elapsedTimeFormat Element
elapsedTimeFormat
:baseFormat[dt_base_format]=(date | time | dateTime)
:dayPadding=bool?
:hourPadding=bool?
:minutePadding=bool?
:secondPadding=bool?
:showYear=bool?
:showDay=bool?
:showHour=bool?
:showMinute=bool?
:showSecond=bool?
:showMillis=bool?
=> affix*
This element specifies the way to display a time duration.
Data to be formatted in elapsed time formats is stored as strings in
legacy data, in the format H:MM:SS.SSS, with additional hour digits as
needed for long durations, and must be parsed and reformatted by the
reader.
The following attribute is required.
- baseFormat: Specifies whether a day and a time are both to be displayed, or just one of them.
The remaining attributes specify exactly how to display the elapsed time.
For baseFormat of time, PSPP converts this element to print
format type DTIME; otherwise, if showHour is true, to TIME;
otherwise, to MTIME. The chosen width is the minimum for the chosen
type, adding 3 if showSecond is true, adding 4 more if showMillis is
also true. Decimals are 0 by default, or 3 if showMillis is true.
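The conversion just described can be sketched as follows. The function name, the attribute-dict representation, and the minimum widths in MIN_WIDTH are assumptions for illustration, not values taken from PSPP:

```python
# Assumed minimum widths for each print format type (illustrative only).
MIN_WIDTH = {"DTIME": 8, "TIME": 5, "MTIME": 5}

def elapsed_print_format(attrs):
    """Convert elapsedTimeFormat attributes (a dict of strings) to a
    (type, width, decimals) triple, per the rules above."""
    if attrs.get("baseFormat") == "time":
        ftype = "DTIME"
    elif attrs.get("showHour") == "true":
        ftype = "TIME"
    else:
        ftype = "MTIME"
    width = MIN_WIDTH[ftype]
    if attrs.get("showSecond") == "true":
        width += 3
        if attrs.get("showMillis") == "true":
            width += 4
    decimals = 3 if attrs.get("showMillis") == "true" else 0
    return ftype, width, decimals
```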
The format Element
format
:baseFormat[f_base_format]=(date | time | dateTime | elapsedTime)?
:errorCharacter?
:separatorChars?
:mdyOrder=(dayMonthYear | monthDayYear | yearMonthDay)?
:showYear=bool?
:showQuarter=bool?
:quarterPrefix?
:quarterSuffix?
:yearAbbreviation=bool?
:showMonth=bool?
:monthFormat=(long | short | number | paddedNumber)?
:dayPadding=bool?
:dayOfMonthPadding=bool?
:showWeek=bool?
:weekPadding=bool?
:weekSuffix?
:showDayOfWeek=bool?
:dayOfWeekAbbreviation=bool?
:hourPadding=bool?
:minutePadding=bool?
:secondPadding=bool?
:showDay=bool?
:showHour=bool?
:showMinute=bool?
:showSecond=bool?
:showMillis=bool?
:dayType=(month | year)?
:hourFormat=(AMPM | AS_24 | AS_12)?
:minimumIntegerDigits=int?
:maximumFractionDigits=int?
:minimumFractionDigits=int?
:useGrouping=bool?
:scientific=(onlyForSmall | whenNeeded | true | false)?
:small=real?
:prefix?
:suffix?
:tryStringsAsNumbers=bool?
:negativesOutside=bool?
=> relabel* affix*
This element is the union of all of the more-specific format
elements. It is interpreted in the same way as one of those format
elements, using baseFormat to determine which kind of format to use.
There are a few attributes not present in the more specific formats:
- tryStringsAsNumbers: When this is true, it is supposed to indicate that string values should be parsed as numbers and then displayed according to numeric formatting rules. However, in the corpus it is always false.
- negativesOutside: If true, the negative sign should be shown before the prefix; if false, it should be shown after.
The affix Element
affix
:definesReference=int
:position=(subscript | superscript)
:suffix=bool
:value
=> EMPTY
This defines a suffix (or, theoretically, a prefix) for a formatted value. It is used to insert a reference to a footnote. It has the following attributes:
- definesReference: This specifies the footnote number as a natural number: 1 for the first footnote, 2 for the second, and so on.
- position: Position for the footnote label. Always superscript.
- suffix: Whether the affix is a suffix (true) or a prefix (false). Always true.
- value: The text of the suffix or prefix. Typically a letter, e.g. a for footnote 1, b for footnote 2, ... The corpus contains other values: *, **, and a few that begin with at least one comma: ,b, ,c, ,,b, and ,,c.
The interval Element
interval :style=ref style => labeling footnotes?
labeling
:style=ref style?
:variable=ref (sourceVariable | derivedVariable)
=> (formatting | format | footnotes)*
formatting :variable=ref (sourceVariable | derivedVariable) => formatMapping*
formatMapping :from=int => format?
footnotes
:superscript=bool?
:variable=ref (sourceVariable | derivedVariable)
=> footnoteMapping*
footnoteMapping :definesReference=int :from=int :to => EMPTY
The interval element and its descendants determine the basic
formatting and labeling for the table's cells. These basic styles are
overridden by more specific styles set using
setCellProperties.
The style attribute of interval itself may be ignored.
The labeling element may have a single formatting child. If
present, its variable attribute refers to a variable whose values are
format specifiers as numbers, e.g. value 0x050802 for F8.2. However,
the numbers are not actually interpreted that way. Instead, each number
actually present in the variable's data is mapped by a formatMapping
child of formatting to a format that specifies how to display it.
The labeling element may also have a footnotes child element.
The variable attribute of this element refers to a variable whose
values are comma-delimited strings that list the 1-based indexes of
footnote references. (Cells without any footnote references are numeric
0 instead of strings.)
Each footnoteMapping child of the footnotes element defines the
footnote marker to be its to attribute text for the footnote whose
1-based index is given in its definesReference attribute.
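Parsing one cell of a footnotes variable, as described above, might look like this. The function name is a hypothetical helper:

```python
def footnote_indexes(value):
    """Parse one cell of a footnotes variable: cells without footnote
    references are numeric 0; cells with references are
    comma-delimited strings of 1-based indexes, e.g. "1,3"."""
    if value == 0:
        return []
    return [int(token) for token in str(value).split(",") if token]
```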
The style Element
style
:color=color?
:color2=color?
:labelAngle=real?
:border-bottom=(solid | thick | thin | double | none)?
:border-top=(solid | thick | thin | double | none)?
:border-left=(solid | thick | thin | double | none)?
:border-right=(solid | thick | thin | double | none)?
:border-bottom-color?
:border-top-color?
:border-left-color?
:border-right-color?
:font-family?
:font-size?
:font-weight=(regular | bold)?
:font-style=(regular | italic)?
:font-underline=(none | underline)?
:margin-bottom=dimension?
:margin-left=dimension?
:margin-right=dimension?
:margin-top=dimension?
:textAlignment=(left | right | center | decimal | mixed)?
:labelLocationHorizontal=(positive | negative | center)?
:labelLocationVertical=(positive | negative | center)?
:decimal-offset=dimension?
:size?
:width?
:visible=bool?
=> EMPTY
A style element has an effect only when it is referenced by another
element to set some aspect of the table's style. Most of the attributes
are self-explanatory. The rest are described below.
- color: In some cases, the text color; in others, the background color.
- color2: Not used.
- labelAngle: Normally 0. The value -90 causes inner column or outer row labels to be rotated vertically.
- labelLocationHorizontal: Not used.
- labelLocationVertical: The value positive corresponds to vertically aligning text to the top of a cell, negative to the bottom, center to the middle.
The labelFrame Element
labelFrame :style=ref style => location+ label? paragraph?
paragraph :hangingIndent=dimension? => EMPTY
A labelFrame element specifies content and style for some aspect of
a table. Only labelFrame elements that have a label child are
important. The purpose attribute in the label determines what the
labelFrame affects:
- title: The table's title and its style.
- subTitle: The table's caption and its style.
- footnote: The table's footnotes and the style for the footer area.
- layer: The style for the layer area.
- subSubTitle: Ignored.
The style attribute references the style to use for the area.
The label, if present, specifies the text to put into the title or
caption or footnotes. For footnotes, the label has two text children
for every footnote, each of which has a usesReference attribute
identifying the 1-based index of a footnote. The first, third, fifth,
... text child specifies the content for a footnote; the second,
fourth, sixth, ... child specifies the marker. Content tends to end in
a new-line, which the reader may wish to trim; similarly, markers tend
to end in ".".
The paragraph, if present, may be ignored, since it is always
empty.
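The alternating content/marker pairing described above can be sketched as follows. Here each text child is modeled as a (usesReference, text) tuple; that representation and the function name are assumptions:

```python
def pair_footnotes(text_children):
    """Pair the text children of a footnote label: children alternate
    content, marker, content, marker, ...  Returns a dict mapping each
    1-based footnote index (from usesReference) to (content, marker),
    with the trailing new-line and "." trimmed as suggested above."""
    notes = {}
    for i in range(0, len(text_children) - 1, 2):
        ref, content = text_children[i]
        _, marker = text_children[i + 1]
        notes[ref] = (content.rstrip("\n"), marker.rstrip("."))
    return notes
```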
Legacy Properties
The detail XML format has features for styling most of the aspects of a
table. It also inherits defaults for many aspects from structure XML,
which has the following tableProperties element:
tableProperties
:name?
=> generalProperties footnoteProperties cellFormatProperties borderProperties printingProperties
generalProperties
:hideEmptyRows=bool?
:maximumColumnWidth=dimension?
:maximumRowWidth=dimension?
:minimumColumnWidth=dimension?
:minimumRowWidth=dimension?
:rowDimensionLabels=(inCorner | nested)?
=> EMPTY
footnoteProperties
:markerPosition=(superscript | subscript)?
:numberFormat=(alphabetic | numeric)?
=> EMPTY
cellFormatProperties => cell_style+
any[cell_style]
:alternatingColor=color?
:alternatingTextColor=color?
=> style
style
:color=color?
:color2=color?
:font-family?
:font-size?
:font-style=(regular | italic)?
:font-weight=(regular | bold)?
:font-underline=(none | underline)?
:labelLocationVertical=(positive | negative | center)?
:margin-bottom=dimension?
:margin-left=dimension?
:margin-right=dimension?
:margin-top=dimension?
:textAlignment=(left | right | center | decimal | mixed)?
:decimal-offset=dimension?
=> EMPTY
borderProperties => border_style+
any[border_style]
:borderStyleType=(none | solid | dashed | thick | thin | double)?
:color=color?
=> EMPTY
printingProperties
:printAllLayers=bool?
:rescaleLongTableToFitPage=bool?
:rescaleWideTableToFitPage=bool?
:windowOrphanLines=int?
:continuationText?
:continuationTextAtBottom=bool?
:continuationTextAtTop=bool?
:printEachLayerOnSeparatePage=bool?
=> EMPTY
The name attribute appears only in standalone .stt
files.
SPSS TableLook File Formats
SPSS has a concept called a TableLook to control the styling of pivot
tables in output. SPSS 15 and earlier used .tlo files with a
special binary format to save TableLooks to disk; SPSS 16 and later
use .stt files in an XML format to save them. Both formats expose
roughly the same features, although the older .tlo format does have
some features that .stt does not.
This chapter describes both formats.
The .stt Format
The .stt file format is an XML file that contains a subset of the
SPV structure member format. Its root element is a tableProperties
element.
The .tlo Format
A .tlo file has a custom binary format. This section describes it
using the binary format
conventions used for
SPV binary members. There is one new convention: TLO files express
colors as int32 values in which the low 8 bits are the red
component, the next 8 bits are green, and next 8 bits are blue, and
the high bits are zeros.
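Decoding this color convention is a one-liner. The function name is a hypothetical helper:

```python
def tlo_color(value):
    """Split a TLO int32 color into an (red, green, blue) triple: the
    low 8 bits are red, the next 8 green, the next 8 blue; the high
    bits are zero."""
    return (value & 0xFF, (value >> 8) & 0xFF, (value >> 16) & 0xFF)
```

Note that this byte order is the reverse of the common 0xRRGGBB convention: pure red is stored as 0x0000FF.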
TLO files support various features that SPV files do not. PSPP implements the SPV feature set, so it mostly ignores the added TLO features. The details of this mapping are explained below.
At the top level, a TLO file consists of five sections. The first four are always present and the last one is optional:
TableLook =>
PTTableLook[tl]
PVSeparatorStyle[ss]
PVCellStyle[cs]
PVTextStyle[ts]
V2Styles?
Each section is described below.
PTTableLook
PTTableLook =>
ff ff 00 00 "PTTableLook" (00|02)[version]
int16[flags]
00 00
bool[nested-row-labels] 00
bool[footnote-marker-subscripts] 00
i54 i18
In PTTableLook, version is 00 or 02. The only difference is
that version 00 lacks V2Styles and that version 02
includes it. Both TLO versions are seen in the wild.
flags is a bit-mapped field. Its bits have the following meanings:
- 0x2: If set to 1, hide empty rows and columns; otherwise, show them.
- 0x4: If set to 1, use numeric footnote markers; otherwise, use alphabetic footnote markers.
- 0x8: If set to 1, print all layers; otherwise, print only the current layer.
- 0x10: If set to 1, scale the table to fit the page width; otherwise, break it horizontally if necessary.
- 0x20: If set to 1, scale the table to fit the page length; otherwise, break it vertically if necessary.
- 0x40: If set to 1, print each layer on a separate page (only if all layers are being printed); otherwise, paginate layers naturally.
- 0x80: If set to 1, print a continuation string at the top of a table that is split between pages.
- 0x100: If set to 1, print a continuation string at the bottom of a table that is split between pages.
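A reader might decode flags with a simple bit-mask table. The flag names here are illustrative, not part of the file format:

```python
# Bit masks for the PTTableLook flags field; the names are
# descriptive labels chosen for this sketch.
FLAG_BITS = {
    0x2: "hide_empty",
    0x4: "numeric_footnote_markers",
    0x8: "print_all_layers",
    0x10: "fit_page_width",
    0x20: "fit_page_length",
    0x40: "layer_per_page",
    0x80: "continuation_at_top",
    0x100: "continuation_at_bottom",
}

def decode_flags(flags):
    """Return the set of flag names whose bits are set in flags."""
    return {name for bit, name in FLAG_BITS.items() if flags & bit}
```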
When nested-row-labels is 1, row dimension labels appear nested;
otherwise, they are put into the upper-left corner of the pivot table.
When footnote-marker-subscripts is 1, footnote markers are shown as
subscripts; otherwise, they are shown as superscripts.
PVSeparatorStyle
PVSeparatorStyle =>
ff ff 00 00 "PVSeparatorStyle" 00
Separator*4[sep1]
03 80 00
Separator*4[sep2]
Separator =>
case(
00 00
| 01 00 int32[color] int16[style] int16[width]
)[type]
PVSeparatorStyle contains eight Separators, in two groups. Each
Separator represents a border between pivot table elements. TLO and
SPV files have the same concepts for borders. See Light Member
Borders, for the treatment of borders in
SPV files.
A Separator's type is 00 if the border is not drawn, 01 otherwise.
For a border that is drawn, color is the color that it is drawn in.
style and width have the following meanings:
- style = 0 and 0 ≤ width ≤ 3: An increasingly thick single line. SPV files only have three line thicknesses. PSPP treats width 0 as a thin line, width 1 as a solid (normal width) line, and width 2 or 3 as a thick line.
- style = 1 and 0 ≤ width ≤ 1: A doubled line, composed of normal-width (0) or thick (1) lines. SPV files only have "normal" width double lines, so PSPP maps both variants the same way.
- style = 2: A dashed line.
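The mapping above can be written out directly. This is a sketch; the function name and the stroke names returned are assumptions chosen to mirror the descriptions, not identifiers from PSPP:

```python
def map_separator(style, width):
    """Map a TLO Separator's style and width to a stroke name, per the
    rules above. Unknown combinations fall back to "none"."""
    if style == 0:  # single line of increasing thickness
        if width == 0:
            return "thin"
        if width == 1:
            return "solid"
        return "thick"  # width 2 or 3
    if style == 1:  # doubled line; both widths map the same way
        return "double"
    if style == 2:
        return "dashed"
    return "none"
```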
The first group, sep1, represents the following borders within the
pivot table, by index:
- Horizontal dimension rows
- Vertical dimension rows
- Horizontal category rows
- Vertical category rows
The second group, sep2, represents the following borders within the
pivot table, by index:
- Horizontal dimension columns
- Vertical dimension columns
- Horizontal category columns
- Vertical category columns
PVCellStyle and PVTextStyle
PVCellStyle =>
ff ff 00 00 "PVCellStyle"
AreaColor[title-color]
PVTextStyle =>
ff ff 00 00 "PVTextStyle" 00
AreaStyle[title-style] MostAreas*7[most-areas]
MostAreas =>
06 80
AreaColor[color] 08 80 00 AreaStyle[style]
These sections hold the styling and coloring for each of the 8 areas in a pivot table. They are conceptually similar to the Areas style information in SPV light members.
The styling and coloring for the title area is split between
PVCellStyle and PVTextStyle: the former holds title-color, the
latter holds title-style. The style for the remaining 7 areas is in
most-areas in PVTextStyle, in the following order: layers, corner,
row labels, column labels, data, caption, and footer.
AreaColor =>
00 01 00 int32[color10] int32[color0] byte[shading] 00
AreaColor represents the background color of an area. TLO files, but
not SPV files, describe backgrounds that are a shaded combination of two
colors: a shading of 0 is pure color0, a shading of 10 is pure
color10, and values in between mix the two colors in linear
proportion. SPV files do not support shading, so for 1 ≤ shading
≤ 9 PSPP interpolates RGB values between the two colors to arrive at an
intermediate shade.
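The interpolation might look like this. The function name and the (r, g, b) tuple representation are assumptions:

```python
def shade(color0, color10, shading):
    """Linearly interpolate between two (r, g, b) colors: shading 0
    yields color0, shading 10 yields color10, and values in between
    mix the two in linear proportion."""
    return tuple(
        round(c0 + (c10 - c0) * shading / 10)
        for c0, c10 in zip(color0, color10)
    )
```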
AreaStyle =>
int16[valign] int16[halign] int16[decimal-offset]
int16[left-margin] int16[right-margin] int16[top-margin] int16[bottom-margin]
00 00 01 00
int32[font-size] int16[stretch]
00*2
int32[rotation-angle]
00*4
int16[weight]
00*2
bool[italic] bool[underline] bool[strikethrough]
int32[rtf-charset-number]
byte[x]
byte[font-name-len] byte*[font-name-len][font-name]
int32[text-color]
00*2
AreaStyle represents style properties of an area.
valign has the following values:
| valign | Vertical Alignment |
|---|---|
| 0 | Top |
| 1 | Bottom |
| 2 | Center |
halign has the following values:
| halign | Horizontal Alignment |
|---|---|
| 0 | Left |
| 1 | Right |
| 2 | Center |
| 3 | Mixed |
| 4 | Decimal |
For decimal alignment, decimal-offset is the offset of the decimal
point, in 20ths of a point.
left-margin, right-margin, top-margin, and bottom-margin are
also measured in 20ths of a point.
font-size is negative 96ths of an inch, e.g. 9 point is -12 or
0xfffffff4.
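Converting this to points (72nds of an inch) is simple arithmetic. The function name is a hypothetical helper:

```python
def font_size_points(font_size):
    """Convert an AreaStyle font-size, stored as negative 96ths of an
    inch, to points (72nds of an inch)."""
    return -font_size * 72 / 96
```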
stretch has something to do with font size or stretch. The usual
value is 01 and values larger than that do weird things. A reader can
safely ignore it.
rotation-angle is a font rotation angle. A reader can safely
ignore it.
weight is 400 for a normal-weight font and 700 for bold. (This
is a Windows API convention.)
italic and underline have the obvious meanings. So does
strikethrough, which PSPP ignores.
rtf-charset-number is a character set number from RTF. A reader can
safely ignore it.
The meaning of x is unknown. Values 12, 22, 31, and 32 have been
observed.
The font-name is the name of a font, such as Arial. Only
US-ASCII characters have been observed here.
text-color is the color of the text itself.
V2Styles
V2Styles =>
Separator*11[sep3]
byte[continuation-len] byte*[continuation-len][continuation]
int32[min-col-width] int32[max-col-width]
int32[min-row-height] int32[max-row-height]
This final, optional, part of the TLO file format contains some
additional style information. It begins with sep3, which represents
the following borders within the pivot table, by index:
- 0: Title.
- 1...4: Left, right, top, and bottom inner frame.
- 5...8: Left, right, top, and bottom outer frame.
- 9, 10: Left and top of data area.
When V2Styles is absent, the inner frame borders default to a solid
line and the others listed above to no line.
continuation is the string that goes at the top or bottom of a
table broken across pages. When V2Styles is absent, the default is
(Cont.).
min-col-width is the minimum width that a column will be assigned
automatically. max-col-width is the maximum width that a column
will be assigned to accommodate a long column label. min-row-height
and max-row-height are a similar range for the height of rows.
All of these measurements are in points. When V2Styles is absent,
the defaults are 36 for min-col-width and min-row-height, 72 for
max-col-width, and 120 for max-row-height.
Encrypted File Wrappers
SPSS 21 and later can package multiple kinds of files inside an encrypted wrapper. The wrapper has a common format, regardless of the kind of the file that it contains.
⚠️ Warning: The SPSS encryption wrapper is poorly designed. When the password is unknown, it is much cheaper and faster to decrypt a file encrypted this way than if a well designed alternative were used. If you must use this format, use a 10-byte randomly generated password.
Common Wrapper Format
An encrypted file wrapper begins with the following 36-byte header,
where xxx identifies the type of file encapsulated: SAV for a system
file, SPS for a syntax file, SPV for a viewer file. PSPP code for
identifying these files just checks for the ENCRYPTED keyword at
offset 8, but the other bytes are also fixed in practice:
0000 1c 00 00 00 00 00 00 00 45 4e 43 52 59 50 54 45 |........ENCRYPTE|
0010 44 xx xx xx 15 00 00 00 00 00 00 00 00 00 00 00 |Dxxx............|
0020 00 00 00 00 |....|
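As a sketch of the identification rule above, a reader might check the header like this (the function name is hypothetical; like PSPP, it only checks the ENCRYPTED keyword at offset 8):

```python
def wrapper_type(header: bytes):
    """Identify an encrypted wrapper from its first 36 bytes.

    Checks only for ENCRYPTED at offset 8; the three bytes that follow
    name the encapsulated file type (SAV, SPS, or SPV).
    """
    if len(header) >= 36 and header[8:17] == b"ENCRYPTED":
        return header[17:20].decode("ascii")
    return None
```

A stricter reader could also verify the other fixed bytes shown in the hex dump above.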
Following the fixed header is essentially the regular contents of the encapsulated file in its usual format, with each 16-byte block encrypted with AES-256 in ECB mode.
To make the plaintext an even multiple of 16 bytes in length, the encryption process appends PKCS #7 padding, as specified in RFC 5652 section 6.3. Padding appends 1 to 16 bytes to the plaintext, in which each byte of padding is the number of padding bytes added. If the plaintext is, for example, 2 bytes short of a multiple of 16, the padding is 2 bytes with value 02; if the plaintext is a multiple of 16 bytes in length, the padding is 16 bytes with value 0x10.
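The padding rules can be sketched as follows (a minimal illustration of RFC 5652 section 6.3, not PSPP's own code):

```python
def pkcs7_pad(plaintext: bytes, block: int = 16) -> bytes:
    """Append 1..block padding bytes, each holding the padding count."""
    n = block - len(plaintext) % block   # always 1..block, never 0
    return plaintext + bytes([n]) * n

def pkcs7_unpad(padded: bytes, block: int = 16) -> bytes:
    """Strip and validate PKCS #7 padding."""
    n = padded[-1]
    if not 1 <= n <= block or padded[-n:] != bytes([n]) * n:
        raise ValueError("malformed PKCS #7 padding")
    return padded[:-n]
```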
The AES-256 key is derived from a password in the following way:
- Start from the literal password typed by the user. Truncate it to at
  most 10 bytes, then append as many null bytes as necessary until
  there are exactly 32 bytes. Call this password.
- Let constant be the following 73-byte constant:
  0000 00 00 00 01 35 27 13 cc 53 a7 78 89 87 53 22 11
  0010 d6 5b 31 58 dc fe 2e 7e 94 da 2f 00 cc 15 71 80
  0020 0a 6c 63 53 00 38 c3 38 ac 22 f3 63 62 0e ce 85
  0030 3f b8 07 4c 4e 2b 77 c7 21 f5 1a 80 1d 67 fb e1
  0040 e1 83 07 d8 0d 00 00 01 00
- Compute CMAC-AES-256(password, constant). Call the 16-byte result
  cmac.
- The 32-byte AES-256 key is cmac || cmac, that is, cmac repeated
  twice.
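The password handling in steps 1 and 4 can be sketched as follows. Computing CMAC-AES-256 itself (steps 2 and 3) needs an AES primitive, e.g. from a library such as PyCA cryptography, so this sketch takes the 16-byte CMAC result as given; both function names are hypothetical:

```python
def normalize_password(password: bytes) -> bytes:
    """Step 1: truncate to at most 10 bytes, then null-pad to 32."""
    return password[:10].ljust(32, b"\x00")

def assemble_key(cmac: bytes) -> bytes:
    """Step 4: the AES-256 key is cmac || cmac."""
    assert len(cmac) == 16
    return cmac + cmac
```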
Example
Consider the password pspp. password is:
0000 70 73 70 70 00 00 00 00 00 00 00 00 00 00 00 00 |pspp............|
0010 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
cmac is:
0000 3e da 09 8e 66 04 d4 fd f9 63 0c 2c a8 6f b0 45
The AES-256 key is:
0000 3e da 09 8e 66 04 d4 fd f9 63 0c 2c a8 6f b0 45
0010 3e da 09 8e 66 04 d4 fd f9 63 0c 2c a8 6f b0 45
Checking Passwords
A program reading an encrypted file may wish to verify that the password it was given is the correct one. One way is to verify that the PKCS #7 padding at the end of the file is well formed. However, any plaintext that ends in byte 01 is well formed PKCS #7, meaning that about 1 in 256 keys will falsely pass this test. This might be acceptable for interactive use, but the false positive rate is too high for a brute-force search of the password space.
A better test requires some knowledge of the file format being wrapped, to obtain a "magic number" for the beginning of the file.
- The plaintext of system files begins with $FL2@(#) or $FL3@(#).
- Before encryption, a syntax file is prefixed with a line of the form
  * Encoding: ENCODING., where ENCODING is the encoding used for the
  rest of the file, e.g. windows-1252. Thus, * Encoding may be used as
  a magic number for syntax files.
- The plaintext of viewer files begins with 50 4b 03 04 14 00 08
  (50 4b is PK).
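Combining the padding test with a magic number might look like this sketch (function name hypothetical; it inspects the decrypted first and last 16-byte blocks only):

```python
MAGIC = {
    "SAV": (b"$FL2@(#)", b"$FL3@(#)"),          # system files
    "SPS": (b"* Encoding",),                    # syntax files
    "SPV": (bytes.fromhex("504b0304140008"),),  # viewer files
}

def plausible_password(first_block: bytes, last_block: bytes,
                       kind: str) -> bool:
    """Return True if the decrypted blocks look like a good decrypt."""
    n = last_block[-1]
    if not 1 <= n <= 16 or last_block[-n:] != bytes([n]) * n:
        return False                    # malformed PKCS #7 padding
    return first_block.startswith(MAGIC[kind])
```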
Password Encoding
SPSS also supports what it calls "encrypted passwords."
⚠️ Warning: SPSS "encrypted passwords" are not encrypted. They are encoded with a simple, fixed scheme and can be decoded to the original password using the rules described below.
An encoded password is always a multiple of 2 characters long, and never longer than 20 characters. The characters in an encoded password are always in the graphic ASCII range 33 through 126. Each successive pair of characters in the password encodes a single byte in the plaintext password.
Use the following algorithm to decode a pair of characters:
- Let a be the ASCII code of the first character, and b be the ASCII
  code of the second character.
- Let ah be the most significant 4 bits of a. Find the line in the
  table below that has ah on the left side. The right side of the line
  is a set of possible values for the most significant 4 bits of the
  decoded byte.
  2 ⇒ 2367
  3 ⇒ 0145
  47 ⇒ 89cd
  56 ⇒ abef
- Let bh be the most significant 4 bits of b. Find the line in the
  second table below that has bh on the left side. The right side of
  the line is a set of possible values for the most significant 4 bits
  of the decoded byte. Together with the results of the previous step,
  only a single possibility is left.
  2 ⇒ 139b
  3 ⇒ 028a
  47 ⇒ 46ce
  56 ⇒ 57df
- Let al be the least significant 4 bits of a. Find the line in the
  table below that has al on the left side. The right side of the line
  is a set of possible values for the least significant 4 bits of the
  decoded byte.
  03cf ⇒ 0145
  12de ⇒ 2367
  478b ⇒ 89cd
  569a ⇒ abef
- Let bl be the least significant 4 bits of b. Find the line in the
  table below that has bl on the left side. The right side of the line
  is a set of possible values for the least significant 4 bits of the
  decoded byte. Together with the results of the previous step, only a
  single possibility is left.
  03cf ⇒ 028a
  12de ⇒ 139b
  478b ⇒ 46ce
  569a ⇒ 57df
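The algorithm above can be sketched directly from the four tables (function names hypothetical):

```python
def _table(rows):
    """Expand rows like ("47", "89cd") into a per-nibble lookup."""
    out = {}
    for keys, candidates in rows:
        for k in keys:
            out[int(k, 16)] = set(candidates)
    return out

AH = _table([("2", "2367"), ("3", "0145"), ("47", "89cd"), ("56", "abef")])
BH = _table([("2", "139b"), ("3", "028a"), ("47", "46ce"), ("56", "57df")])
AL = _table([("03cf", "0145"), ("12de", "2367"),
             ("478b", "89cd"), ("569a", "abef")])
BL = _table([("03cf", "028a"), ("12de", "139b"),
             ("478b", "46ce"), ("569a", "57df")])

def decode_pair(c1: str, c2: str) -> str:
    a, b = ord(c1), ord(c2)
    high = AH[a >> 4] & BH[b >> 4]   # exactly one candidate survives
    low = AL[a & 0xF] & BL[b & 0xF]
    assert len(high) == 1 and len(low) == 1
    return chr(int(high.pop() + low.pop(), 16))

def decode_password(encoded: str) -> str:
    return "".join(decode_pair(encoded[i], encoded[i + 1])
                   for i in range(0, len(encoded), 2))
```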
Example
Consider the encoded character pair -|. a is 0x2d and b is
0x7c, so ah is 2, bh is 7, al is 0xd, and bl is 0xc. ah
means that the most significant four bits of the decoded character are
2, 3, 6, or 7, and bh means that they are 4, 6, 0xc, or 0xe. The
single possibility in common is 6, so the most significant four bits
are 6. Similarly, al means that the least significant four bits are
2, 3, 6, or 7, and bl means they are 0, 2, 8, or 0xa, so the least
significant four bits are 2. The decoded character is therefore 0x62,
the letter b.
Portable File Format
These days, most computers use the same internal data formats for integer and floating-point data, if one ignores little differences like big- versus little-endian byte ordering. This has not always been true, particularly in the 1960s or 1970s, when the portable file format originated as a way to exchange data between systems with incompatible data formats.
At the time, even bytes being 8 bits each was not a given. For that reason, the portable file format is a text format, because text files could be exchanged portably among systems slightly more freely. On the other hand, character encoding was not standardized, so exchanging data in portable file format required recoding it from the origin system's character encoding to the destination's.
Some contemporary systems represented text files as sequences of fixed-length (typically 80-byte) records, without new-line sequences. These operating systems padded shorter lines with spaces and truncated longer lines. To tolerate files copied from such systems, which might drop spaces at the ends of lines, the portable file format treats lines less than 80 bytes long as padded with spaces to that length.
The portable file format self-identifies the character encoding on the
system that produced it at the very beginning, in the
header. Since portable files are normally
recoded when they are transported from one system to another, this
identification can be wrong on its face: a file that was started in
EBCDIC, and is then recoded to ASCII, will still say EBCDIC SPSS PORT FILE at the beginning, just in ASCII instead of EBCDIC.
The portable file header also contains a table of all of the characters that it supports. Readers use this to translate each byte of the file into its local encoding. Like the rest of the portable file, the character table is recoded when the file is moved to a system with a different character set so that it remains correct, or at least consistent with the rest of the file.
The portable file format is mostly obsolete. System files are a better alternative.
- Sources
- Portable File Characters
- Portable File Structure
- Splash Strings
- Translation Table
- Tag String
- Version and Date Info Record
- Identification Records
- Variable Count Record
- Precision Record
- Case Weight Variable Record
- Variable Records
- Value Label Records
- Document Record
- Portable File Data
Sources
The information in this chapter is drawn from documentation and source code, including:
- pff.tar.Z, a Fortran program from the 1980s that reads and writes
  portable files. This program contains translation tables from the
  portable character set to EBCDIC and to ASCII.
- A document, now lost, that describes portable file syntax.
It is further informed by a corpus of about 1,400 portable files. The plausible creation dates in the corpus range from 1986 to 2025, in addition to 131 files with alleged creation dates between 1900 and 1907 and 21 files with an invalid creation date.
Portable File Characters
Portable files are arranged as a series of lines of 80 characters each. Each line is terminated by a carriage-return, line-feed sequence ("new-lines"). New-lines are only used to avoid line length limits imposed by some OSes; they are not meaningful.
Most lines in portable files are exactly 80 characters long. The only exception is a line that ends in one or more spaces, in which the spaces may optionally be omitted. Thus, a portable file reader must act as though a line shorter than 80 characters is padded to that length with spaces.
The file must be terminated with a Z character. In addition, if
the final line in the file does not have exactly 80 characters, then it
is padded on the right with Z characters. (The file contents may be
in any character set; the file contains a description of its own
character set, as explained in the next section. Therefore, the Z
character is not necessarily an ASCII Z.)
For the rest of the description of the portable file format,
new-lines and the trailing Zs will be ignored, as if they did not
exist, because they are not an important part of understanding the file
contents.
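The line and padding rules above can be sketched as a normalization step (function name hypothetical). Note that stripping trailing Zs from the whole stream is naive: a real reader instead stops at the end-of-file marker while parsing, since a final string value could itself end in a Z character.

```python
def char_stream(raw: str) -> str:
    """Join a portable file's lines into one character stream:
    pad short lines to 80 columns, drop new-lines, and strip the
    trailing Z terminator and padding (assuming an ASCII file)."""
    text = "".join(line.ljust(80) for line in raw.splitlines())
    return text.rstrip("Z")
```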
Portable File Structure
Every portable file consists of the following records, in sequence:
- Splash strings.
- Version and date info.
- Product identification.
- Author identification (optional).
- Subproduct identification (optional).
- Variable count.
- Precision.
- Case weight variable (optional).
- Variables. Each variable record may optionally be followed by a
  missing value record and a variable label record.
- Value labels (optional).
- Documents (optional).
- Data.
Most records are identified by a single-character tag code. The file header and version info record do not have a tag.
Other than these single-character codes, there are three types of fields in a portable file: floating-point, integer, and string. Floating-point fields have the following format:
- Zero or more leading spaces.
- Optional asterisk (*), which indicates a missing value. The asterisk
  must be followed by a single character, generally a period (.), but
  it appears that other characters may also be possible. This
  completes the specification of a missing value.
- Optional minus sign (-) to indicate a negative number.
- A whole number, consisting of one or more base-30 digits: 0 through
  9 plus capital letters A through T.
- Optional fraction, consisting of a radix point (.) followed by one
  or more base-30 digits.
- Optional exponent, consisting of a plus or minus sign (+ or -)
  followed by one or more base-30 digits.
- A forward slash (/).
Integer fields take a form identical to floating-point fields, but they may not contain a fraction.
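The field grammar above can be sketched as a parser (function name hypothetical; error handling omitted):

```python
import re

DIGITS = "0123456789ABCDEFGHIJKLMNOPQRST"   # base-30 digits

def parse_flt(field):
    """Parse one floating-point field, including the trailing slash.
    Returns None for a missing value ('*' plus one character)."""
    field = field.lstrip(" ")
    if field.startswith("*"):
        return None
    m = re.fullmatch(
        r"(-?)([0-9A-T]+)(?:\.([0-9A-T]+))?(?:([+-])([0-9A-T]+))?/",
        field)
    sign, whole, frac, esign, exp = m.groups()
    value = 0.0
    for d in whole:
        value = value * 30 + DIGITS.index(d)
    if frac:
        scale = 1.0
        for d in frac:
            scale /= 30
            value += DIGITS.index(d) * scale
    if exp:
        e = 0
        for d in exp:
            e = e * 30 + DIGITS.index(d)
        value *= 30.0 ** (e if esign == "+" else -e)
    return -value if sign else value
```

An integer field can reuse the same parser with the fraction disallowed.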
String fields take the form of an integer field having value N, followed by exactly N characters, which are the string content.
Strings longer than 255 bytes exist in the corpus.
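A string field can be consumed like this (function name hypothetical; for brevity the sketch skips the leading spaces that an integer field may carry):

```python
BASE30 = "0123456789ABCDEFGHIJKLMNOPQRST"

def read_string(text: str):
    """Consume one string field at the start of text.
    Returns (value, remaining_text)."""
    slash = text.index("/")
    n = 0
    for digit in text[:slash]:      # base-30 length prefix
        n = n * 30 + BASE30.index(digit)
    rest = text[slash + 1:]
    return rest[:n], rest[n:]
```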
Splash Strings
Every portable file begins with 200 bytes of splash strings that serve
to identify the file's type and its original character set. The 200
bytes are divided into five 40-byte sections, each of which is
supposed to represent the string <CHARSET> SPSS PORT FILE in a
different character set encoding¹, where <CHARSET> is the name of
the character set used in the file, e.g. ASCII or EBCDIC. Each
string is padded on the right with spaces in its respective character
set.
It appears that these strings exist only to inform those who might
view the file on a screen, letting them know what character set the
file is in regardless of how they are viewing it, and that they are
not parsed by SPSS products. Thus, they can be safely ignored. It is
reasonable to simply write out ASCII SPSS PORT FILE five times, each
time padded to 40 bytes.
Translation Table
The splash strings are followed by a 256-byte character set translation table. This segment describes a mapping from the character set used in the portable file to a "portable character set" that does not correspond to any known single-byte character set or code page. Each byte in the table reports the byte value that corresponds to the character represented by its position. The following section lists the character at each position.
For example, position 0x4a (decimal 74) in the portable character set is uppercase letter A (as shown in the table in the following section), so the 75th byte in the table is the value that represents A in the file.
Any real character set will not necessarily include all of the
characters in the portable character set. In the translation table,
omitted characters are written as digit 0².
For example, in practice, all of the control character positions are always written as
0.
The following section describes how the translation table is supposed to act based on looking at the sources, and then the section after that describes what it actually contains in practice.
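Inverting the table for reading can be sketched as below. The PORTABLE dictionary here is an abridged, hypothetical stand-in covering only digits, letters, and a little punctuation; a full reader would cover the whole portable character set from the table in the next section.

```python
# Hypothetical abridged map: portable-set position -> Unicode character.
PORTABLE = {0x40 + i: chr(ord("0") + i) for i in range(10)}        # 0-9
PORTABLE.update({0x4A + i: chr(ord("A") + i) for i in range(26)})  # A-Z
PORTABLE.update({0x64 + i: chr(ord("a") + i) for i in range(26)})  # a-z
PORTABLE.update({0x7E: " ", 0x7F: ".", 0x80: "<", 0x81: "(",
                 0x82: "+", 0x8D: "-", 0x8E: "/"})

def build_decoder(table: bytes) -> dict:
    """Build a file-byte -> character map from the 256-byte
    translation table that follows the splash strings."""
    decoder = {}
    for pos, ch in PORTABLE.items():
        byte = table[pos]
        # Digit 0 marks an unmapped character, except at position
        # 0x40, which is digit zero itself.
        if byte != ord("0") or pos == 0x40:
            decoder[byte] = ch
    return decoder
```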
Theory
The table below shows the portable character set. The columns in the table are:
- "Pos", a position within the portable character set, in hex, from 00
  to FF.
- "EBCDIC", the translation for the given position to EBCDIC, as
  written in pff.tar.Z.
- "ASCII", the translation for the given position to ASCII, as written
  in pff.tar.Z.
- "Unicode", a suggestion for the best translation from this position
  to Unicode.
- "Notes", which links to additional information for some characters.
In addition to the sources previously cited, some of the
information below is drawn from RFC 183, from 1971. This RFC shows
many of the "EBCDIC" hex codes in pff.tar.Z as corresponding to the
descriptions in the document, even though no known EBCDIC codepage
contains those characters with those codes.
| Pos | EBCDIC | ASCII | Unicode | Char | Notes |
|---|---|---|---|---|---|
| 00 | 00 | — | — | — | 3 |
| 01 | 01 | — | — | — | 3 |
| 02 | 02 | — | — | — | 3 |
| 03 | 03 | — | — | — | 3 |
| 04 | 04 | — | — | — | 3 |
| 05 | 05 | — | U+0009 CHARACTER TABULATION | — | 3 |
| 06 | 06 | — | — | — | 3 |
| 07 | 07 | — | — | — | 3 |
| 08 | 08 | — | — | — | 3 |
| 09 | 09 | — | — | — | 3 |
| 0A | 0A | — | — | — | 3 |
| 0B | 0B | — | U+000B LINE TABULATION | — | 3 |
| 0C | 0C | — | U+000C FORM FEED | — | 3 |
| 0D | 0D | — | U+000D CARRIAGE RETURN | — | 3 |
| 0E | 0E | — | — | — | 3 |
| 0F | 0F | — | — | — | 3 |
| 10 | 10 | — | — | — | 3 |
| 11 | 11 | — | — | — | 3 |
| 12 | 12 | — | — | — | 3 |
| 13 | 13 | — | — | — | 3 |
| 14 | 3C | — | — | — | 3 |
| 15 | 15 | — | U+000A LINE FEED | — | 3 |
| 16 | 16 | — | U+0008 BACKSPACE | — | 3 |
| 17 | 17 | — | — | — | 3 |
| 18 | 18 | — | — | — | 3 |
| 19 | 19 | — | — | — | 3 |
| 1A | 1A | — | — | — | 3 |
| 1B | 1B | — | — | — | 3 |
| 1C | 1C | — | — | — | 3 |
| 1D | 1D | — | — | — | 3 |
| 1E | 1E | — | — | — | 3 |
| 1F | 2A | — | — | — | 3 |
| 20 | 20 | — | — | — | 3 |
| 21 | 21 | — | — | — | 3 |
| 22 | 22 | — | — | — | 3 |
| 23 | 23 | — | — | — | 3 |
| 24 | 2B | — | — | — | 3 |
| 25 | 25 | — | U+000A LINE FEED | — | 3 |
| 26 | 26 | — | — | — | 3 |
| 27 | 27 | — | — | — | 3 |
| 28 | 1F | — | — | — | 3 |
| 29 | 24 | — | — | — | 3 |
| 2A | 14 | — | — | — | 3 |
| 2B | 2D | — | — | — | 3 |
| 2C | 2E | — | — | — | 3 |
| 2D | 2F | — | U+0007 BELL | — | 3 |
| 2E | 32 | — | — | — | 3 |
| 2F | 33 | — | — | — | 3 |
| 30 | 34 | — | — | — | 3 |
| 31 | 35 | — | — | — | 3 |
| 32 | 36 | — | — | — | 3 |
| 33 | 37 | — | — | — | 3 |
| 34 | 38 | — | — | — | 3 |
| 35 | 39 | — | — | — | 3 |
| 36 | 3A | — | — | — | 3 |
| 37 | 3B | — | — | — | 3 |
| 38 | 3D | — | — | — | 3 |
| 39 | 3F | — | — | — | 3 |
| 3A | 28 | — | — | — | 3 |
| 3B | 29 | — | — | — | 3 |
| 3C | 2C | — | — | — | 3 |
| 3D | — | — | — | — | 4 |
| 3E | — | — | — | — | 4 |
| 3F | — | — | — | — | 4 |
| 40 | F0 | 30 | U+0030 DIGIT ZERO | 0 | |
| ... | |||||
| 49 | F9 | 39 | U+0039 DIGIT NINE | 9 | |
| 4A | C1 | 41 | U+0041 LATIN CAPITAL LETTER A | A | |
| ... | |||||
| 52 | C9 | 49 | U+0049 LATIN CAPITAL LETTER I | I | |
| 53 | D1 | 4A | U+004A LATIN CAPITAL LETTER J | J | |
| ... | |||||
| 5B | D9 | 52 | U+0052 LATIN CAPITAL LETTER R | R | |
| 5C | E2 | 53 | U+0053 LATIN CAPITAL LETTER S | S | |
| ... | |||||
| 63 | E9 | 5A | U+005A LATIN CAPITAL LETTER Z | Z | |
| 64 | 81 | 61 | U+0061 LATIN SMALL LETTER A | a | |
| ... | |||||
| 6C | 89 | 69 | U+0069 LATIN SMALL LETTER I | i | |
| 6D | 91 | 6A | U+006A LATIN SMALL LETTER J | j | |
| ... |||||
| 75 | 99 | 72 | U+0072 LATIN SMALL LETTER R | r | |
| 76 | A2 | 73 | U+0073 LATIN SMALL LETTER S | s | |
| ... |||||
| 7D | A9 | 7A | U+007A LATIN SMALL LETTER Z | z | |
| 7E | 40 | 20 | U+0020 SPACE | | |
| 7F | 4B | 2E | U+002E FULL STOP | . | |
| 80 | 4C | 3C | U+003C LESS-THAN SIGN | < | |
| 81 | 4D | 28 | U+0028 LEFT PARENTHESIS | ( | |
| 82 | 4E | 2B | U+002B PLUS SIGN | + | |
| 83 | 59 | — | U+007C VERTICAL LINE | \| | 5 |
| 84 | 50 | 26 | U+0026 AMPERSAND | & | |
| 85 | AD | 5B | U+005B LEFT SQUARE BRACKET | [ | |
| 86 | BD | 5D | U+005D RIGHT SQUARE BRACKET | ] | |
| 87 | 5A | 21 | U+0021 EXCLAMATION MARK | ! | |
| 88 | 5B | 24 | U+0024 DOLLAR SIGN | $ | |
| 89 | 5C | 2A | U+002A ASTERISK | * | |
| 8A | 5D | 29 | U+0029 RIGHT PARENTHESIS | ) | |
| 8B | 5E | 3B | U+003B SEMICOLON | ; | |
| 8C | 5F | 5E | U+005E CIRCUMFLEX ACCENT | ^ | |
| 8D | 60 | 2D | U+002D HYPHEN-MINUS | - | |
| 8E | 61 | 2F | U+002F SOLIDUS | / | |
| 8F | 6A | 76 | U+00A6 BROKEN BAR | ¦ | 5 |
| 90 | 6B | 2C | U+002C COMMA | , | |
| 91 | 6C | 25 | U+0025 PERCENT SIGN | % | |
| 92 | 6D | 5F | U+005F LOW LINE | _ | |
| 93 | 6E | 3E | U+003E GREATER-THAN SIGN | > | |
| 94 | 6F | 3F | U+003F QUESTION MARK | ? | |
| 95 | 79 | 60 | U+0060 GRAVE ACCENT | ` | |
| 96 | 7A | 3A | U+003A COLON | : | |
| 97 | 7B | 23 | U+0023 NUMBER SIGN | # | |
| 98 | 7C | 40 | U+0040 COMMERCIAL AT | @ | |
| 99 | 7D | 27 | U+0027 APOSTROPHE | ' | |
| 9A | 7E | 3D | U+003D EQUALS SIGN | = | |
| 9B | 7F | 22 | U+0022 QUOTATION MARK | " | |
| 9C | 8C | — | U+2264 LESS-THAN OR EQUAL TO | ≤ | |
| 9D | 9C | — | U+25A1 WHITE SQUARE | □ | 6 |
| 9E | 9E | — | U+00B1 PLUS-MINUS SIGN | ± | |
| 9F | 9F | — | U+25A0 BLACK SQUARE | ■ | 7 |
| A0 | — | — | U+00B0 DEGREE SIGN | ° | |
| A1 | 8F | — | U+2020 DAGGER | † | |
| A2 | A1 | 7E | U+007E TILDE | ~ | |
| A3 | A0 | — | U+2013 EN DASH | – | |
| A4 | AB | — | U+2514 BOX DRAWINGS LIGHT UP AND RIGHT | └ | 8 |
| A5 | AC | — | U+250C BOX DRAWINGS LIGHT DOWN AND RIGHT | ┌ | 8 |
| A6 | AE | — | U+2265 GREATER-THAN OR EQUAL TO | ≥ | |
| A7 | B0 | — | U+2070 SUPERSCRIPT ZERO | ⁰ | 8 |
| ... | |||||
| B0 | B9 | — | U+2079 SUPERSCRIPT NINE | ⁹ | 8 |
| B1 | BB | — | U+2518 BOX DRAWINGS LIGHT UP AND LEFT | ┘ | 8 |
| B2 | BC | — | U+2510 BOX DRAWINGS LIGHT DOWN AND LEFT | ┐ | 8 |
| B3 | BE | — | U+2260 NOT EQUAL TO | ≠ | |
| B4 | BF | — | U+2014 EM DASH | — | |
| B5 | 8D | — | U+207D SUPERSCRIPT LEFT PARENTHESIS | ⁽ | |
| B6 | 9D | — | U+207E SUPERSCRIPT RIGHT PARENTHESIS | ⁾ | |
| B7 | BE | — | U+207A SUPERSCRIPT PLUS SIGN | ⁺ | 9 |
| B8 | C0 | 7B | U+007B LEFT CURLY BRACKET | { | |
| B9 | D0 | 7D | U+007D RIGHT CURLY BRACKET | } | |
| BA | E0 | 5C | U+005C REVERSE SOLIDUS | \ | |
| BB | 4A | — | U+00A2 CENT SIGN | ¢ | |
| BC | AF | — | U+00B7 MIDDLE DOT | · | 10 |
| BD | — | — | — | — | 4 |
| ... | |||||
| FF | — | — | — | — | 4 |
Summary:
| Range | Characters |
|---|---|
| 40...4F | 0123456789ABCDEF |
| 50...5F | GHIJKLMNOPQRSTUV |
| 60...6F | WXYZabcdefghijkl |
| 70...7F | mnopqrstuvwxyz . |
| 80...8F | <(+\|&[]!$*);^-/¦ |
| 90...9F | ,%_>?`:#@'="≤□±■ |
| A0...AF | °†~–└┌≥⁰¹²³⁴⁵⁶⁷⁸ |
| B0...BC | ⁹┘┐≠—⁽⁾⁺{}\¢· |
Practice: Character Set
The previous section described the translation table in theory. This section describes what it contains in the corpus.
Every file in the corpus is encoded in (extended) ASCII, although 31
of them indicate in their splash strings that they were recoded from
EBCDIC. This also means that ASCII 0 indicates an unmapped
character, that is, one not in the character set represented by the
table.
The files are encoded in different ASCII extensions. Some appear to be encoded in windows-1252, others in code page 437, others in unidentified character sets. The particular code page in use does not matter to a reader that uses the table for mapping.
There are some invariants across the translation tables for every file
in the corpus:
- All control codes (in the range 0 to 63) are unmapped. One
  consequence is that strings in the corpus can never contain
  new-lines. New-lines encoded literally would be problematic anyhow
  because readers must ignore them.
- Digits 0 to 9 and letters A to Z and a to z are correctly mapped.
- Punctuation for space as well as (+&$*);-/,%_?`:@'=\ is correctly
  mapped.
- Characters <!^>"~{} are mapped correctly in almost every file in the
  corpus, with a few outliers.
- Characters [] are mostly correct with a few problems.
- Position 97 is correctly # in most files, and wrongly $ in some.
- The characters at positions 83 | and 8F ¦ have lots of issues,
  stemming from the history described on Wikipedia. In particular,
  EBCDIC and Unicode have separate characters for | and ¦, but ASCII
  does not.
  Most of the corpus leaves 83 | unmapped. Most of the rest map it
  correctly to |. The remainder map it to !.
  Most of the corpus maps 8F ¦ to |. Only a few map it correctly to ¦
  in windows-1252 or (creatively) to ║ in code page 437.
- Characters at the following positions are almost always wrong. The
  table shows:
  - "Character", the character and its position in the portable
    character set.
  - "Unmapped", the number of files in the corpus that leave the
    character unmapped (that is, set to 0).
  - "windows-1252", the number of files that map the character
    correctly in windows-1252. If there is more than one plausible
    mapping, or if the mapping doesn't exactly match the preferred
    Unicode, the entry shows the mapped character.
  - "cp437", the number of files that map the character correctly in
    code page 437. In a few cases, a plausible mapping in the
    "windows-1252" column is an ASCII character. Those aren't
    separately counted in the "cp437" column, even though ASCII maps
    the same way in both encodings.
  - "Wrong", the number of files that map the character to nothing
    that makes sense in a known encoding.

  | Character | Unmapped | windows-1252 | cp437 | Wrong |
  |---|---|---|---|---|
  | 9C ≤ | 1366 | 0 | 10 | 28 |
  | A6 ≥ | 1373 | 0 | 10 | 21 |
  | 9F ■ | 1373 | 0 | 10 | 21 |
  | 9E ± | 1353 | 15 | 15 | 23 |
  | A3 – (en dash) | 1302 | as -: 65 | as ─: 5 | 32 |
  | B4 — (em dash) | 1308 | as -: 65 | as ─: 10 | 21 |
  | A4 └ | 1367 | 0 | 15 | 22 |
  | A5 ┌ | 1367 | 0 | 15 | 22 |
  | B1 ┘ | 1367 | 0 | 15 | 22 |
  | B2 ┐ | 1367 | 0 | 15 | 22 |
  | A8 ¹ | 1286 | as ¹: 15; as 1: 65 | 0 | 38 |
  | A9 ² | 1286 | as ²: 15; as 2: 65 | 15 | 23 |
  | AA ³ | 1286 | as ³: 15; as 3: 65 | 0 | 38 |
  | AB ⁴ | 1308 | as 4: 65 | 0 | 31 |
  | ... | ... | ... | ... | ... |
  | B0 ⁹ | 1308 | as 9: 65 | 0 | 31 |
  | B3 ≠ | 1373 | 0 | as ╪: 10 | 21 |
  | B6 ⁽ | 1308 | 0 | 0 | 96 |
  | B7 ⁾ | 1373 | 0 | 0 | 31 |
  | BB ¢ | 1351 | 16 | 10 | 27 |
  | BC · | 1357 | as ·: 16; as ×: 1 | as ∙: 10 | 20 |
  | A0 ° | 1382 | as °: 15; as º: 1 | 5 | 6 |
- Characters at the following positions are always unmapped or wrong:

  | Character | Unmapped | windows-1252 | cp437 | Wrong |
  |---|---|---|---|---|
  | 9D □ | 1373 | 0 | as ╬: 10 | 21 |
  | A1 † | 1364 | 0 | as ┼: 10 | 30 |
  | A7 ⁰ | 1373 | as Ø: 1 | 0 | 30 |
  | B7 ⁺ | 1373 | 0 | 0 | 31 |
- Sometimes the reserved characters are mapped (not in any obviously
  useful way).
Practice: Characters in Use
The previous section reported on the character sets defined in the translation table in the corpus. This section reports on the characters actually found in the corpus.
In practice, characters in the corpus are in ISO-8859-1, with very few exceptions. The exceptions are a handful of files that either use reserved characters from the portable character set, for unclear reasons, or declare surprising encodings for bytes in the normal ASCII range. These exceptions might be file corruption; they do not appear to be useful.
As a result, a portable file reader could reasonably ignore the translation table and simply interpret all portable files as ISO-8859-1 or windows-1252.
There is no visible distinction in practice between portable files in "communication" versus "tape" format. Neither kind contains control characters.
Files in the corpus have a mix of CRLF and LF-only line ends.
Tag String
The translation table is followed by an 8-byte tag string that
consists of the exact characters SPSSPORT in the portable file's
character set. This can be used to verify that the file is indeed a
portable file.
Since every file in the corpus is encoded in (extended) ASCII, this string always appears in ASCII too.
Version and Date Info Record
This record does not have a tag code. It has the following structure:
- A single character identifying the file format version. It is
  always A.
- An 8-character string field giving the file creation date in the
  format YYYYMMDD.
- A 6-character string field giving the file creation time in the
  format HHMMSS.
In the corpus, there is some variation for file creation dates and
times by product:
- STAT/TRANSFER often writes dates that are invalid (e.g. 20040931)
  or obviously wrong (e.g. 19040823, 19000607).
- STAT/TRANSFER often writes the time as all spaces.
- IBM SPSS Statistics 19.0 (and probably other versions) writes HH as
  H for single-digit hours.
- SPSS 6.1 for the Power Macintosh writes invalid dates such as
  19:11010.
Identification Records
The product identification record has tag code 1. It consists of a
single string field giving the name of the product that wrote the
portable file.
The author identification record has tag code 2. It is optional and
usually omitted. If present, it consists of a single string field
giving the name of the person who caused the portable file to be
written.
The corpus contains a few different kinds of authors:
- Organizational names, such as the names of companies or universities
  or their departments.
- Product names, such as SPSS for HP-UX.
- Internet host names, such as icpsr.umich.edu.
The subproduct identification record has tag code 3. It is optional
and usually omitted. If present, it consists of a single string field
giving additional information on the product that wrote the portable
file.
The corpus contains a few different kinds of subproducts:
- x86_64-w64-mingw32 or another target triple (written by PSPP).
- A file name for a .sav file.
- SPSS/PC+ Studentware+ written by SPSS for MS WINDOWS Release 7.0 in
  1996.
- FILE BUILT VIA IMPORT written by SPSS RELEASE 4.1 FOR VAX/VMS in
  1998.
- SPSS/PC+ written by SPSS for MS WINDOWS Release 7.0 in 1996.
- Multiple instances of SPSS/PC+ written by SPSS/PC+ on IBM PC, but
  with several spaces padding out both product and subproduct fields.
- PFF TEST FILE written by SPSS-X RELEASE 2.1 FOR IBM VM/CMS in 1986.
Variable Count Record
The variable count record has tag code 4. It consists of a single
integer field giving the number of variables in the file dictionary.
Precision Record
The precision record has tag code 5. It consists of a single integer
field specifying the maximum number of base-30 digits used in data in
the file.
Case Weight Variable Record
The case weight variable record is optional. If it is present, it
indicates the variable used for weighting cases; if it is absent, cases
are unweighted. It has tag code 6. It consists of a single string
field that names the weighting variable.
Variable Records
Each variable record represents a single variable. Variable records
have tag code 7. They have the following structure:
- Width (integer). This is 0 for a numeric variable. For portability
  to old versions of SPSS, it should be between 1 and 255 for a string
  variable.
  Portable files in the corpus contain strings as wide as 32000 bytes.
  None of these was written by SPSS itself, but by a variety of
  third-party products: STAT/TRANSFER, inquery export tool (c) inworks
  GmbH, and QDATA Data Entry System for the IBM PC. The creation dates
  in the files range from 2016 to 2024.
- Name (string). 1-8 characters long. Must be in all capitals.
  A few portable files that contain duplicate variable names have been
  spotted in the wild. PSPP handles these by renaming the duplicates
  with numeric extensions: VAR001, VAR002, and so on.
- Print format. This is a set of three integer fields:
  - Format type, encoded the same as in system files.
  - Format width. 1-40.
  - Number of decimal places. 1-40.
  A few portable files with invalid format types or formats that are
  not of the appropriate width or decimals for their variables have
  been spotted in the wild. PSPP assigns a default F or A format to a
  variable with an invalid format.
- Write format. Same structure as the print format described above.
Each variable record can optionally be followed by a missing value
record, which has tag code 8. A missing value record has one field,
the missing value itself (a floating-point or string, as appropriate).
Up to three of these missing value records can be used.
There are also records for missing value ranges:
- Tag code B for X THRU Y ranges. It is followed by two floating-point
  values representing X and Y.
- Tag code 9 for LO THRU Y ranges, followed by a floating-point number
  representing Y.
- Tag code A for X THRU HI ranges, followed by a floating-point number
  representing X.
If a missing value range is present, it may be followed by a single missing value record.
In addition, each variable record can optionally be followed by a
variable label record, which has tag code C. A variable label record
has one field, the variable label itself (string).
Value Label Records
Value label records have tag code D. They have the following format:
- Variable count (integer).
- List of variables (strings). The variable count specifies the number
  in the list. Variables are specified by their names. All variables
  must be of the same type (numeric or string), but string variables
  do not necessarily have the same width.
- Label count (integer).
- List of (value, label) tuples. The label count specifies the number
  of tuples. Each tuple consists of a value, which is numeric or
  string as appropriate to the variables, followed by a label
  (string).
The corpus contains a few portable files that specify duplicate value labels, that is, two different labels for a single value of a single variable. PSPP uses the last value label specified in these cases.
Document Record
One document record may optionally follow the value label record. The
document record consists of tag code E, followed by the number of
document lines as an integer, followed by that number of strings, each
of which represents one document line. Document lines must be 80 bytes
long or shorter.
Portable File Data
The data record has tag code F. There is only one tag for all the
data; thus, all the data must follow the dictionary. The data is
terminated by the end-of-file marker Z, which is not valid as the
beginning of a data element.
Data elements are output in the same order as the variable records describing them. String variables are output as string fields, and numeric variables are output as floating-point fields.
1. The strings are supposed to be in EBCDIC, 7-bit ASCII, CDC 6-bit
   ASCII, 6-bit ASCII, and Honeywell 6-bit ASCII. (It is somewhat
   astonishing that anyone considered the possibility of 6-bit
   "ASCII", or that there were at least three incompatible versions of
   it.)
2. Character 0, not NUL or byte zero.
3. From the EBCDIC translation table in pff.tar.Z. The ASCII
   translation table leaves all of them undefined. Code points are
   only listed for common control characters with some modern
   relevance.
5. The document describes 83 as "a solid vertical pipe" and 8F as "a
   broken vertical pipe". Even though the ASCII translation table in
   pff.tar.Z leaves position 83 undefined and translates 8F to U+007C
   VERTICAL LINE, using U+007C VERTICAL LINE and U+00A6 BROKEN BAR,
   respectively, seems more accurate in a Unicode environment.
6. Unicode inferred from document description as "empty box".
7. Unicode inferred from document description as "filled box".
8. These characters are as described in the document. Some of these
   don't appear in any known EBCDIC code page, but the EBCDIC
   translations given in pff.tar.Z match the graphics shown in RFC 183
   with those hex codes.
9. Described in document as "horizontal dagger", which doesn't appear
   in Unicode or any known code page. This interpretation from RFC 183
   seems more likely.
10. Unicode inferred from document description as "centered dot, or
    bullet".
SPSS/PC+ System File Format
SPSS/PC+, first released in 1984, was a simplified version of SPSS for IBM PC and compatible computers. It used a data file format related to the one described in the previous chapter, but simplified and incompatible. The SPSS/PC+ software became obsolete in the 1990s, so files in this format are rarely encountered today. Nevertheless, for completeness, and because it is not very difficult, it seems worthwhile to support at least reading these files. This chapter documents this format, based on examination of a corpus of about 60 files from a variety of sources.
System files use four data types: 8-bit characters, 16-bit unsigned
integers, 32-bit unsigned integers, and 64-bit floating points, called
here char, uint16, uint32, and flt64, respectively. Data is not
necessarily aligned on a word or double-word boundary.
SPSS/PC+ ran only on IBM PC and compatible computers. Therefore, values in these files are always in little-endian byte order. Floating-point numbers are always in IEEE 754 format.
SPSS/PC+ system files represent the system-missing value as
-1.66e308, or f5 1e 26 02 8a 8c ed ff expressed as hexadecimal. (This
is an unusual choice: it is close to, but not equal to, the most
negative finite 64-bit IEEE 754 value, which is about -1.8e308.)
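As a quick check, the quoted byte sequence can be decoded with Python's struct module (a sketch; the constant name is our own):

```python
import struct

# The eight bytes quoted above, in the file's little-endian order.
SYSMIS_BYTES = bytes.fromhex("f51e26028a8cedff")

# Decode as a little-endian IEEE 754 double.
sysmis, = struct.unpack("<d", SYSMIS_BYTES)

# sysmis is approximately -1.66e308: a finite value, slightly greater
# than the most negative representable double (about -1.8e308).
```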
Text in SPSS/PC+ system files is encoded in ASCII-based 8-bit MS-DOS codepages. The files in the corpus used to investigate the format were all ASCII-only.
An SPSS/PC+ system file begins with the following 256-byte directory:
uint32 two;
uint32 zero;
struct {
uint32 ofs;
uint32 len;
} records[15];
char filename[128];
-
uint32 two;
uint32 zero;
Always set to 2 and 0, respectively. These fields could be used as a signature for the file format, but the product field in record 0 seems more likely to be unique. -
struct { ... } records[15];
Each of the elements in this array identifies a record in the system file. The ofs is a byte offset, from the beginning of the file, that identifies the start of the record. The len specifies the length of the record, in bytes. Many records are optional or not used. If a record is not present, ofs and len for that record are both zero. -
char filename[128];
In most files in the corpus, this field is entirely filled with spaces or null bytes. In others, it contains a filename, which generally contains doubled backslashes, e.g. c:\\doli\\altm\\f_sum94.sys. The unusual extension (_) is also common, e.g. DER56.(_).
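A minimal sketch of reading this directory in Python (the function name and error handling are our own choices):

```python
import struct

def read_directory(data: bytes):
    """Parse the 256-byte directory at the start of an SPSS/PC+ file.

    Returns the fifteen (ofs, len) record locators and the trailing
    filename field.  Raises if the two leading fields are not 2 and 0.
    """
    two, zero = struct.unpack_from("<II", data, 0)
    if (two, zero) != (2, 0):
        raise ValueError("leading fields are not 2 and 0")
    # Fifteen (ofs, len) pairs follow the two leading uint32 fields.
    records = [struct.unpack_from("<II", data, 8 + 8 * i) for i in range(15)]
    # The filename field occupies the final 128 bytes of the directory.
    filename = data[128:256].rstrip(b" \0")
    return records, filename
```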
The following sections describe the contents of each record,
identified by the index into the records array.
- Record 0: Main Header Record
- Record 1: Variables Record
- Record 2: Labels Record
- Record 3: Data Record
- Records 4 and 5: Data Entry
Record 0: Main Header Record
All files in the corpus have this record at offset 0x100 with length
0xb0 (but readers should find this record, like the others, via the
records table in the directory). Its format is:
uint16 one0;
char family[2];
char product[60];
flt64 sysmis;
uint32 zero0;
uint32 zero1;
uint16 one1;
uint16 compressed;
uint16 nominal_case_size;
uint16 n_cases0;
uint16 weight_index;
uint16 unknown;
uint16 n_cases1;
uint16 zero2;
char creation_date[8];
char creation_time[8];
char file_label[64];
-
uint16 one0;
uint16 one1;
Always set to 1. -
uint32 zero0;
uint32 zero1;
uint16 zero2;
Always set to 0. -
uint16 unknown;
Unknown meaning. Usually set to 0. -
char family[2];
Identifies the product family that created the file. This is either PC for SPSS/PC+ and related software, or DE for SPSS Data Entry and related software. -
char product[60];
Name of the program that created the file. Only the following unique values have been observed, in each case padded on the right with spaces:
- SPSS/PC+ System File Written by Data Entry II
- SPSS SYSTEM FILE. IBM PC DOS, SPSS/PC+
- SPSS SYSTEM FILE. IBM PC DOS, SPSS/PC+ V3.0
- SPSS SYSTEM FILE. IBM PC DOS, SPSS for Windows
Thus, it is reasonable to use the presence of the string SPSS at offset 0x104 as a simple test for an SPSS/PC+ data file. -
flt64 sysmis;
The system-missing value, as described previously. -
uint16 compressed;
Set to 0 if the data in the file is not compressed, 1 if the data is compressed with simple bytecode compression. The corpus contains a mix of compressed and uncompressed files.
-
uint16 nominal_case_size;
Number of data elements per case. This is the number of variables, except that long string variables add extra data elements (one for every 8 bytes after the first 8). String variables in SPSS/PC+ system files are limited to 255 bytes. -
uint16 n_cases0;
uint16 n_cases1;
The number of cases in the data record. Both values are the same. Readers must use these case counts because some files in the corpus contain garbage that somewhat resembles data after the specified number of cases.
-
uint16 weight_index;
0, if the file is unweighted, otherwise a 1-based index into the data record of the weighting variable, e.g. 4 for the first variable after the 3 system-defined variables. -
char creation_date[8];
The date that the file was created, in mm/dd/yy format. Single-digit days and months are not prefixed by zeros. The string is padded with spaces on right or left or both, e.g. _2/4/93_, 10/5/87_, and _1/11/88 (with _ standing in for a space) are all actual examples from the corpus. -
char creation_time[8];
The time that the file was created, in HH:MM:SS format. Single-digit hours are padded on the left with a space. Minutes and seconds are always written as two digits.
-
char file_label[64];
File label declared by the user, if any. Padded on the right with spaces.
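Putting the fields above together, record 0 can be unpacked with a single struct call. This is a sketch; the field packing follows the layout above, and the function name and returned dictionary keys are our own:

```python
import struct

# Record 0 layout from above: 176 (0xb0) bytes, unaligned, little-endian.
MAIN_HEADER = struct.Struct("<H2s60sdII8H8s8s64s")
assert MAIN_HEADER.size == 0xb0

def read_main_header(data: bytes, ofs: int = 0) -> dict:
    (one0, family, product, sysmis, zero0, zero1,
     one1, compressed, nominal_case_size, n_cases0,
     weight_index, unknown, n_cases1, zero2,
     creation_date, creation_time, file_label) = MAIN_HEADER.unpack_from(data, ofs)
    # The simple signature test suggested above: "SPSS" at the start of
    # the product field (file offset 0x104 when the record is at 0x100).
    if not product.startswith(b"SPSS"):
        raise ValueError("not an SPSS/PC+ system file")
    return {
        "family": family.decode("ascii"),
        "compressed": bool(compressed),
        "nominal_case_size": nominal_case_size,
        "n_cases": n_cases0,                 # n_cases1 holds the same value
        "weight_index": weight_index,
        "creation_date": creation_date.decode("ascii").strip(),
        "creation_time": creation_time.decode("ascii").strip(),
        "file_label": file_label.decode("ascii").rstrip(),
    }
```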
Record 1: Variables Record
The variables record most commonly starts at offset 0x1b0, but it can be placed elsewhere. The record contains instances of the following 32-byte structure:
uint32 value_label_start;
uint32 value_label_end;
uint32 var_label_ofs;
uint32 format;
char name[8];
union {
flt64 f;
char s[8];
} missing;
The number of instances is the nominal_case_size specified in the
main header record. There is one instance for each numeric variable
and each string variable with width 8 bytes or less. String variables
wider than 8 bytes have one instance for each 8 bytes, rounding up.
The first instance for a long string specifies the variable's correct
dictionary information. Subsequent instances for a long string are
generally filled with all-zero bytes, although the missing field
contains the numeric system-missing value, and some writers also fill
in var_label_ofs, format, and name, sometimes filling the latter
with the numeric system-missing value rather than a text string.
Regardless of the values used, readers should ignore the contents of
these additional instances for long strings.
-
uint32 value_label_start;
uint32 value_label_end;
These specify offsets into the label record of the start and end of value labels for this variable. They are zero if there are no value labels. See the labels record, for more information. A long string variable may not have value labels. Sometimes, instead of value labels, the data holds some form of data validation rules for SPSS Data Entry. There is no known way to distinguish, except that data validation rules often cannot be interpreted as valid value labels because the label length field makes them not fit exactly in the allocated space.
It appears that SPSS products cannot properly read these either. All the files in the corpus with these problems are closely related, so it's also possible that they are corrupted in some way.
-
uint32 var_label_ofs;
For a variable with a variable label, this specifies an offset into the label record. See the labels record, for more information. For a variable without a variable label, this is zero.
-
uint32 format;
The variable's output format, in the format used in system files. SPSS/PC+ system files only use format types 5 (F, for numeric variables) and 1 (A, for string variables). -
char name[8];
The variable's name, padded on the right with spaces. -
union { ... } missing;
A user-missing value. For numeric variables, missing.f is the variable's user-missing value. For string variables, missing.s is a string missing value. A variable without a user-missing value is indicated with missing.f set to the system-missing value, even for string variables (!). A long string variable may not have a missing value.
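A sketch of unpacking one 32-byte variable record. Decoding format into (type, width, decimals) assumes the usual system-file packing, with the format type, field width, and decimal count in successively less significant bytes; the function name and dictionary keys are our own:

```python
import struct

# One 32-byte variable record, as laid out above.
VAR_RECORD = struct.Struct("<IIII8s8s")

def read_variable(data: bytes, ofs: int = 0) -> dict:
    (value_label_start, value_label_end, var_label_ofs,
     fmt, name, missing) = VAR_RECORD.unpack_from(data, ofs)
    # Assumed system-file packing of the output format: type, width,
    # and decimals occupy successively lower bytes of the uint32.
    fmt_type, width, decimals = (fmt >> 16) & 0xff, (fmt >> 8) & 0xff, fmt & 0xff
    return {
        "name": name.decode("ascii").rstrip(),
        "format": (fmt_type, width, decimals),
        "is_string": fmt_type == 1,        # type 1 is A, type 5 is F
        "value_labels": (value_label_start, value_label_end),
        "var_label_ofs": var_label_ofs,
        "missing": missing,                # interpret per the variable type
    }
```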
In addition to the user-defined variables, every SPSS/PC+ system file
contains, as its first three variables, the following system-defined
variables, in the following order. The system-defined variables have
no variable label, value labels, or missing values. PSPP renames
these variables to start with @ when it reads an SPSS/PC+ system
file.
-
$CASENUM
A numeric variable with format F8.0. Most of the time this is a sequence number, starting with 1 for the first case and counting up for each subsequent case. Some files skip over values, which probably reflects cases that were deleted. -
$DATE
A string variable with format A8. Same format (including varying padding) as the creation_date field in the main header record. The actual date can differ from creation_date and from record to record. This may reflect when individual cases were added or updated. -
$WEIGHT
A numeric variable with format F8.2. This represents the case's weight. If weighting has not been enabled, every case has value 1.0.
Record 2: Labels Record
The labels record holds value labels and variable labels. Unlike the other records, it is not meant to be read directly and sequentially. Instead, this record must be interpreted one piece at a time, by following pointers from the variables record.
The value_label_start, value_label_end, and var_label_ofs
fields in a variable record are all offsets relative to the beginning of
the labels record, with an additional 7-byte offset. That is, if the
labels record starts at byte offset labels_ofs and a variable has a
given var_label_ofs, then the variable label begins at byte offset
labels_ofs + var_label_ofs + 7 in the file.
A variable label, starting at the offset indicated by
var_label_ofs, consists of a one-byte length followed by the specified
number of bytes of the variable label string, like this:
uint8 length;
char s[length];
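For example, one variable label can be read given the offsets described above (a sketch, assuming an ASCII label):

```python
def read_variable_label(data: bytes, labels_ofs: int, var_label_ofs: int) -> str:
    """Read one variable label: a length byte followed by that many
    bytes, at labels_ofs + var_label_ofs + 7 as described above."""
    start = labels_ofs + var_label_ofs + 7
    length = data[start]
    return data[start + 1:start + 1 + length].decode("ascii")
```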
A set of value labels, extending from value_label_start to
value_label_end (exclusive), consists of a numeric or string value
followed by a string in the format just described. String values are
padded on the right with spaces to fill the 8-byte field, like this:
union {
flt64 f;
char s[8];
} value;
uint8 length;
char s[length];
The labels record begins with a pair of uint32 values. The first of
these is always 3. The second is between 8 and 16 less than the
number of bytes in the record. Neither value is important for
interpreting the file.
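A sketch of walking one variable's value labels, applying the same 7-byte adjustment as for variable labels (the function name and the ASCII decoding are our own choices):

```python
import struct

def read_value_labels(data: bytes, labels_ofs: int,
                      start: int, end: int, is_string: bool) -> dict:
    """Collect the (value, label) pairs between value_label_start and
    value_label_end for one variable."""
    pos = labels_ofs + start + 7       # same 7-byte adjustment as above
    stop = labels_ofs + end + 7        # value_label_end is exclusive
    labels = {}
    while pos < stop:
        raw = data[pos:pos + 8]        # 8-byte numeric or string value
        value = raw.decode("ascii") if is_string else struct.unpack("<d", raw)[0]
        length = data[pos + 8]         # one-byte label length
        labels[value] = data[pos + 9:pos + 9 + length].decode("ascii")
        pos += 9 + length
    return labels
```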
Record 3: Data Record
The format of the data record varies depending on the value of
compressed in the file header record:
-
0: no compression
Data is arranged as a series of 8-byte elements, one per variable instance in the variables record. Numeric values are given in flt64 format; string values are literal character strings, padded on the right with spaces when necessary to fill out 8-byte units. -
1: bytecode compression
The first 8 bytes of the data record are divided into a series of 1-byte command codes. These codes have meanings as described below: -
0
The system-missing value. -
1
A numeric or string value that is not compressible. The value is stored in the 8 bytes following the current block of command bytes. If this value appears twice in a block of command bytes, then it indicates the second group of 8 bytes following the command bytes, and so on. -
2 through 255
A number with value CODE - 100, where CODE is the value of the compression code. For example, code 105 indicates a numeric variable of value 5.
The end of the 8-byte group of command codes is followed by any 8-byte blocks of non-compressible values indicated by code 1. After that follows another 8-byte group of command codes, then those command codes' non-compressible values. The pattern repeats until the number of cases specified by the main header record has been read.
The corpus does not contain any files with command codes 2 through 95, so it is possible that some of these codes are used for special purposes.
-
Cases of data often, but not always, fill the entire data record. Readers should stop reading after the number of cases specified in the main header record. Otherwise, readers may try to interpret garbage following the data as additional cases.
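The bytecode decompression described above can be sketched as follows. It expands command blocks into one value per data element; the names and the choice to return literal values as raw 8-byte strings are our own:

```python
import struct

# The system-missing value, as given in the main header discussion.
SYSMIS = struct.unpack("<d", bytes.fromhex("f51e26028a8cedff"))[0]

def decompress(data: bytes, n_elements: int) -> list:
    """Expand a bytecode-compressed data record into a flat list of
    data elements: floats for codes 0 and 2-255, raw 8-byte strings
    for code 1.  n_elements is cases times elements per case."""
    out, pos = [], 0
    while len(out) < n_elements:
        codes = data[pos:pos + 8]      # one block of 8 command codes
        pos += 8
        lit_pos = pos                  # literal values follow the block
        pos += 8 * sum(1 for c in codes if c == 1)
        for code in codes:
            if len(out) >= n_elements:
                break                  # ignore padding past the last case
            if code == 0:
                out.append(SYSMIS)     # system-missing value
            elif code == 1:
                out.append(data[lit_pos:lit_pos + 8])
                lit_pos += 8           # next non-compressible value
            else:
                out.append(float(code - 100))
    return out
```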
Records 4 and 5: Data Entry
Records 4 and 5 appear to be related to SPSS/PC+ Data Entry.