File Input
Overview
The file_input node lets you read files and publish their content in your job. Here is an example:
job: [
  {
    type: file_input
    component: input
    settings: {
      // Supported formats are:
      // text, csv, json, parquet, orc, jdbc, libsvm
      format: csv
      // The name of the file specified in the spark.files parameter below.
      // This node is mostly useful to develop simple pml jobs.
      file_name: AAPL.csv
    }
    publish: [
      {
        stream: data
      }
    ]
  }
],
spark_settings: {
  // Location of the input file. That path must be reachable
  // from wherever spark runs, i.e. every spark node.
  // You can also use a relative path like './AAPL.csv' as long
  // as you launch your pml in foreground mode from the same directory.
  spark.files: /tmp/AAPL.csv
}
Options
Each format supports dedicated options that you can set in an additional options dictionary.
Important
Option values are all of type String. Make sure you use "true" instead of true.
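For example, here is a minimal sketch of a csv input node carrying an options dictionary. The header option and the placement of options inside settings are assumptions for illustration; adapt them to your own job:

job: [
  {
    type: file_input
    component: input
    settings: {
      format: csv
      file_name: AAPL.csv
      // Format-specific options go in this dictionary.
      options: {
        // Note the quotes: "true", not true.
        header: "true"
      }
    }
    publish: [
      {
        stream: data
      }
    ]
  }
]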
Text
You can set the following text-specific option(s) for reading text files:
- wholetext (default false): if true, read a file as a single row rather than splitting it on "\n".
- lineSep (default covers all \r, \r\n and \n): defines the line separator that should be used for parsing.
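As a hedged sketch, the settings of a text input node that reads each file as one row might look like this (the file name README.txt is illustrative only):

settings: {
  format: text
  file_name: README.txt
  options: {
    // Read the whole file as a single row instead of splitting on newlines.
    wholetext: "true"
  }
}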
Json
You can set the following JSON-specific options to deal with non-standard JSON files:
- primitivesAsString (default false): infers all primitive values as a string type
- prefersDecimal (default false): infers all floating-point values as a decimal type. If the values do not fit in decimal, then it infers them as doubles.
- allowComments (default false): ignores Java/C++ style comments in JSON records
- allowUnquotedFieldNames (default false): allows unquoted JSON field names
- allowSingleQuotes (default true): allows single quotes in addition to double quotes
- allowNumericLeadingZeros (default false): allows leading zeros in numbers (e.g. 00012)
- allowBackslashEscapingAnyCharacter (default false): allows accepting quoting of all characters using the backslash quoting mechanism
- allowUnquotedControlChars (default false): allows JSON strings to contain unquoted control characters (ASCII characters with value less than 32, including tab and line feed characters)
- mode (default PERMISSIVE): allows a mode for dealing with corrupt records during parsing.
  - PERMISSIVE: when it meets a corrupted record, puts the malformed string into a field configured by columnNameOfCorruptRecord, and sets other fields to null. To keep corrupt records, a user can set a string type field named columnNameOfCorruptRecord in a user-defined schema. If a schema does not have the field, it drops corrupt records during parsing. When inferring a schema, it implicitly adds a columnNameOfCorruptRecord field to the output schema.
  - DROPMALFORMED: ignores whole corrupted records.
  - FAILFAST: throws an exception when it meets corrupted records.
- columnNameOfCorruptRecord (default is the value specified in spark.sql.columnNameOfCorruptRecord): allows renaming the new field holding the malformed string created by PERMISSIVE mode. This overrides spark.sql.columnNameOfCorruptRecord.
- dateFormat (default yyyy-MM-dd): sets the string that indicates a date format. Custom date formats follow the formats at java.text.SimpleDateFormat. This applies to date type.
- timestampFormat (default yyyy-MM-dd'T'HH:mm:ss.SSSXXX): sets the string that indicates a timestamp format. Custom date formats follow the formats at java.text.SimpleDateFormat. This applies to timestamp type.
- multiLine (default false): parse one record, which may span multiple lines, per file
- encoding (not set by default): forcibly sets one of the standard basic or extended encodings for the JSON files, for example UTF-16BE or UTF-32LE. If the encoding is not specified and multiLine is set to true, it is detected automatically.
- lineSep (default covers all \r, \r\n and \n): defines the line separator that should be used for parsing.
- samplingRatio (default 1.0): defines the fraction of input JSON objects used for schema inferring.
- dropFieldIfAllNull (default false): whether to ignore columns of all null values or empty array/struct during schema inference.
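For instance, here is a sketch of the settings of a JSON input node combining a few of these options. The file name events.json and the chosen option values are illustrative assumptions, not requirements:

settings: {
  format: json
  file_name: events.json
  options: {
    // One JSON record may span several lines.
    multiLine: "true"
    // Tolerate // and /* */ comments in the records.
    allowComments: "true"
    // Keep corrupt records in a dedicated column instead of failing.
    mode: "PERMISSIVE"
    columnNameOfCorruptRecord: "_corrupt_record"
  }
}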
Csv
Note that the file node will go through the input once to determine the input schema if inferSchema is enabled. To avoid this extra pass over the data, disable the inferSchema option or specify the schema explicitly using schema.
You can set the following CSV-specific options to deal with CSV files:
- sep (default ,): sets a single character as a separator for each field and value.
- encoding (default UTF-8): decodes the CSV files by the given encoding type.
- quote (default "): sets a single character used for escaping quoted values where the separator can be part of the value. If you would like to turn off quotations, you need to set not null but an empty string. This behaviour is different from com.databricks.spark.csv.
- escape (default \): sets a single character used for escaping quotes inside an already quoted value.
- charToEscapeQuoteEscaping (default escape or \0): sets a single character used for escaping the escape for the quote character. The default value is the escape character when the escape and quote characters are different, \0 otherwise.
- comment (default empty string): sets a single character used for skipping lines beginning with this character. By default, it is disabled.
- header (default false): uses the first line as names of columns.
- enforceSchema (default true): if set to true, the specified or inferred schema is forcibly applied to datasource files, and headers in CSV files are ignored. If set to false, the schema is validated against all headers in CSV files when the header option is set to true. Field names in the schema and column names in CSV headers are checked by their positions, taking into account spark.sql.caseSensitive. Though the default value is true, it is recommended to disable the enforceSchema option to avoid incorrect results.
- inferSchema (default false): infers the input schema automatically from data. It requires one extra pass over the data.
- samplingRatio (default 1.0): defines the fraction of rows used for schema inferring.
- ignoreLeadingWhiteSpace (default false): a flag indicating whether or not leading whitespaces from values being read should be skipped.
- ignoreTrailingWhiteSpace (default false): a flag indicating whether or not trailing whitespaces from values being read should be skipped.
- nullValue (default empty string): sets the string representation of a null value. Since 2.0.1, this applies to all supported types including the string type.
- emptyValue (default empty string): sets the string representation of an empty value.
- nanValue (default NaN): sets the string representation of a non-number value.
- positiveInf (default Inf): sets the string representation of a positive infinity value.
- negativeInf (default -Inf): sets the string representation of a negative infinity value.
- dateFormat (default yyyy-MM-dd): sets the string that indicates a date format. Custom date formats follow the formats at java.text.SimpleDateFormat. This applies to date type.
- timestampFormat (default yyyy-MM-dd'T'HH:mm:ss.SSSXXX): sets the string that indicates a timestamp format. Custom date formats follow the formats at java.text.SimpleDateFormat. This applies to timestamp type.
- maxColumns (default 20480): defines a hard limit of how many columns a record can have.
- maxCharsPerColumn (default -1): defines the maximum number of characters allowed for any given value being read. By default, it is -1, meaning unlimited length.
- mode (default PERMISSIVE): allows a mode for dealing with corrupt records during parsing. It supports the following case-insensitive modes.
  - PERMISSIVE: when it meets a corrupted record, puts the malformed string into a field configured by columnNameOfCorruptRecord, and sets other fields to null. To keep corrupt records, a user can set a string type field named columnNameOfCorruptRecord in a user-defined schema. If a schema does not have the field, it drops corrupt records during parsing. A record with fewer or more tokens than the schema is not considered corrupted in CSV: when a record has fewer tokens than the length of the schema, null is set for the extra fields; when it has more tokens, the extra tokens are dropped.
  - DROPMALFORMED: ignores whole corrupted records.
  - FAILFAST: throws an exception when it meets corrupted records.
- columnNameOfCorruptRecord (default is the value specified in spark.sql.columnNameOfCorruptRecord): allows renaming the new field holding the malformed string created by PERMISSIVE mode. This overrides spark.sql.columnNameOfCorruptRecord.
- multiLine (default false): parse one record, which may span multiple lines.
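As an illustration, the settings of a CSV input node for a headered, semicolon-separated file could look like the sketch below. The specific option choices are assumptions, not requirements:

settings: {
  format: csv
  file_name: AAPL.csv
  options: {
    header: "true"
    sep: ";"
    // Costs one extra pass over the data to infer column types.
    inferSchema: "true"
    dateFormat: "yyyy-MM-dd"
  }
}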
Parquet
You can set the following Parquet-specific option(s) for reading Parquet files:
- mergeSchema (default is the value specified in spark.sql.parquet.mergeSchema): sets whether we should merge schemas collected from all Parquet part-files. This overrides spark.sql.parquet.mergeSchema.
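For example, a hedged sketch of the settings of a Parquet input node that merges the schemas of all part-files (the file name is illustrative):

settings: {
  format: parquet
  file_name: trades.parquet
  options: {
    mergeSchema: "true"
  }
}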
libsvm
It is very common in practice to have sparse training data. MLlib supports reading training examples stored in LIBSVM format, which is the default format used by LIBSVM and LIBLINEAR. It is a text format in which each line represents a labeled sparse feature vector using the following format:
label index1:value1 index2:value2 ...
where the indices are one-based and in ascending order. After loading, the feature indices are converted to zero-based.
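A sketch of the settings of a libsvm input node reading such a file could look like this. The file name sample_libsvm_data.txt is an illustrative assumption; remember to reference the file in spark.files as in the overview example:

settings: {
  // Each line is "label index1:value1 index2:value2 ...".
  format: libsvm
  file_name: sample_libsvm_data.txt
}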