
File Input


With the file_input node, you can read a file located on your local file system or on a distributed file system such as HDFS, and output its content as a Spark dataset within your punchline.

This node is available on both the spark and pyspark runtimes.

Examples

Use-cases

Our "hello world" punchline configuration.

beginner_use_case.punchline

{
  type: punchline
  version: "6.0"
  runtime: spark
  tenant: default
  dag: [
    {
      type: file_input
      component: input
      settings: {
        // Supported formats are:
        // text, csv, json, parquet, orc, jdbc, libsvm
        format: csv
        // The name of the file specified in the spark.files parameter below.
        // This node is mostly useful to develop simple punchline jobs.
        file_name: AAPL.csv
        options: {
          inferSchema: true
        }
      }
      publish: [
        {
          stream: data
        }
      ]
    }
  ]
  settings: {
    // Location of the input file. The path must be reachable
    // from wherever Spark runs, i.e. from every Spark node.
    // You can also use a relative path like './AAPL.csv' as long
    // as you launch your punchline in foreground mode from the same directory.
    spark.files: /tmp/AAPL.csv
  }
}

Run beginner_use_case.punchline using the command below:

CONF=beginner_use_case.punchline
punchlinectl start -p $CONF

Coming soon

Parameters

Common Settings

| Name | Type | Mandatory | Default value | Description |
| --- | --- | --- | --- | --- |
| file_name | String | true | NONE | Name of the file specified in the spark.files parameter. |
| format | String | true | NONE | Codec used to read the file content: json, csv, parquet, orc. |
| options | map(String, String) | false | NONE | Additional standard Spark reader options, passed as-is to the Spark reader. See below for a list of common CSV and JSON options, and refer to each Spark source reader documentation for the complete list and description. |
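
For instance, file_name and format are the only mandatory settings; options can be left out entirely for self-describing formats. The snippet below is a minimal sketch of a node reading a Parquet file; the file name records.parquet is hypothetical and must also be declared in the punchline spark.files setting.

{
  type: file_input
  component: input
  settings: {
    // parquet files embed their own schema, so no options are needed
    format: parquet
    // hypothetical file name, also declared in spark.files
    file_name: records.parquet
  }
  publish: [
    {
      stream: data
    }
  ]
}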

Advanced Settings

CSV options

  • sep (default ,): sets a single character as a separator for each field and value.
  • encoding (default UTF-8): decodes the CSV files by the given encoding type.
  • quote (default "): sets a single character used for escaping quoted values where the separator can be part of the value. To turn off quoting, set an empty string rather than null. This behaviour is different from com.databricks.spark.csv.
  • escape (default \): sets a single character used for escaping quotes inside an already quoted value.
  • charToEscapeQuoteEscaping (default escape or \0): sets a single character used for escaping the escape for the quote character. The default value is escape character when escape and quote characters are different, \0 otherwise.
  • comment (default empty string): sets a single character used for skipping lines beginning with this character. By default, it is disabled.
  • header (default false): uses the first line as names of columns.
  • enforceSchema (default true): If it is set to true, the specified or inferred schema will be forcibly applied to datasource files, and headers in CSV files will be ignored. If the option is set to false, the schema will be validated against all headers in CSV files in the case when the header option is set to true. Field names in the schema and column names in CSV headers are checked by their positions taking into account spark.sql.caseSensitive. Though the default value is true, it is recommended to disable the enforceSchema option to avoid incorrect results.
  • inferSchema (default false): infers the input schema automatically from data. It requires one extra pass over the data.
  • samplingRatio (default is 1.0): defines fraction of rows used for schema inferring.
  • ignoreLeadingWhiteSpace (default false): a flag indicating whether or not leading whitespaces from values being read should be skipped.
  • ignoreTrailingWhiteSpace (default false): a flag indicating whether or not trailing whitespaces from values being read should be skipped.
  • nullValue (default empty string): sets the string representation of a null value. Since 2.0.1, this applies to all supported types including the string type.
  • emptyValue (default empty string): sets the string representation of an empty value.
  • nanValue (default NaN): sets the string representation of a non-number value.
  • positiveInf (default Inf): sets the string representation of a positive infinity value.
  • negativeInf (default -Inf): sets the string representation of a negative infinity value.
  • dateFormat (default yyyy-MM-dd): sets the string that indicates a date format. Custom date formats follow the formats at java.text.SimpleDateFormat. This applies to date type.
  • timestampFormat (default yyyy-MM-dd'T'HH:mm:ss.SSSXXX): sets the string that indicates a timestamp format. Custom date formats follow the formats at java.text.SimpleDateFormat. This applies to timestamp type.
  • maxColumns (default 20480): defines a hard limit of how many columns a record can have.
  • maxCharsPerColumn (default -1): defines the maximum number of characters allowed for any given value being read. By default, it is -1 meaning unlimited length
  • mode (default PERMISSIVE): allows a mode for dealing with corrupt records during parsing. It supports the following case-insensitive modes. Note that Spark tries to parse only required columns in CSV under column pruning. Therefore, corrupt records can be different based on required set of fields. This behavior can be controlled by spark.sql.csv.parser.columnPruning.enabled (enabled by default).
  • PERMISSIVE : when it meets a corrupted record, puts the malformed string into a field configured by columnNameOfCorruptRecord, and sets other fields to null. To keep corrupt records, a user can set a string type field named columnNameOfCorruptRecord in a user-defined schema. If a schema does not have the field, it drops corrupt records during parsing. A record with fewer or more tokens than the schema is not considered corrupted by the CSV reader. When it meets a record having fewer tokens than the length of the schema, it sets the extra fields to null. When the record has more tokens than the length of the schema, it drops the extra tokens.
  • DROPMALFORMED : ignores the whole corrupted records.
  • FAILFAST : throws an exception when it meets corrupted records.
  • columnNameOfCorruptRecord (default is the value specified in spark.sql.columnNameOfCorruptRecord): allows renaming the new field holding the malformed string created by PERMISSIVE mode. This overrides spark.sql.columnNameOfCorruptRecord.
  • multiLine (default false): parse one record, which may span multiple lines
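
To illustrate the most common CSV options, here is a sketch of a file_input node reading a semicolon-separated CSV file with a header line; the file name sales.csv is an assumption for the example, and each option maps one-to-one onto the Spark CSV reader options listed above.

{
  type: file_input
  component: input
  settings: {
    format: csv
    // hypothetical file name, also declared in spark.files
    file_name: sales.csv
    options: {
      // non-default field separator
      sep: ";"
      // use the first line as column names
      header: true
      // infer column types with an extra pass over the data
      inferSchema: true
      // silently drop records that cannot be parsed
      mode: DROPMALFORMED
    }
  }
  publish: [
    {
      stream: data
    }
  ]
}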

JSON options

  • primitivesAsString (default false): infers all primitive values as a string type
  • prefersDecimal (default false): infers all floating-point values as a decimal type. If the values do not fit in decimal, then it infers them as doubles.
  • allowComments (default false): ignores Java/C++ style comment in JSON records
  • allowUnquotedFieldNames (default false): allows unquoted JSON field names
  • allowSingleQuotes (default true): allows single quotes in addition to double quotes
  • allowNumericLeadingZeros (default false): allows leading zeros in numbers (e.g. 00012)
  • allowBackslashEscapingAnyCharacter (default false): allows accepting quoting of all characters using the backslash quoting mechanism
  • allowUnquotedControlChars (default false): allows JSON Strings to contain unquoted control characters (ASCII characters with value less than 32, including tab and line feed characters) or not.
  • mode (default PERMISSIVE): allows a mode for dealing with corrupt records during parsing.
  • PERMISSIVE : when it meets a corrupted record, puts the malformed string into a field configured by columnNameOfCorruptRecord, and sets other fields to null. To keep corrupt records, a user can set a string type field named columnNameOfCorruptRecord in a user-defined schema. If a schema does not have the field, it drops corrupt records during parsing. When inferring a schema, it implicitly adds a columnNameOfCorruptRecord field in the output schema.
  • DROPMALFORMED : ignores the whole corrupted records.
  • FAILFAST : throws an exception when it meets corrupted records.
  • columnNameOfCorruptRecord (default is the value specified in spark.sql.columnNameOfCorruptRecord): allows renaming the new field having malformed string created by PERMISSIVE mode. This overrides spark.sql.columnNameOfCorruptRecord.
  • dateFormat (default yyyy-MM-dd): sets the string that indicates a date format. Custom date formats follow the formats at java.text.SimpleDateFormat. This applies to date type.
  • timestampFormat (default yyyy-MM-dd'T'HH:mm:ss.SSSXXX): sets the string that indicates a timestamp format. Custom date formats follow the formats at java.text.SimpleDateFormat. This applies to timestamp type.
  • multiLine (default false): parse one record, which may span multiple lines, per file
  • encoding (by default it is not set): allows forcibly setting one of the standard basic or extended encodings for the JSON files, for example UTF-16BE or UTF-32LE. If the encoding is not specified and multiLine is set to true, it will be detected automatically.
  • lineSep (default covers all \r, \r\n and \n): defines the line separator that should be used for parsing.
  • samplingRatio (default is 1.0): defines fraction of input JSON objects used for schema inferring.
  • dropFieldIfAllNull (default false): whether to ignore column of all null values or empty array/struct during schema inference.
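
As a final illustration, the sketch below reads a multi-line JSON file and keeps malformed records in a dedicated column instead of dropping them; the file name events.json and the _corrupt column name are assumptions for the example.

{
  type: file_input
  component: input
  settings: {
    format: json
    // hypothetical file name, also declared in spark.files
    file_name: events.json
    options: {
      // each JSON record may span several lines
      multiLine: true
      // keep malformed records rather than failing
      mode: PERMISSIVE
      columnNameOfCorruptRecord: _corrupt
    }
  }
  publish: [
    {
      stream: data
    }
  ]
}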