Load and save DataFrames#

We do not cover all features of the packages. Please refer to their documentation to learn more.

Here we’ll use CSV.jl to read and write CSV files, Arrow.jl to work with the binary Apache Arrow format, and JSONTables.jl for JSON interaction. Finally, we show how to handle compressed files using CodecZlib.jl and ZipFile.jl.

using DataFrames
using Arrow
using CSV
using JSONTables
using CodecZlib
using ZipFile
using StatsPlots ## for charts
using Mmap ## for memory mapping

Let’s create a simple DataFrame for testing purposes,

x = DataFrame(
    A=[true, false, true], B=[1, 2, missing],
    C=[missing, "b", "c"], D=['a', missing, 'c']
)
3×4 DataFrame
 Row │ A      B        C        D
     │ Bool   Int64?   String?  Char?
─────┼─────────────────────────────────
   1 │  true        1  missing  a
   2 │ false        2  b        missing
   3 │  true  missing  c        c

and use eltype with eachcol to look at the columnwise types.

eltype.(eachcol(x))
4-element Vector{Type}:
 Bool
 Union{Missing, Int64}
 Union{Missing, String}
 Union{Missing, Char}

CSV.jl#

Let’s use CSV to save x to disk; make sure x1.csv does not conflict with some file in your working directory.

CSV.write("x1.csv", x)
"x1.csv"

Now we can see how it was saved by reading x1.csv.

print(read("x1.csv", String))
A,B,C,D
true,1,,a
false,2,b,
true,,c,c
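CSV.write accepts a number of keyword arguments to control the output; as a small sketch, the delim keyword (part of CSV.jl's documented API) changes the field separator:

```julia
# Write with a semicolon separator instead of the default comma
CSV.write("x1_semicolon.csv", x; delim=';')
print(read("x1_semicolon.csv", String))
rm("x1_semicolon.csv")
```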

We can also load it back as a data frame

y = CSV.read("x1.csv", DataFrame)
3×4 DataFrame
 Row │ A      B        C         D
     │ Bool   Int64?   String1?  String1?
─────┼────────────────────────────────────
   1 │  true        1  missing   a
   2 │ false        2  b         missing
   3 │  true  missing  c         c

Note that when loading a data frame from a CSV file, the element types of columns :C and :D have changed to special string types defined in the InlineStrings.jl package.

eltype.(eachcol(y))
4-element Vector{Type}:
 Bool
 Union{Missing, Int64}
 Union{Missing, InlineStrings.String1}
 Union{Missing, InlineStrings.String1}
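If you prefer plain String columns, CSV.jl (version 0.10 and later) exposes a stringtype keyword; a minimal sketch:

```julia
# Opt out of InlineStrings by requesting plain String columns
y_str = CSV.read("x1.csv", DataFrame; stringtype=String)
eltype.(eachcol(y_str))  # :C and :D are now Union{Missing, String}
```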

Clean the generated file

rm("x1.csv")

JSONTables.jl#

Often you might need to read and write data stored in JSON format. JSONTables.jl provides a way to process it in a row-oriented (arraytable) or column-oriented (objecttable) layout. We present both options below.

open(io -> arraytable(io, x), "x1.json", "w")
106
open(io -> objecttable(io, x), "x2.json", "w")
76
print(read("x1.json", String))
[{"A":true,"B":1,"C":null,"D":"a"},{"A":false,"B":2,"C":"b","D":null},{"A":true,"B":null,"C":"c","D":"c"}]
print(read("x2.json", String))
{"A":[true,false,true],"B":[1,2,null],"C":[null,"b","c"],"D":["a",null,"c"]}
y1 = open(jsontable, "x1.json") |> DataFrame
3×4 DataFrame
 Row │ A      B        C        D
     │ Bool   Int64?   String?  String?
─────┼─────────────────────────────────
   1 │  true        1  missing  a
   2 │ false        2  b        missing
   3 │  true  missing  c        c
eltype.(eachcol(y1))
4-element Vector{Type}:
 Bool
 Union{Missing, Int64}
 Union{Missing, String}
 Union{Missing, String}
y2 = open(jsontable, "x2.json") |> DataFrame
3×4 DataFrame
 Row │ A      B        C        D
     │ Bool   Int64?   String?  String?
─────┼─────────────────────────────────
   1 │  true        1  missing  a
   2 │ false        2  b        missing
   3 │  true  missing  c        c
eltype.(eachcol(y2))
4-element Vector{Type}:
 Bool
 Union{Missing, Int64}
 Union{Missing, String}
 Union{Missing, String}
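jsontable also accepts a JSON string directly, which is convenient when the payload comes from, say, an HTTP response rather than a file; a minimal sketch:

```julia
# Parse a column-oriented JSON payload held in memory
json_str = """{"A":[1,2],"B":["x","y"]}"""
df_from_json = jsontable(json_str) |> DataFrame
```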

Clean the generated files

rm("x1.json")
rm("x2.json")

Arrow.jl#

Finally we use the Apache Arrow format, which, in particular, allows for data interchange with R or Python.

Arrow.write("x.arrow", x)
"x.arrow"
y = Arrow.Table("x.arrow") |> DataFrame
3×4 DataFrame
 Row │ A      B        C        D
     │ Bool   Int64?   String?  Char?
─────┼─────────────────────────────────
   1 │  true        1  missing  a
   2 │ false        2  b        missing
   3 │  true  missing  c        c
eltype.(eachcol(y))
4-element Vector{Type}:
 Bool
 Union{Missing, Int64}
 Union{Missing, String}
 Union{Missing, Char}

Note that the columns of y are immutable:

try
    y.A[1] = false
catch e
    show(e)
end
ReadOnlyMemoryError()

This is because Arrow.Table uses memory mapping and thus custom vector types:

y.A
3-element Arrow.BoolVector{Bool}:
 1
 0
 1
y.B
3-element Arrow.Primitive{Union{Missing, Int64}, Vector{Int64}}:
 1
 2
  missing

You can get standard Julia Base vectors by copying the data frame:

y2 = copy(y)
3×4 DataFrame
 Row │ A      B        C        D
     │ Bool   Int64?   String?  Char?
─────┼─────────────────────────────────
   1 │  true        1  missing  a
   2 │ false        2  b        missing
   3 │  true  missing  c        c
y2.A
3-element Vector{Bool}:
 1
 0
 1
y2.B
3-element Vector{Union{Missing, Int64}}:
 1
 2
  missing
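Arrow.write also supports built-in compression via its compress keyword (supported codecs include :lz4 and :zstd); a small sketch:

```julia
# Write a zstd-compressed Arrow file and read it back
Arrow.write("x_compressed.arrow", x; compress=:zstd)
y3 = Arrow.Table("x_compressed.arrow") |> DataFrame
rm("x_compressed.arrow")
```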

Clean the generated file

rm("x.arrow")

Basic benchmarking#

Next, we’ll create some files, so be careful that you don’t already have these files in your working directory! In particular, we’ll time how long it takes us to write a DataFrame with 10,000 rows and 1,000 columns.

bigdf = DataFrame(rand(Bool, 10^4, 1000), :auto)

bigdf[!, 1] = Int.(bigdf[!, 1])
bigdf[!, 2] = bigdf[!, 2] .+ 0.5
bigdf[!, 3] = string.(bigdf[!, 3], ", as string")

println("First run")
First run
println("CSV.jl")
csvwrite1 = @elapsed @time CSV.write("bigdf1.csv.gz", bigdf; compress=true)
println("Arrow.jl")
arrowwrite1 = @elapsed @time Arrow.write("bigdf.arrow", bigdf)
println("JSONTables.jl arraytable")
jsontablesawrite1 = @elapsed @time open(io -> arraytable(io, bigdf), "bigdf1.json", "w")
println("JSONTables.jl objecttable")
jsontablesowrite1 = @elapsed @time open(io -> objecttable(io, bigdf), "bigdf2.json", "w")
println("Second run")
println("CSV.jl")
csvwrite2 = @elapsed @time CSV.write("bigdf1.csv.gz", bigdf; compress=true)
println("Arrow.jl")
arrowwrite2 = @elapsed @time Arrow.write("bigdf.arrow", bigdf)
println("JSONTables.jl arraytable")
jsontablesawrite2 = @elapsed @time open(io -> arraytable(io, bigdf), "bigdf1.json", "w")
println("JSONTables.jl objecttable")
jsontablesowrite2 = @elapsed @time open(io -> objecttable(io, bigdf), "bigdf2.json", "w")
CSV.jl
  6.002204 seconds (45.04 M allocations: 1.590 GiB, 6.94% gc time, 58.26% compilation time: <1% of which was recompilation)
Arrow.jl
  3.963022 seconds (6.64 M allocations: 325.486 MiB, 0.48% gc time, 97.57% compilation time)
JSONTables.jl arraytable
 10.926450 seconds (229.63 M allocations: 5.497 GiB, 16.31% gc time, 0.13% compilation time: <1% of which was recompilation)
JSONTables.jl objecttable
  0.300350 seconds (106.19 k allocations: 309.453 MiB, 6.04% gc time, 26.38% compilation time)
Second run
CSV.jl
  2.282952 seconds (44.41 M allocations: 1.560 GiB, 5.57% gc time)
Arrow.jl
  0.096491 seconds (80.86 k allocations: 5.164 MiB)
JSONTables.jl arraytable
 10.895230 seconds (229.63 M allocations: 5.497 GiB, 15.29% gc time, 0.08% compilation time)
JSONTables.jl objecttable
  0.239073 seconds (20.83 k allocations: 305.241 MiB, 6.80% gc time, 3.79% compilation time)
0.239252714
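Note that @time on a first call also measures compilation. For steadier numbers you could use the @btime macro from BenchmarkTools.jl (assumed to be installed; it is not loaded in this notebook), which runs the expression repeatedly and reports the minimum time:

```julia
using BenchmarkTools

# Benchmark the compressed CSV write; interpolate globals with $
# so they are not treated as untyped global variables
@btime CSV.write("bigdf_btime.csv.gz", $bigdf; compress=true)
rm("bigdf_btime.csv.gz")
```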
groupedbar(
    repeat(["CSV.jl (gz)", "Arrow.jl", "JSONTables.jl\nobjecttable"],
        inner=2),
    [csvwrite1, csvwrite2, arrowwrite1, arrowwrite2, jsontablesowrite1, jsontablesowrite2],
    group=repeat(["1st", "2nd"], outer=3),
    ylab="Second",
    title="Write Performance\nDataFrame: bigdf\nSize: $(size(bigdf))",
    permute = (:x, :y)
)
(figure: grouped bar chart comparing first- and second-run write times for CSV.jl, Arrow.jl, and JSONTables.jl objecttable)
data_files = ["bigdf1.csv.gz", "bigdf.arrow", "bigdf1.json", "bigdf2.json"]
df = DataFrame(file=data_files, size=getfield.(stat.(data_files), :size))
sort!(df, :size)
4×2 DataFrame
 Row │ file           size
     │ String         Int64
─────┼──────────────────────────
   1 │ bigdf.arrow      1742786
   2 │ bigdf1.csv.gz    2470468
   3 │ bigdf2.json     55086823
   4 │ bigdf1.json    124027930
@df df plot(:file, :size / 1024^2, seriestype=:bar, title="Format File Size (MB)", label="Size", ylab="MB")
(figure: bar chart of on-disk file size in MB for each format)
println("First run")
println("CSV.jl")
csvread1 = @elapsed @time CSV.read("bigdf1.csv.gz", DataFrame)
println("Arrow.jl")
arrowread1 = @elapsed @time df_tmp = Arrow.Table("bigdf.arrow") |> DataFrame
arrowread1copy = @elapsed @time copy(df_tmp)
println("JSONTables.jl arraytable")
jsontablesaread1 = @elapsed @time open(jsontable, "bigdf1.json")
println("JSONTables.jl objecttable")
jsontablesoread1 = @elapsed @time open(jsontable, "bigdf2.json")
println("Second run")
csvread2 = @elapsed @time CSV.read("bigdf1.csv.gz", DataFrame)
println("Arrow.jl")
arrowread2 = @elapsed @time df_tmp = Arrow.Table("bigdf.arrow") |> DataFrame
arrowread2copy = @elapsed @time copy(df_tmp)
println("JSONTables.jl arraytable")
jsontablesaread2 = @elapsed @time open(jsontable, "bigdf1.json")
println("JSONTables.jl objecttable")
jsontablesoread2 = @elapsed @time open(jsontable, "bigdf2.json");
First run
CSV.jl
  2.817632 seconds (4.41 M allocations: 223.331 MiB, 1.90% gc time, 2 lock conflicts, 86.24% compilation time)
Arrow.jl
  0.468273 seconds (573.48 k allocations: 26.968 MiB, 98.38% compilation time)
  0.058121 seconds (14.02 k allocations: 10.297 MiB)
JSONTables.jl arraytable
  5.402500 seconds (271.15 k allocations: 1.772 GiB, 10.37% gc time, 0.07% compilation time)
JSONTables.jl objecttable
  0.410801 seconds (7.39 k allocations: 566.940 MiB, 4.44% gc time, 0.02% compilation time)
Second run
  0.926750 seconds (637.12 k allocations: 43.582 MiB)
Arrow.jl
  0.009412 seconds (84.09 k allocations: 3.594 MiB)
  0.058019 seconds (14.02 k allocations: 10.297 MiB)
JSONTables.jl arraytable
  5.375577 seconds (271.10 k allocations: 1.772 GiB, 10.63% gc time)
JSONTables.jl objecttable
  0.387121 seconds (7.08 k allocations: 566.921 MiB, 2.86% gc time)

We exclude the JSONTables.jl arraytable timings because they are much longer than the others.

groupedbar(
    repeat(["CSV.jl (gz)", "Arrow.jl", "Arrow.jl\ncopy", ##"JSON\narraytable",
            "JSON\nobjecttable"], inner=2),
    [csvread1, csvread2, arrowread1, arrowread2, arrowread1 + arrowread1copy, arrowread2 + arrowread2copy,
        # jsontablesaread1, jsontablesaread2,
        jsontablesoread1, jsontablesoread2],
    group=repeat(["1st", "2nd"], outer=4),
    ylab="Second",
    title="Read Performance\nDataFrame: bigdf\nSize: $(size(bigdf))",
    permute = (:x, :y)
)
(figure: grouped bar chart comparing first- and second-run read times for CSV.jl, Arrow.jl, Arrow.jl with copy, and JSONTables.jl objecttable)

Clean generated files

rm("bigdf1.csv.gz")
rm("bigdf1.json")
rm("bigdf2.json")
rm("bigdf.arrow")

Using gzip compression#

A common requirement is to be able to load and save CSV files that are compressed using gzip. Below we show how this can be accomplished using CodecZlib.jl. Again, make sure that you do not have a file named df_compress_test.csv.gz in your working directory. We first generate a random data frame.

df = DataFrame(rand(1:10, 10, 1000), :auto)
10×1000 DataFrame
(wide preview omitted; 10 rows of random Int64 values drawn from 1:10, 900 of 1000 columns not shown)

Use CodecZlib to compress the CSV file

CSV.write("df_compress_test.csv.gz", df; compress=true)
"df_compress_test.csv.gz"
df2 = CSV.File("df_compress_test.csv.gz") |> DataFrame
10×1000 DataFrame
(wide preview omitted; same values as df above)
df == df2
true
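CSV.File recognizes the .gz extension for us, but you can also decompress explicitly with CodecZlib's GzipDecompressorStream, e.g. when the file name lacks the extension; a minimal sketch:

```julia
# Stream-decompress manually instead of relying on the .gz extension
df3 = open("df_compress_test.csv.gz") do io
    CSV.read(GzipDecompressorStream(io), DataFrame)
end
df3 == df  # true
```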

Remove generated files

rm("df_compress_test.csv.gz")

Using zip files#

Sometimes you may have files compressed inside a zip file. In such a situation you may use ZipFile.jl in conjunction with an appropriate reader to read the files. Here we first create a ZIP file and then read back its contents into a DataFrame.

df1 = DataFrame(rand(1:10, 3, 4), :auto)
3×4 DataFrame
 Row │ x1     x2     x3     x4
     │ Int64  Int64  Int64  Int64
─────┼────────────────────────────
   1 │     5      6      1      5
   2 │     5      2     10      6
   3 │     2      3      7      9
df2 = DataFrame(rand(1:10, 3, 4), :auto)
3×4 DataFrame
 Row │ x1     x2     x3     x4
     │ Int64  Int64  Int64  Int64
─────┼────────────────────────────
   1 │     9      8      7      6
   2 │     3      2      3      3
   3 │    10      5      4      8

And we show yet another way to write a DataFrame to a CSV file: writing the CSV directly into a zip file.

w = ZipFile.Writer("x.zip")

f1 = ZipFile.addfile(w, "x1.csv")
write(f1, sprint(show, "text/csv", df1))

# write a second CSV file into zip file
f2 = ZipFile.addfile(w, "x2.csv", method=ZipFile.Deflate)
write(f2, sprint(show, "text/csv", df2))

close(w)

Now we read the compressed CSV file we have written:

z = ZipFile.Reader("x.zip");
# find the index of the file called x1.csv
index_xcsv = findfirst(x -> x.name == "x1.csv", z.files)
# read the x1.csv file from the zip file
df1_2 = CSV.read(read(z.files[index_xcsv]), DataFrame)
3×4 DataFrame
 Row │ x1     x2     x3     x4
     │ Int64  Int64  Int64  Int64
─────┼────────────────────────────
   1 │     5      6      1      5
   2 │     5      2     10      6
   3 │     2      3      7      9
df1_2 == df1
true
# find the index of the file called x2.csv
index_xcsv = findfirst(x -> x.name == "x2.csv", z.files)
# read the x2.csv file from the zip file
df2_2 = CSV.read(read(z.files[index_xcsv]), DataFrame)
3×4 DataFrame
 Row │ x1     x2     x3     x4
     │ Int64  Int64  Int64  Int64
─────┼────────────────────────────
   1 │     9      8      7      6
   2 │     3      2      3      3
   3 │    10      5      4      8
df2_2 == df2
true

Note that once you read a given file from the z object, its stream is exhausted (it has reached its end). Therefore, to read it again you need to close z and open it again. Also do not forget to close the zip file once you are done.

close(z)
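If the archive contains several CSV files, you can also read them all in one pass into a dictionary keyed by file name; a small sketch (opening a fresh Reader, since any streams already read from z are exhausted):

```julia
# Read every CSV file in the archive into a Dict of DataFrames
z2 = ZipFile.Reader("x.zip")
dfs = Dict(f.name => CSV.read(read(f), DataFrame) for f in z2.files)
close(z2)
dfs["x1.csv"] == df1  # true
```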

Remove generated zip file

rm("x.zip")

This notebook was generated using Literate.jl.