Load and save DataFrames

We do not cover all features of these packages; please refer to their documentation to learn more.

Here we’ll load CSV.jl to read and write CSV files; Arrow.jl, JLSO.jl, and Serialization, which allow us to work with binary formats; and JSONTables.jl for JSON interaction. Finally, we consider a custom format, JDF.jl.

using DataFrames
using Arrow
using CSV
using Serialization
using JLSO
using JSONTables
using CodecZlib
using ZipFile
using JDF
using StatsPlots ## for charts
using Mmap ## for memory mapping

Let’s create a simple DataFrame for testing purposes,

x = DataFrame(
    A=[true, false, true], B=[1, 2, missing],
    C=[missing, "b", "c"], D=['a', missing, 'c']
)
3×4 DataFrame
 Row │ A      B        C        D
     │ Bool   Int64?   String?  Char?
─────┼─────────────────────────────────
   1 │  true        1  missing  a
   2 │ false        2  b        missing
   3 │  true  missing  c        c

and use eltype broadcasted over eachcol to look at the column-wise types.

eltype.(eachcol(x))
4-element Vector{Type}:
 Bool
 Union{Missing, Int64}
 Union{Missing, String}
 Union{Missing, Char}

CSV.jl

Let’s use CSV.jl to save x to disk; make sure that x1.csv does not conflict with an existing file in your working directory.

CSV.write("x1.csv", x)
"x1.csv"

Now we can see how it was saved by reading x1.csv back:

print(read("x1.csv", String))
A,B,C,D
true,1,,a
false,2,b,
true,,c,c
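CSV.write also accepts keyword arguments to customize the output. As a minimal sketch (the x1.tsv file name is just for illustration), here we write a tab-separated file and encode missing values as "NA" using the delim and missingstring options:

CSV.write("x1.tsv", x; delim='\t', missingstring="NA")
print(read("x1.tsv", String))
rm("x1.tsv") ## clean up the illustration file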

We can also load it back as a data frame:

y = CSV.read("x1.csv", DataFrame)
3×4 DataFrame
 Row │ A      B        C         D
     │ Bool   Int64?   String1?  String1?
─────┼────────────────────────────────────
   1 │  true        1  missing   a
   2 │ false        2  b         missing
   3 │  true  missing  c         c

Note that when loading a DataFrame from a CSV file, the types of columns :C and :D have changed to use the special strings defined in the InlineStrings.jl package.

eltype.(eachcol(y))
4-element Vector{Type}:
 Bool
 Union{Missing, Int64}
 Union{Missing, InlineStrings.String1}
 Union{Missing, InlineStrings.String1}
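If you prefer plain String columns, recent versions of CSV.jl expose a stringtype keyword argument that controls which string type is used; a minimal sketch:

y_str = CSV.read("x1.csv", DataFrame; stringtype=String)
eltype.(eachcol(y_str)) ## :C and :D are now Union{Missing, String}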

Serialization, JDF.jl, and JLSO.jl

Now we use serialization to save x.

There are two ways to perform serialization. The first is to use Serialization.serialize, as below:

Note that in general, this process will not work if the reading and writing are done by different versions of Julia, or an instance of Julia with a different system image.

open("x.bin", "w") do io
    serialize(io, x)
end

Now we load the saved file back into the variable y. Again, y is identical to x. However, beware that if your session does not have DataFrames.jl loaded, deserialization may fail, as the DataFrame type will not be recognized.

y = open(deserialize, "x.bin")
3×4 DataFrame
 Row │ A      B        C        D
     │ Bool   Int64?   String?  Char?
─────┼─────────────────────────────────
   1 │  true        1  missing  a
   2 │ false        2  b        missing
   3 │  true  missing  c        c
eltype.(eachcol(y))
4-element Vector{Type}:
 Bool
 Union{Missing, Int64}
 Union{Missing, String}
 Union{Missing, Char}
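Serialization can also store several objects in one stream; they are read back in the order they were written. A minimal sketch (the x2.bin file name is just for illustration):

open("x2.bin", "w") do io
    serialize(io, x) ## first object
    serialize(io, "some metadata") ## second object
end
x2, meta = open(io -> (deserialize(io), deserialize(io)), "x2.bin")
rm("x2.bin") ## clean up the illustration file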

JDF.jl

JDF.jl is a relatively new package designed to serialize DataFrames. You can save a DataFrame with the JDF.save function. For more details about the design assumptions and limitations of JDF.jl please check out xiaodaigh/JDF.jl.

JDF.save("x.jdf", x);

To load the saved JDF file, one can use the JDF.load function:

x_loaded = JDF.load("x.jdf") |> DataFrame
3×4 DataFrame
 Row │ A      B        C        D
     │ Bool   Int64?   String?  Char?
─────┼─────────────────────────────────
   1 │  true        1  missing  a
   2 │ false        2  b        missing
   3 │  true  missing  c        c

You can see that they are the same

isequal(x_loaded, x)
true

JDF.jl offers the ability to load only certain columns from disk, which helps when working with large files. Set up a JDFFile, which is an on-disk representation of x backed by JDF.jl:

x_ondisk = jdf"x.jdf"
JDF.JDFFile{String}("x.jdf")

We can see all the names of x without loading it into memory

names(x_ondisk)
4-element Vector{Symbol}:
 :A
 :B
 :C
 :D

Below is an example of how to load only columns :A and :D:

xd = JDF.load(x_ondisk; cols=["A", "D"]) |> DataFrame
3×2 DataFrame
 Row │ A      D
     │ Bool   Char?
─────┼────────────────
   1 │  true  a
   2 │ false  missing
   3 │  true  c

JLSO.jl

Another way to perform serialization is by using the JLSO.jl library:

JLSO.save("x.jlso", :data => x)

Now we can load the file back into y:

y = JLSO.load("x.jlso")[:data]
3×4 DataFrame
 Row │ A      B        C        D
     │ Bool   Int64?   String?  Char?
─────┼─────────────────────────────────
   1 │  true        1  missing  a
   2 │ false        2  b        missing
   3 │  true  missing  c        c
eltype.(eachcol(y))
4-element Vector{Type}:
 Bool
 Union{Missing, Int64}
 Union{Missing, String}
 Union{Missing, Char}
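JLSO.save accepts multiple key => value pairs, so several objects can be stored in one file and retrieved individually by key. A minimal sketch (the x2.jlso file name and :note key are just for illustration):

JLSO.save("x2.jlso", :data => x, :note => "demo")
JLSO.load("x2.jlso")[:note] ## fetch a single stored object by its key
rm("x2.jlso") ## clean up the illustration file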

JSONTables.jl

Often you might need to read and write data stored in JSON format. JSONTables.jl provides a way to process it in either a row-oriented (arraytable) or column-oriented (objecttable) layout. We present both options below.

open(io -> arraytable(io, x), "x1.json", "w")
106
open(io -> objecttable(io, x), "x2.json", "w")
76
print(read("x1.json", String))
[{"A":true,"B":1,"C":null,"D":"a"},{"A":false,"B":2,"C":"b","D":null},{"A":true,"B":null,"C":"c","D":"c"}]
print(read("x2.json", String))
{"A":[true,false,true],"B":[1,2,null],"C":[null,"b","c"],"D":["a",null,"c"]}
y1 = open(jsontable, "x1.json") |> DataFrame
3×4 DataFrame
 Row │ A      B        C        D
     │ Bool   Int64?   String?  String?
─────┼───────────────────────────────────
   1 │  true        1  missing  a
   2 │ false        2  b        missing
   3 │  true  missing  c        c
eltype.(eachcol(y1))
4-element Vector{Type}:
 Bool
 Union{Missing, Int64}
 Union{Missing, String}
 Union{Missing, String}
y2 = open(jsontable, "x2.json") |> DataFrame
3×4 DataFrame
 Row │ A      B        C        D
     │ Bool   Int64?   String?  String?
─────┼───────────────────────────────────
   1 │  true        1  missing  a
   2 │ false        2  b        missing
   3 │  true  missing  c        c
eltype.(eachcol(y2))
4-element Vector{Type}:
 Bool
 Union{Missing, Int64}
 Union{Missing, String}
 Union{Missing, String}
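Note that arraytable and objecttable, when called with only a table argument, return the JSON string directly instead of writing to a stream; a minimal sketch:

json_str = objecttable(x) ## column-oriented JSON as a String
y3 = jsontable(json_str) |> DataFrame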

Arrow.jl

Finally, we use the Apache Arrow format, which in particular allows for data interchange with R or Python.

Arrow.write("x.arrow", x)
"x.arrow"
y = Arrow.Table("x.arrow") |> DataFrame
3×4 DataFrame
 Row │ A      B        C        D
     │ Bool   Int64?   String?  Char?
─────┼─────────────────────────────────
   1 │  true        1  missing  a
   2 │ false        2  b        missing
   3 │  true  missing  c        c
eltype.(eachcol(y))
4-element Vector{Type}:
 Bool
 Union{Missing, Int64}
 Union{Missing, String}
 Union{Missing, Char}

Note that the columns of y are immutable:

try
    y.A[1] = false
catch e
    show(e)
end
ReadOnlyMemoryError()

This is because Arrow.Table uses memory mapping and thus custom vector types:

y.A
3-element Arrow.BoolVector{Bool}:
 1
 0
 1
y.B
3-element Arrow.Primitive{Union{Missing, Int64}, Vector{Int64}}:
 1
 2
  missing

You can get standard Julia Base vectors by copying the data frame:

y2 = copy(y)
3×4 DataFrame
 Row │ A      B        C        D
     │ Bool   Int64?   String?  Char?
─────┼─────────────────────────────────
   1 │  true        1  missing  a
   2 │ false        2  b        missing
   3 │  true  missing  c        c
y2.A
3-element Vector{Bool}:
 1
 0
 1
y2.B
3-element Vector{Union{Missing, Int64}}:
 1
 2
  missing
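Arrow.write can also compress the stored data via its compress keyword argument (Arrow.jl supports the :lz4 and :zstd codecs); a minimal sketch (the x_compressed.arrow file name is just for illustration):

Arrow.write("x_compressed.arrow", x; compress=:zstd)
filesize("x_compressed.arrow") ## compare with filesize("x.arrow")
rm("x_compressed.arrow") ## clean up the illustration file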

Basic benchmarking

Next, we’ll create some files, so be careful that you don’t already have these files in your working directory! In particular, we’ll time how long it takes us to write a DataFrame with 10,000 rows and 1,000 columns.

bigdf = DataFrame(rand(Bool, 10^4, 1000), :auto)

bigdf[!, 1] = Int.(bigdf[!, 1])
bigdf[!, 2] = bigdf[!, 2] .+ 0.5
bigdf[!, 3] = string.(bigdf[!, 3], ", as string")

println("First run")
First run
println("CSV.jl")
csvwrite1 = @elapsed @time CSV.write("bigdf1.csv", bigdf)
println("Serialization")
serializewrite1 = @elapsed @time open(io -> serialize(io, bigdf), "bigdf.bin", "w")
println("JDF.jl")
jdfwrite1 = @elapsed @time JDF.save("bigdf.jdf", bigdf)
println("JLSO.jl")
jlsowrite1 = @elapsed @time JLSO.save("bigdf.jlso", :data => bigdf)
println("Arrow.jl")
arrowwrite1 = @elapsed @time Arrow.write("bigdf.arrow", bigdf)
println("JSONTables.jl arraytable")
jsontablesawrite1 = @elapsed @time open(io -> arraytable(io, bigdf), "bigdf1.json", "w")
println("JSONTables.jl objecttable")
jsontablesowrite1 = @elapsed @time open(io -> objecttable(io, bigdf), "bigdf2.json", "w")
println("Second run")
println("CSV.jl")
csvwrite2 = @elapsed @time CSV.write("bigdf1.csv", bigdf)
println("Serialization")
serializewrite2 = @elapsed @time open(io -> serialize(io, bigdf), "bigdf.bin", "w")
println("JDF.jl")
jdfwrite2 = @elapsed @time JDF.save("bigdf.jdf", bigdf)
println("JLSO.jl")
jlsowrite2 = @elapsed @time JLSO.save("bigdf.jlso", :data => bigdf)
println("Arrow.jl")
arrowwrite2 = @elapsed @time Arrow.write("bigdf.arrow", bigdf)
println("JSONTables.jl arraytable")
jsontablesawrite2 = @elapsed @time open(io -> arraytable(io, bigdf), "bigdf1.json", "w")
println("JSONTables.jl objecttable")
jsontablesowrite2 = @elapsed @time open(io -> objecttable(io, bigdf), "bigdf2.json", "w")
First run
CSV.jl
  4.488136 seconds (44.69 M allocations: 1.127 GiB, 3.56% gc time, 69.19% compilation time)
Serialization
  0.187943 seconds (225.03 k allocations: 10.920 MiB, 30.52% compilation time)
JDF.jl
  0.139347 seconds (68.17 k allocations: 147.988 MiB, 17.79% gc time, 42.29% compilation time)
JLSO.jl
  1.033582 seconds (285.30 k allocations: 20.915 MiB, 6.55% compilation time)
Arrow.jl
  3.900311 seconds (5.44 M allocations: 276.252 MiB, 0.57% gc time, 97.68% compilation time)
JSONTables.jl arraytable
 11.640616 seconds (229.62 M allocations: 5.422 GiB, 18.80% gc time, 0.08% compilation time)
JSONTables.jl objecttable
  0.538184 seconds (95.82 k allocations: 309.043 MiB, 50.06% gc time, 14.86% compilation time)
Second run
CSV.jl
  1.439092 seconds (44.40 M allocations: 1.113 GiB, 8.88% gc time)
Serialization
  0.141831 seconds (15.01 k allocations: 402.133 KiB, 6.73% compilation time)
JDF.jl
  0.105549 seconds (35.13 k allocations: 146.246 MiB, 12.05% gc time)
JLSO.jl
  0.968085 seconds (33.10 k allocations: 7.944 MiB, 0.20% gc time)
Arrow.jl
  0.096617 seconds (81.35 k allocations: 5.427 MiB)
JSONTables.jl arraytable
 11.691991 seconds (229.62 M allocations: 5.422 GiB, 17.37% gc time, 0.08% compilation time)
JSONTables.jl objecttable
  0.537946 seconds (20.71 k allocations: 305.235 MiB, 57.70% gc time, 2.02% compilation time)
groupedbar(
    repeat(["CSV.jl", "Serialization", "JDF.jl", "JLSO.jl", "Arrow.jl", "JSONTables.jl\nobjecttable"],
        inner=2),
    [csvwrite1, csvwrite2, serializewrite1, serializewrite2, jdfwrite1, jdfwrite2,
        jlsowrite1, jlsowrite2, arrowwrite1, arrowwrite2, jsontablesowrite1, jsontablesowrite2],
    group=repeat(["1st", "2nd"], outer=6),
    ylab="Second",
    title="Write Performance\nDataFrame: bigdf\nSize: $(size(bigdf))"
)
[Figure: grouped bar chart "Write Performance" comparing 1st and 2nd run times per package]
data_files = ["bigdf1.csv", "bigdf.bin", "bigdf.arrow", "bigdf1.json", "bigdf2.json"]
df = DataFrame(file=data_files, size=getfield.(stat.(data_files), :size))
append!(df, DataFrame(file="bigdf.jdf", size=reduce((x, y) -> x + y.size,
    stat.(joinpath.("bigdf.jdf", readdir("bigdf.jdf"))),
    init=0)))
sort!(df, :size)
6×2 DataFrame
 Row │ file         size
     │ String       Int64
─────┼────────────────────────
   1 │ bigdf.arrow    1742818
   2 │ bigdf.bin      5199518
   3 │ bigdf.jdf      5222656
   4 │ bigdf1.csv    55087053
   5 │ bigdf2.json   55091054
   6 │ bigdf1.json  124032161
@df df plot(:file, :size / 1024^2, seriestype=:bar, title="Format File Size (MB)", label="Size", ylab="MB")
[Figure: bar chart "Format File Size (MB)" for each saved file]
println("First run")
println("CSV.jl")
csvread1 = @elapsed @time CSV.read("bigdf1.csv", DataFrame)
println("Serialization")
serializeread1 = @elapsed @time open(deserialize, "bigdf.bin")
println("JDF.jl")
jdfread1 = @elapsed @time JDF.load("bigdf.jdf") |> DataFrame
println("JLSO.jl")
jlsoread1 = @elapsed @time JLSO.load("bigdf.jlso")
println("Arrow.jl")
arrowread1 = @elapsed @time df_tmp = Arrow.Table("bigdf.arrow") |> DataFrame
arrowread1copy = @elapsed @time copy(df_tmp)
println("JSONTables.jl arraytable")
jsontablesaread1 = @elapsed @time open(jsontable, "bigdf1.json")
println("JSONTables.jl objecttable")
jsontablesoread1 = @elapsed @time open(jsontable, "bigdf2.json")
println("Second run")
println("CSV.jl")
csvread2 = @elapsed @time CSV.read("bigdf1.csv", DataFrame)
println("Serialization")
serializeread2 = @elapsed @time open(deserialize, "bigdf.bin")
println("JDF.jl")
jdfread2 = @elapsed @time JDF.load("bigdf.jdf") |> DataFrame
println("JLSO.jl")
jlsoread2 = @elapsed @time JLSO.load("bigdf.jlso")
println("Arrow.jl")
arrowread2 = @elapsed @time df_tmp = Arrow.Table("bigdf.arrow") |> DataFrame
arrowread2copy = @elapsed @time copy(df_tmp)
println("JSONTables.jl arraytable")
jsontablesaread2 = @elapsed @time open(jsontable, "bigdf1.json")
println("JSONTables.jl objecttable")
jsontablesoread2 = @elapsed @time open(jsontable, "bigdf2.json");
First run
CSV.jl
  3.225651 seconds (4.28 M allocations: 227.503 MiB, 1.91% gc time, 125.19% compilation time)
Serialization
  0.403671 seconds (9.50 M allocations: 155.502 MiB, 5.82% gc time, 9.78% compilation time)
JDF.jl
  0.249459 seconds (158.15 k allocations: 157.899 MiB, 14.06% gc time, 87.88% compilation time)
JLSO.jl
  0.384422 seconds (9.52 M allocations: 158.178 MiB, 7.85% gc time, 8.53% compilation time)
Arrow.jl
  0.690264 seconds (549.90 k allocations: 26.354 MiB, 98.70% compilation time)
  0.049401 seconds (14.50 k allocations: 10.266 MiB)
JSONTables.jl arraytable
  6.371331 seconds (271.14 k allocations: 1.839 GiB, 11.03% gc time, 0.09% compilation time)
JSONTables.jl objecttable
  0.365395 seconds (7.44 k allocations: 403.824 MiB, 4.36% gc time, 0.02% compilation time)
Second run
CSV.jl
  0.938707 seconds (631.63 k allocations: 43.594 MiB)
Serialization
  0.347297 seconds (9.48 M allocations: 154.596 MiB, 5.99% gc time)
JDF.jl
  0.407658 seconds (77.27 k allocations: 153.769 MiB, 87.73% gc time)
JLSO.jl
  0.320553 seconds (9.50 M allocations: 157.310 MiB, 1.82% gc time)
Arrow.jl
  0.006719 seconds (86.57 k allocations: 3.732 MiB)
  0.062983 seconds (14.50 k allocations: 10.266 MiB, 20.65% gc time)
JSONTables.jl arraytable
  6.297805 seconds (271.09 k allocations: 1.839 GiB, 12.05% gc time)
JSONTables.jl objecttable
  0.343348 seconds (7.08 k allocations: 403.804 MiB, 1.42% gc time)

We exclude the JSONTables.jl arraytable timings from the plot below because they are much longer than the rest:

groupedbar(
    repeat(["CSV.jl", "Serialization", "JDF.jl", "JLSO.jl", "Arrow.jl", "Arrow.jl\ncopy", ##"JSON\narraytable",
            "JSON\nobjecttable"], inner=2),
    [csvread1, csvread2, serializeread1, serializeread2, jdfread1, jdfread2, jlsoread1, jlsoread2,
        arrowread1, arrowread2, arrowread1 + arrowread1copy, arrowread2 + arrowread2copy,
        # jsontablesaread1, jsontablesaread2,
        jsontablesoread1, jsontablesoread2],
    group=repeat(["1st", "2nd"], outer=7),
    ylab="Second",
    title="Read Performance\nDataFrame: bigdf\nSize: $(size(bigdf))"
)
[Figure: grouped bar chart "Read Performance" comparing 1st and 2nd run times per package]

Using gzip compression

A common user requirement is to be able to load and save CSV files that are compressed using gzip. Below we show how this can be accomplished using CodecZlib.jl. The same pattern is applicable to JSONTables.jl compression/decompression. Again, make sure that you do not have a file named df_compress_test.csv.gz in your working directory. We first generate a random data frame.

df = DataFrame(rand(1:10, 10, 1000), :auto)
10×1000 DataFrame (output omitted: 1000 Int64 columns of random values in 1:10, 900 columns not shown)

GzipCompressorStream comes from CodecZlib.jl:

open("df_compress_test.csv.gz", "w") do io
    stream = GzipCompressorStream(io)
    CSV.write(stream, df)
    close(stream)
end
df2 = CSV.File(transcode(GzipDecompressor, Mmap.mmap("df_compress_test.csv.gz"))) |> DataFrame
10×1000 DataFrame (output omitted; identical to df, as checked below)
df == df2
true
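An alternative to transcode with Mmap is to wrap the open file in a GzipDecompressorStream (also from CodecZlib.jl) and let CSV.jl read directly from the decompressing stream; a minimal sketch:

df3 = open("df_compress_test.csv.gz") do io
    CSV.read(GzipDecompressorStream(io), DataFrame)
end
df3 == df ## should again be true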

Using zip files

Sometimes you may have files compressed inside a zip file. In such a situation you may use ZipFile.jl in conjunction with an appropriate reader to read the files. Here we first create a zip file and then read back its contents into a DataFrame.

df1 = DataFrame(rand(1:10, 3, 4), :auto)
3×4 DataFrame
 Row │ x1     x2     x3     x4
     │ Int64  Int64  Int64  Int64
─────┼────────────────────────────
   1 │     2      1      6      6
   2 │     8      8     10      3
   3 │     4      9      3     10
df2 = DataFrame(rand(1:10, 3, 4), :auto)
3×4 DataFrame
 Row │ x1     x2     x3     x4
     │ Int64  Int64  Int64  Int64
─────┼────────────────────────────
   1 │     6      4      9      4
   2 │     8      2      5      1
   3 │     6      3      5      6

And we show yet another way to write a DataFrame into a CSV file: writing the CSV directly into a zip file.

w = ZipFile.Writer("x.zip")

f1 = ZipFile.addfile(w, "x1.csv")
write(f1, sprint(show, "text/csv", df1))

# write a second CSV file into the zip file
f2 = ZipFile.addfile(w, "x2.csv", method=ZipFile.Deflate)
write(f2, sprint(show, "text/csv", df2))

close(w)

Now we read the compressed CSV file we have written:

z = ZipFile.Reader("x.zip");
# find the index of the file called x1.csv
index_xcsv = findfirst(x -> x.name == "x1.csv", z.files)
# read the x1.csv file from the zip file
df1_2 = CSV.read(read(z.files[index_xcsv]), DataFrame)
3×4 DataFrame
 Row │ x1     x2     x3     x4
     │ Int64  Int64  Int64  Int64
─────┼────────────────────────────
   1 │     2      1      6      6
   2 │     8      8     10      3
   3 │     4      9      3     10
df1_2 == df1
true
# find the index of the file called x2.csv
index_xcsv = findfirst(x -> x.name == "x2.csv", z.files)
# read the x2.csv file from the zip file
df2_2 = CSV.read(read(z.files[index_xcsv]), DataFrame)
3×4 DataFrame
 Row │ x1     x2     x3     x4
     │ Int64  Int64  Int64  Int64
─────┼────────────────────────────
   1 │     6      4      9      4
   2 │     8      2      5      1
   3 │     6      3      5      6
df2_2 == df2
true

Note that once you read a given file from the z object, its stream is used up (it reaches its end). Therefore, to read it again you need to close z and open the archive once more (see the sketch below). Also, do not forget to close the zip file when you are done.

close(z)
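For example, to read x1.csv a second time you would reopen the archive; a minimal sketch (df1_again is just an illustrative name):

z = ZipFile.Reader("x.zip")
df1_again = CSV.read(read(z.files[findfirst(f -> f.name == "x1.csv", z.files)]), DataFrame)
close(z)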

Remove the generated files:

rm("x.arrow")
rm("x.bin")
rm("x.zip")
rm("x.jlso")
rm("x1.csv")
rm("x1.json")
rm("x2.json")
rm("x.jdf", recursive=true)
rm("bigdf.jdf", recursive=true)
rm("df_compress_test.csv.gz")
rm("bigdf1.json")
rm("bigdf1.csv")
rm("bigdf2.json")
rm("bigdf.jlso")
rm("bigdf.bin")
rm("bigdf.arrow")

This notebook was generated using Literate.jl.