Load and save DataFrames#

We do not cover all features of these packages; please refer to their documentation to learn more.

Here we’ll use CSV.jl to read and write CSV files; Arrow.jl, JLSO.jl, and the Serialization standard library, which let us work with binary formats; and JSONTables.jl for JSON interaction. Finally, we consider the custom JDF.jl format.

using DataFrames
using Arrow
using CSV
using Serialization
using JLSO
using JSONTables
using CodecZlib
using ZipFile
using JDF
using StatsPlots ## for charts
using Mmap ## for memory mapping

Let’s create a simple DataFrame for testing purposes,

x = DataFrame(
    A=[true, false, true], B=[1, 2, missing],
    C=[missing, "b", "c"], D=['a', missing, 'c']
)
3×4 DataFrame
 Row │ A      B        C        D
     │ Bool   Int64?   String?  Char?
─────┼─────────────────────────────────
   1 │  true        1  missing  a
   2 │ false        2  b        missing
   3 │  true  missing  c        c

and use eltype on each column to look at the columnwise element types.

eltype.(eachcol(x))
4-element Vector{Type}:
 Bool
 Union{Missing, Int64}
 Union{Missing, String}
 Union{Missing, Char}

CSV.jl#

Let’s use CSV.jl to save x to disk; make sure that x1.csv does not conflict with an existing file in your working directory.

CSV.write("x1.csv", x)
"x1.csv"

Now we can see how it was saved by reading x1.csv back as plain text.

print(read("x1.csv", String))
A,B,C,D
true,1,,a
false,2,b,
true,,c,c
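
CSV.write also accepts keyword arguments that control the output. Below is a minimal sketch using the delim and missingstring keywords to change the field separator and the representation of missing values (x1_semi.csv is a throwaway file name used only for this illustration):

CSV.write("x1_semi.csv", x; delim=';', missingstring="NA")
print(read("x1_semi.csv", String))
rm("x1_semi.csv") # clean up the throwaway file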

We can also load it back as a data frame:

y = CSV.read("x1.csv", DataFrame)
3×4 DataFrame
 Row │ A      B        C         D
     │ Bool   Int64?   String1?  String1?
─────┼──────────────────────────────────────
   1 │  true        1  missing   a
   2 │ false        2  b         missing
   3 │  true  missing  c         c

Note that when loading a data frame from a CSV file, the element types of columns :C and :D have changed to the special strings defined in the InlineStrings.jl package.

eltype.(eachcol(y))
4-element Vector{Type}:
 Bool
 Union{Missing, Int64}
 Union{Missing, InlineStrings.String1}
 Union{Missing, InlineStrings.String1}
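
If you prefer plain String columns, CSV.jl lets you opt out of inline strings. A minimal sketch using the stringtype keyword:

y_str = CSV.read("x1.csv", DataFrame; stringtype=String)
eltype.(eachcol(y_str)) # columns :C and :D are now Union{Missing, String}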

Serialization, JDF.jl, and JLSO.jl#

Now we use serialization to save x.

There are two ways to perform serialization. The first is to use Serialization.serialize, as below.

Note that, in general, this process will not work if the reading and writing are done by different versions of Julia, or by a Julia instance with a different system image.

open("x.bin", "w") do io
    serialize(io, x)
end

Now we load the saved file back into the variable y. Again, y is identical to x. However, beware that if your session does not have DataFrames.jl loaded, deserialization may not recognize the contents as a DataFrame.

y = open(deserialize, "x.bin")
3×4 DataFrame
 Row │ A      B        C        D
     │ Bool   Int64?   String?  Char?
─────┼─────────────────────────────────
   1 │  true        1  missing  a
   2 │ false        2  b        missing
   3 │  true  missing  c        c
eltype.(eachcol(y))
4-element Vector{Type}:
 Bool
 Union{Missing, Int64}
 Union{Missing, String}
 Union{Missing, Char}
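
Since serialization works for arbitrary Julia objects, you can also store several data frames in a single file, for example inside a Dict. A minimal sketch (xs.bin is a throwaway file name):

open("xs.bin", "w") do io
    serialize(io, Dict("original" => x, "copy" => copy(x)))
end
xs = open(deserialize, "xs.bin")
isequal(xs["original"], x) # true; isequal treats missing values as equal
rm("xs.bin") # clean up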

JDF.jl#

JDF.jl is a relatively new package designed to serialize DataFrames. You can save a DataFrame with the JDF.save function. For more details about the design assumptions and limitations of JDF.jl, please check out xiaodaigh/JDF.jl.

JDF.save("x.jdf", x);

To load the saved JDF file, one can use the JDF.load function:

x_loaded = JDF.load("x.jdf") |> DataFrame
3×4 DataFrame
 Row │ A      B        C        D
     │ Bool   Int64?   String?  Char?
─────┼─────────────────────────────────
   1 │  true        1  missing  a
   2 │ false        2  b        missing
   3 │  true  missing  c        c

You can see that they are the same:

isequal(x_loaded, x)
true

JDF.jl offers the ability to load only certain columns from disk, which helps when working with large files. First, set up a JDFFile, which is an on-disk representation of x backed by JDF.jl:

x_ondisk = jdf"x.jdf"
JDF.JDFFile{String}("x.jdf")

We can see all the names of x without loading it into memory:

names(x_ondisk)
4-element Vector{Symbol}:
 :A
 :B
 :C
 :D

Below is an example of how to load only columns :A and :D:

xd = JDF.load(x_ondisk; cols=["A", "D"]) |> DataFrame
3×2 DataFrame
 Row │ A      D
     │ Bool   Char?
─────┼────────────────
   1 │  true  a
   2 │ false  missing
   3 │  true  c

JLSO.jl#

Another way to perform serialization is by using the JLSO.jl library:

JLSO.save("x.jlso", :data => x)

Now we can load the file back into y:

y = JLSO.load("x.jlso")[:data]
3×4 DataFrame
 Row │ A      B        C        D
     │ Bool   Int64?   String?  Char?
─────┼─────────────────────────────────
   1 │  true        1  missing  a
   2 │ false        2  b        missing
   3 │  true  missing  c        c
eltype.(eachcol(y))
4-element Vector{Type}:
 Bool
 Union{Missing, Int64}
 Union{Missing, String}
 Union{Missing, Char}
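
A JLSO file can hold several named objects at once, each stored under its own key. A minimal sketch (x2.jlso is a throwaway file name and the :note key is purely illustrative):

JLSO.save("x2.jlso", :data => x, :note => "a toy data frame")
d = JLSO.load("x2.jlso") # a Dict{Symbol, Any} holding both objects
d[:note]
rm("x2.jlso") # clean up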

JSONTables.jl#

Often you might need to read and write data stored in JSON format. JSONTables.jl provides a way to process JSON data in a row-oriented or column-oriented layout. We present both options below.

open(io -> arraytable(io, x), "x1.json", "w")
106
open(io -> objecttable(io, x), "x2.json", "w")
76
print(read("x1.json", String))
[{"A":true,"B":1,"C":null,"D":"a"},{"A":false,"B":2,"C":"b","D":null},{"A":true,"B":null,"C":"c","D":"c"}]
print(read("x2.json", String))
{"A":[true,false,true],"B":[1,2,null],"C":[null,"b","c"],"D":["a",null,"c"]}
y1 = open(jsontable, "x1.json") |> DataFrame
3×4 DataFrame
 Row │ A      B        C        D
     │ Bool   Int64?   String?  String?
─────┼───────────────────────────────────
   1 │  true        1  missing  a
   2 │ false        2  b        missing
   3 │  true  missing  c        c
eltype.(eachcol(y1))
4-element Vector{Type}:
 Bool
 Union{Missing, Int64}
 Union{Missing, String}
 Union{Missing, String}
y2 = open(jsontable, "x2.json") |> DataFrame
3×4 DataFrame
 Row │ A      B        C        D
     │ Bool   Int64?   String?  String?
─────┼───────────────────────────────────
   1 │  true        1  missing  a
   2 │ false        2  b        missing
   3 │  true  missing  c        c
eltype.(eachcol(y2))
4-element Vector{Type}:
 Bool
 Union{Missing, Int64}
 Union{Missing, String}
 Union{Missing, String}
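
Note that jsontable also accepts a JSON string directly, which is convenient when the data comes from, e.g., a web API rather than a file. A minimal sketch:

jsontable("""{"a":[1,2,3],"b":["x","y","z"]}""") |> DataFrame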

Arrow.jl#

Finally, we use the Apache Arrow format, which allows, in particular, for data interchange with R and Python.

Arrow.write("x.arrow", x)
"x.arrow"
y = Arrow.Table("x.arrow") |> DataFrame
3×4 DataFrame
 Row │ A      B        C        D
     │ Bool   Int64?   String?  Char?
─────┼─────────────────────────────────
   1 │  true        1  missing  a
   2 │ false        2  b        missing
   3 │  true  missing  c        c
eltype.(eachcol(y))
4-element Vector{Type}:
 Bool
 Union{Missing, Int64}
 Union{Missing, String}
 Union{Missing, Char}

Note that the columns of y are immutable:

try
    y.A[1] = false
catch e
    show(e)
end
ReadOnlyMemoryError()

This is because Arrow.Table uses memory mapping and thus custom vector types:

y.A
3-element Arrow.BoolVector{Bool}:
 1
 0
 1
y.B
3-element Arrow.Primitive{Union{Missing, Int64}, Vector{Int64}}:
 1
 2
  missing

You can get standard Julia Base vectors by copying the data frame:

y2 = copy(y)
3×4 DataFrame
 Row │ A      B        C        D
     │ Bool   Int64?   String?  Char?
─────┼─────────────────────────────────
   1 │  true        1  missing  a
   2 │ false        2  b        missing
   3 │  true  missing  c        c
y2.A
3-element Vector{Bool}:
 1
 0
 1
y2.B
3-element Vector{Union{Missing, Int64}}:
 1
 2
  missing
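
Arrow.write can also compress the data as it is written, via the compress keyword. A minimal sketch (x2.arrow is a throwaway file name):

Arrow.write("x2.arrow", x; compress=:zstd) # :lz4 is also supported
y3 = Arrow.Table("x2.arrow") |> DataFrame
rm("x2.arrow") # clean up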

Basic benchmarking#

Next, we’ll create some files, so be careful that you don’t already have these files in your working directory! In particular, we’ll time how long it takes to write a DataFrame with 10,000 rows and 1,000 columns.

bigdf = DataFrame(rand(Bool, 10^4, 1000), :auto)

bigdf[!, 1] = Int.(bigdf[!, 1])
bigdf[!, 2] = bigdf[!, 2] .+ 0.5
bigdf[!, 3] = string.(bigdf[!, 3], ", as string")

println("First run")
First run
println("CSV.jl")
csvwrite1 = @elapsed @time CSV.write("bigdf1.csv", bigdf)
println("Serialization")
serializewrite1 = @elapsed @time open(io -> serialize(io, bigdf), "bigdf.bin", "w")
println("JDF.jl")
jdfwrite1 = @elapsed @time JDF.save("bigdf.jdf", bigdf)
println("JLSO.jl")
jlsowrite1 = @elapsed @time JLSO.save("bigdf.jlso", :data => bigdf)
println("Arrow.jl")
arrowwrite1 = @elapsed @time Arrow.write("bigdf.arrow", bigdf)
println("JSONTables.jl arraytable")
jsontablesawrite1 = @elapsed @time open(io -> arraytable(io, bigdf), "bigdf1.json", "w")
println("JSONTables.jl objecttable")
jsontablesowrite1 = @elapsed @time open(io -> objecttable(io, bigdf), "bigdf2.json", "w")
println("Second run")
println("CSV.jl")
csvwrite2 = @elapsed @time CSV.write("bigdf1.csv", bigdf)
println("Serialization")
serializewrite2 = @elapsed @time open(io -> serialize(io, bigdf), "bigdf.bin", "w")
println("JDF.jl")
jdfwrite2 = @elapsed @time JDF.save("bigdf.jdf", bigdf)
println("JLSO.jl")
jlsowrite2 = @elapsed @time JLSO.save("bigdf.jlso", :data => bigdf)
println("Arrow.jl")
arrowwrite2 = @elapsed @time Arrow.write("bigdf.arrow", bigdf)
println("JSONTables.jl arraytable")
jsontablesawrite2 = @elapsed @time open(io -> arraytable(io, bigdf), "bigdf1.json", "w")
println("JSONTables.jl objecttable")
jsontablesowrite2 = @elapsed @time open(io -> objecttable(io, bigdf), "bigdf2.json", "w")
CSV.jl
  4.527833 seconds (44.69 M allocations: 1.127 GiB, 3.17% gc time, 70.46% compilation time)
Serialization
  0.189228 seconds (224.29 k allocations: 10.825 MiB, 30.97% compilation time)
JDF.jl
  0.164016 seconds (68.08 k allocations: 147.970 MiB, 20.10% gc time, 40.75% compilation time)
JLSO.jl
  1.239898 seconds (284.52 k allocations: 20.841 MiB, 8.13% compilation time)
Arrow.jl
  6.199281 seconds (5.44 M allocations: 275.420 MiB, 0.49% gc time, 97.79% compilation time)
JSONTables.jl arraytable
 13.748195 seconds (229.62 M allocations: 5.422 GiB, 18.07% gc time, 0.10% compilation time)
JSONTables.jl objecttable
  0.609692 seconds (95.65 k allocations: 309.042 MiB, 52.71% gc time, 16.55% compilation time)
Second run
CSV.jl
  1.513077 seconds (44.40 M allocations: 1.113 GiB, 10.43% gc time)
Serialization
  0.145169 seconds (15.01 k allocations: 401.492 KiB, 8.69% compilation time)
JDF.jl
  0.110342 seconds (35.13 k allocations: 146.247 MiB, 12.07% gc time)
JLSO.jl
  1.314621 seconds (33.10 k allocations: 7.938 MiB)
Arrow.jl
  0.135883 seconds (81.35 k allocations: 5.427 MiB)
JSONTables.jl arraytable
 20.285565 seconds (229.62 M allocations: 5.422 GiB, 13.46% gc time, 0.08% compilation time)
JSONTables.jl objecttable
  0.486221 seconds (20.71 k allocations: 305.232 MiB, 56.95% gc time, 2.13% compilation time)
0.486400382
groupedbar(
    repeat(["CSV.jl", "Serialization", "JDF.jl", "JLSO.jl", "Arrow.jl", "JSONTables.jl\nobjecttable"],
        inner=2),
    [csvwrite1, csvwrite2, serializewrite1, serializewrite2, jdfwrite1, jdfwrite2,
        jlsowrite1, jlsowrite2, arrowwrite1, arrowwrite2, jsontablesowrite1, jsontablesowrite2],
    group=repeat(["1st", "2nd"], outer=6),
    ylab="Second",
    title="Write Performance\nDataFrame: bigdf\nSize: $(size(bigdf))"
)
(Figure: grouped bar chart of first- and second-run write times per format for bigdf)
data_files = ["bigdf1.csv", "bigdf.bin", "bigdf.arrow", "bigdf1.json", "bigdf2.json"]
df = DataFrame(file=data_files, size=getfield.(stat.(data_files), :size))
append!(df, DataFrame(file="bigdf.jdf", size=reduce((x, y) -> x + y.size,
    stat.(joinpath.("bigdf.jdf", readdir("bigdf.jdf"))),
    init=0)))
sort!(df, :size)
6×2 DataFrame
 Row │ file         size
     │ String       Int64
─────┼────────────────────────
   1 │ bigdf.arrow    1742810
   2 │ bigdf.bin      5198859
   3 │ bigdf.jdf      5221937
   4 │ bigdf1.csv    55086709
   5 │ bigdf2.json   55090710
   6 │ bigdf1.json  124031817
@df df plot(:file, :size / 1024^2, seriestype=:bar, title="Format File Size (MB)", label="Size", ylab="MB")
(Figure: bar chart of file size in MB per format)
println("First run")
println("CSV.jl")
csvread1 = @elapsed @time CSV.read("bigdf1.csv", DataFrame)
println("Serialization")
serializeread1 = @elapsed @time open(deserialize, "bigdf.bin")
println("JDF.jl")
jdfread1 = @elapsed @time JDF.load("bigdf.jdf") |> DataFrame
println("JLSO.jl")
jlsoread1 = @elapsed @time JLSO.load("bigdf.jlso")
println("Arrow.jl")
arrowread1 = @elapsed @time df_tmp = Arrow.Table("bigdf.arrow") |> DataFrame
arrowread1copy = @elapsed @time copy(df_tmp)
println("JSONTables.jl arraytable")
jsontablesaread1 = @elapsed @time open(jsontable, "bigdf1.json")
println("JSONTables.jl objecttable")
jsontablesoread1 = @elapsed @time open(jsontable, "bigdf2.json")
println("Second run")
println("CSV.jl")
csvread2 = @elapsed @time CSV.read("bigdf1.csv", DataFrame)
println("Serialization")
serializeread2 = @elapsed @time open(deserialize, "bigdf.bin")
println("JDF.jl")
jdfread2 = @elapsed @time JDF.load("bigdf.jdf") |> DataFrame
println("JLSO.jl")
jlsoread2 = @elapsed @time JLSO.load("bigdf.jlso")
println("Arrow.jl")
arrowread2 = @elapsed @time df_tmp = Arrow.Table("bigdf.arrow") |> DataFrame
arrowread2copy = @elapsed @time copy(df_tmp)
println("JSONTables.jl arraytable")
jsontablesaread2 = @elapsed @time open(jsontable, "bigdf1.json")
println("JSONTables.jl objecttable")
jsontablesoread2 = @elapsed @time open(jsontable, "bigdf2.json");
First run
CSV.jl
  3.119118 seconds (4.30 M allocations: 227.499 MiB, 0.79% gc time, 125.62% compilation time)
Serialization
  0.414346 seconds (9.50 M allocations: 155.501 MiB, 8.56% gc time, 8.91% compilation time)
JDF.jl
  0.194057 seconds (157.95 k allocations: 157.897 MiB, 8.90% gc time, 87.70% compilation time)
JLSO.jl
  0.345261 seconds (9.52 M allocations: 158.175 MiB, 7.05% gc time, 7.83% compilation time)
Arrow.jl
  0.452379 seconds (550.37 k allocations: 26.341 MiB, 98.36% compilation time)
  0.049306 seconds (14.50 k allocations: 10.266 MiB)
JSONTables.jl arraytable
  6.044243 seconds (271.09 k allocations: 1.839 GiB, 9.92% gc time)
JSONTables.jl objecttable
  0.345980 seconds (7.43 k allocations: 403.816 MiB, 3.63% gc time, 0.02% compilation time)
Second run
CSV.jl
  0.927421 seconds (631.57 k allocations: 43.593 MiB)
Serialization
  0.407057 seconds (9.48 M allocations: 154.596 MiB, 14.75% gc time)
JDF.jl
  0.302952 seconds (77.27 k allocations: 153.772 MiB, 84.55% gc time)
JLSO.jl
  0.343988 seconds (9.50 M allocations: 157.306 MiB, 3.53% gc time)
Arrow.jl
  0.006410 seconds (86.57 k allocations: 3.732 MiB)
  0.061872 seconds (14.50 k allocations: 10.266 MiB, 19.60% gc time)
JSONTables.jl arraytable
  5.956783 seconds (271.09 k allocations: 1.839 GiB, 10.02% gc time)
JSONTables.jl objecttable
  0.331426 seconds (7.08 k allocations: 403.800 MiB, 1.02% gc time)

We exclude the JSONTables.jl arraytable timings from the plot because they are much longer than the rest:

groupedbar(
    repeat(["CSV.jl", "Serialization", "JDF.jl", "JLSO.jl", "Arrow.jl", "Arrow.jl\ncopy", ##"JSON\narraytable",
            "JSON\nobjecttable"], inner=2),
    [csvread1, csvread2, serializeread1, serializeread2, jdfread1, jdfread2, jlsoread1, jlsoread2,
        arrowread1, arrowread2, arrowread1 + arrowread1copy, arrowread2 + arrowread2copy,
        # jsontablesaread1, jsontablesaread2,
        jsontablesoread1, jsontablesoread2],
    group=repeat(["1st", "2nd"], outer=7),
    ylab="Second",
    title="Read Performance\nDataFrame: bigdf\nSize: $(size(bigdf))"
)
(Figure: grouped bar chart of first- and second-run read times per format for bigdf)

Using gzip compression#

A common user requirement is to be able to load and save CSV files that are compressed using gzip. Below we show how this can be accomplished using CodecZlib.jl. The same pattern is applicable to JSONTables.jl compression/decompression. Again, make sure that you do not have a file named df_compress_test.csv.gz in your working directory. We first generate a random data frame.

df = DataFrame(rand(1:10, 10, 1000), :auto)
10×1000 DataFrame (display truncated; columns x1, …, x1000 are Int64 values drawn from 1:10, 900 columns omitted)

GzipCompressorStream and GzipDecompressor come from CodecZlib.jl:

open("df_compress_test.csv.gz", "w") do io
    stream = GzipCompressorStream(io)
    CSV.write(stream, df)
    close(stream)
end
df2 = CSV.File(transcode(GzipDecompressor, Mmap.mmap("df_compress_test.csv.gz"))) |> DataFrame
10×1000 DataFrame (display truncated; columns x1, …, x1000 are Int64 values drawn from 1:10, 900 columns omitted)
df == df2
true
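
As mentioned above, the same pattern works for JSONTables.jl. A minimal sketch (df_compress_test.json.gz is a throwaway file name used only for this illustration):

open("df_compress_test.json.gz", "w") do io
    stream = GzipCompressorStream(io)
    objecttable(stream, df)
    close(stream)
end
df3 = jsontable(String(transcode(GzipDecompressor, read("df_compress_test.json.gz")))) |> DataFrame
df == df3 # true
rm("df_compress_test.json.gz") # clean up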

Using zip files#

Sometimes you may have files compressed inside a zip file. In such a situation you may use ZipFile.jl in conjunction with an appropriate reader to read the files. Here we first create a zip file and then read back its contents into a DataFrame.

df1 = DataFrame(rand(1:10, 3, 4), :auto)
3×4 DataFrame
 Row │ x1     x2     x3     x4
     │ Int64  Int64  Int64  Int64
─────┼────────────────────────────
   1 │     7      7      2      4
   2 │     4      9      3      8
   3 │     8     10      4      8
df2 = DataFrame(rand(1:10, 3, 4), :auto)
3×4 DataFrame
 Row │ x1     x2     x3     x4
     │ Int64  Int64  Int64  Int64
─────┼────────────────────────────
   1 │     9      8      2      3
   2 │    10      5     10      7
   3 │     4      3      2      3

And we show yet another way to write a DataFrame to a CSV file: writing CSV data directly into the zip file.

w = ZipFile.Writer("x.zip")

f1 = ZipFile.addfile(w, "x1.csv")
write(f1, sprint(show, "text/csv", df1))

# write a second CSV file into the zip file, this time compressed with the Deflate method
f2 = ZipFile.addfile(w, "x2.csv", method=ZipFile.Deflate)
write(f2, sprint(show, "text/csv", df2))

close(w)

Now we read back the CSV files stored in the zip file:

z = ZipFile.Reader("x.zip");
# find the index of the file called x1.csv
index_xcsv = findfirst(x -> x.name == "x1.csv", z.files)
# read the x1.csv file from the zip file
df1_2 = CSV.read(read(z.files[index_xcsv]), DataFrame)
3×4 DataFrame
 Row │ x1     x2     x3     x4
     │ Int64  Int64  Int64  Int64
─────┼────────────────────────────
   1 │     7      7      2      4
   2 │     4      9      3      8
   3 │     8     10      4      8
df1_2 == df1
true
# find the index of the file called x2.csv
index_xcsv = findfirst(x -> x.name == "x2.csv", z.files)
# read the x2.csv file from the zip file
df2_2 = CSV.read(read(z.files[index_xcsv]), DataFrame)
3×4 DataFrame
 Row │ x1     x2     x3     x4
     │ Int64  Int64  Int64  Int64
─────┼────────────────────────────
   1 │     9      8      2      3
   2 │    10      5     10      7
   3 │     4      3      2      3
df2_2 == df2
true

Note that once you read a given file from the z object, its stream is exhausted (it has reached its end). Therefore, to read it again you need to close z and open it anew. Also, do not forget to close the zip file once you are done.
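
As a quick check, eof confirms that an already-read entry has been fully consumed (a small sketch; it assumes the entry was fully read above):

eof(z.files[index_xcsv]) # true: x2.csv was read above, so its stream is at its end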

close(z)

Finally, we remove the generated files:

rm("x.arrow")
rm("x.bin")
rm("x.zip")
rm("x.jlso")
rm("x1.csv")
rm("x1.json")
rm("x2.json")
rm("x.jdf", recursive=true)
rm("bigdf.jdf", recursive=true)
rm("df_compress_test.csv.gz")
rm("bigdf1.json")
rm("bigdf1.csv")
rm("bigdf2.json")
rm("bigdf.jlso")
rm("bigdf.bin")
rm("bigdf.arrow")

This notebook was generated using Literate.jl.