Create a Table in Hive with a Single File

There are many possible solutions:

1) Add distribute by partition key at the end of your query. Without it, each reducer may receive rows from many partitions and will create a separate file for each one; distributing by the partition key sends all rows of a partition to a single reducer, which reduces both the number of files and memory consumption. The hive.exec.reducers.bytes.per.reducer setting defines how much data each reducer will process.
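As a sketch, an insert into a table partitioned by a load_date column could look like this (the table and column names are illustrative, not from the original post):

```sql
set hive.exec.reducers.bytes.per.reducer=500000000; -- ~500 MB of input per reducer

insert overwrite table target_table partition (load_date)
select col1, col2, load_date
from source_table
distribute by load_date; -- all rows of one partition go to the same reducer
```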

2) Simple, and quite good if there is not too much data: add an order by clause to force a single reducer, or increase hive.exec.reducers.bytes.per.reducer=500000000; -- 500 MB per reducer. This single-reducer approach only suits small datasets; it will run slowly if there is a lot of data.
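A minimal sketch of the single-reducer variant, again with illustrative names:

```sql
insert overwrite table target_table
select col1, col2
from source_table
order by col1; -- global ordering forces a single reducer, hence a single output file
```

Note that the sort itself is a side effect here; the point is that a total order can only be produced by one reducer.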

If your task is map-only, consider options 3-5 instead:

3) If running on MapReduce, switch on merge:

set hive.merge.mapfiles=true;
set hive.merge.mapredfiles=true;
set hive.merge.size.per.task=500000000;  --Size of merged files at the end of the job
set hive.merge.smallfiles.avgsize=500000000; -- When the average output file size of a job is less than this number, Hive starts an additional map-reduce job to merge the output files into bigger files

4) When running on Tez:

set hive.merge.tezfiles=true; 
set hive.merge.size.per.task=500000000;
set hive.merge.smallfiles.avgsize=500000000;

5) For ORC files you can merge files efficiently using this command:
ALTER TABLE T [PARTITION partition_spec] CONCATENATE; -- ORC only; merges small files at the stripe level without re-serializing the data
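A concrete invocation for a hypothetical table partitioned by load_date (names are illustrative):

```sql
-- merge the small ORC files of one partition in place
ALTER TABLE target_table PARTITION (load_date='2023-01-01') CONCATENATE;
```

Because the merge happens at the ORC stripe level, this is much cheaper than rewriting the partition with an insert overwrite.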
