Use the MSCK REPAIR TABLE command:

MSCK [REPAIR] TABLE tablename;

or, if you are running Hive on Amazon EMR, its equivalent:

ALTER TABLE tablename RECOVER PARTITIONS;

Both commands scan the table's storage location and add any partition directories that exist in the filesystem but are missing from the metastore. You can read more details about both commands in the Hive documentation under RECOVER PARTITIONS.
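As a sketch of the typical workflow (the table name, path, and partition column below are hypothetical, chosen just for illustration):

```sql
-- An external partitioned table; data lives under /data/sales
CREATE EXTERNAL TABLE sales (id INT, amount DOUBLE)
PARTITIONED BY (dt STRING)
LOCATION '/data/sales';

-- Suppose a directory like /data/sales/dt=2020-01-01/ is written
-- directly to HDFS (e.g. by a Spark job or distcp). The metastore
-- does not know about it yet, so Hive queries will not see that data.

-- Register all unregistered partition directories in one step:
MSCK REPAIR TABLE sales;

-- On Amazon EMR, the equivalent would be:
-- ALTER TABLE sales RECOVER PARTITIONS;

-- The newly discovered partitions should now be listed:
SHOW PARTITIONS sales;
```

Note that partition directories must follow the key=value naming convention (e.g. dt=2020-01-01) for MSCK REPAIR TABLE to discover them.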