Character encoding issue when reading Hive ORC table files

ijnw1ujt · asked 2021-06-26 · in Hive

I'm facing a (to me) baffling problem while trying to read ORC files. By default, Hive ORC files are encoded as UTF-8, or at least they should be. I made a local copy of an ORC file and am trying to read it in Java.
I can read the file successfully, but the output contains some unwanted characters:

When I query the table in Hive, there are no unwanted characters:

Can anyone help? I have tried decoding and encoding in various combinations, such as (ISO-8859-1 to UTF-8), (UTF-8 to ISO-8859-1), (ISO-8859-1 to UTF-16), and so on.
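For reference, this kind of encode/decode mismatch can be reproduced with the JDK alone. A minimal sketch (the class name and sample string are illustrative, not the actual table data) showing how UTF-8 bytes misread as ISO-8859-1 produce exactly this sort of stray-character output, and how re-encoding reverses it:

```java
import java.nio.charset.StandardCharsets;

public class MojibakeDemo {
    public static void main(String[] args) {
        String s = "café";
        // Encode correctly as UTF-8 ("é" becomes the two bytes 0xC3 0xA9)...
        byte[] utf8 = s.getBytes(StandardCharsets.UTF_8);
        // ...then wrongly decode those bytes as ISO-8859-1:
        String wrong = new String(utf8, StandardCharsets.ISO_8859_1);
        System.out.println(wrong); // cafÃ©  <- the typical "unwanted characters"
        // Reversing the mistake (re-encode as ISO-8859-1, decode as UTF-8)
        // recovers the original string:
        String fixed = new String(wrong.getBytes(StandardCharsets.ISO_8859_1),
                StandardCharsets.UTF_8);
        System.out.println(fixed); // café
    }
}
```

If the stray characters look like `Ã©`, `â€™`, etc., the data itself is valid UTF-8 and the problem is on the decoding/writing side, not in the ORC file.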
Edit:
I am reading the ORC file with the following Java code:

import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.Charset;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hive.ql.io.orc.OrcFile;
import org.apache.hadoop.hive.ql.io.orc.Reader;
import org.apache.hadoop.hive.ql.io.orc.RecordReader;
import org.apache.hadoop.hive.serde2.objectinspector.StructField;
import org.apache.hadoop.hive.serde2.objectinspector.StructObjectInspector;

public class OrcFormat {
    public static void main(String[] argv) {
        // The JVM default charset drives every charset-free conversion below.
        System.out.println(System.getProperty("file.encoding"));
        System.out.println(Charset.defaultCharset().name());

        try {
            Configuration conf = new Configuration();
            Utils.createFile("C:/path/target", "opfile.txt", "UTF-8");
            Reader reader = OrcFile.createReader(new Path("C:/input/000000_0"),
                    OrcFile.readerOptions(conf));

            StructObjectInspector inspector = (StructObjectInspector) reader.getObjectInspector();

            // Print any user metadata stored in the ORC footer.
            List<String> keys = reader.getMetadataKeys();
            for (String key : keys) {
                System.out.println("Key:" + key + ",Value:" + reader.getMetadataValue(key));
            }

            RecordReader records = reader.rows();
            Object row = null;

            // Print the column types of the schema.
            List<? extends StructField> fields = inspector.getAllStructFieldRefs();
            for (StructField field : fields) {
                System.out.print(field.getFieldObjectInspector().getTypeName() + '\t');
            }
            System.out.println();

            int rCnt = 0;
            while (records.hasNext()) {
                row = records.next(row);
                List<Object> valueList = inspector.getStructFieldsDataAsList(row);
                StringBuilder out = new StringBuilder();

                for (Object field : valueList) {
                    if (field != null) {
                        out.append(field);
                    }
                    out.append('\t');
                }
                rCnt++;
                out.append('\n');

                byte[] outA = convertEncoding(out.toString(), "UTF-8", "UTF-8");
                Utils.writeToFile(outA, "C:/path/target", "opfile.txt", "UTF-8");
                if (rCnt < 10) {
                    System.out.println(out);
                    // Decode with an explicit charset; new String(outA) would
                    // use the platform default again.
                    System.out.println(new String(outA, Charset.forName("UTF-8")));
                } else {
                    break;
                }
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public static byte[] convertEncoding(String s, String inCharset, String outCharset) {
        Charset inC = Charset.forName(inCharset);
        Charset outC = Charset.forName(outCharset);
        // Bug fix: s.getBytes() with no argument encodes with the platform
        // default charset (e.g. windows-1252 on Windows), not inCharset --
        // a likely source of the stray characters.
        ByteBuffer inpBuffer = ByteBuffer.wrap(s.getBytes(inC));
        CharBuffer data = inC.decode(inpBuffer);

        ByteBuffer opBuffer = outC.encode(data);
        // Copy only the valid bytes; opBuffer.array() can contain unused
        // trailing capacity beyond the encoded length.
        byte[] opData = new byte[opBuffer.remaining()];
        opBuffer.get(opData);
        return opData;
    }
}
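Independent of the ORC reader, a common pitfall on Windows is writing the output file through an API that falls back to the platform default charset (`Utils.writeToFile` above is the asker's own helper, so its behavior is unknown). A minimal JDK-only sketch (hypothetical class and file names) that pins the output encoding explicitly, so the file is UTF-8 regardless of `file.encoding`:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class WriteUtf8Demo {
    public static void main(String[] args) throws IOException {
        Path out = Files.createTempFile("opfile", ".txt");
        // Pass the charset explicitly instead of relying on the default:
        Files.writeString(out, "café\n", StandardCharsets.UTF_8);
        // Verify on the byte level: "é" must be the UTF-8 pair 0xC3 0xA9,
        // so "café\n" is 6 bytes, not 5.
        byte[] bytes = Files.readAllBytes(out);
        System.out.println(bytes.length); // 6
        Files.delete(out);
    }
}
```

Checking the written file with a hex viewer (rather than a console, whose own encoding can lie) is the quickest way to tell whether the corruption happens on read or on write.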
