Ignoring bad data rows in pandas.read_csv() that break the header= keyword

Asked by vbkedwbf on 2022-11-27

I have a series of very messy *.csv files that are being read in by pandas. An example csv is:

Instrument 35392
"Log File Name : station"
"Setup Date (MMDDYY) : 031114"
"Setup Time (HHMMSS) : 073648"
"Starting Date (MMDDYY) : 031114"
"Starting Time (HHMMSS) : 090000"
"Stopping Date (MMDDYY) : 031115"
"Stopping Time (HHMMSS) : 235959"
"Interval (HHMMSS) : 010000"
"Sensor warmup (HHMMSS) : 000200"
"Circltr warmup (HHMMSS) : 000200" 

"Date","Time","","Temp","","SpCond","","Sal","","IBatt",""
"MMDDYY","HHMMSS","","øC","","mS/cm","","ppt","","Volts",""

"Random message here 031114 073721 to 031114 083200"
03/11/14,09:00:00,"",15.85,"",1.408,"",.74,"",6.2,""
03/11/14,10:00:00,"",15.99,"",1.96,"",1.05,"",6.3,""
03/11/14,11:00:00,"",14.2,"",40.8,"",26.12,"",6.2,""
03/11/14,12:00:01,"",14.2,"",41.7,"",26.77,"",6.2,""
03/11/14,13:00:00,"",14.5,"",41.3,"",26.52,"",6.2,""
03/11/14,14:00:00,"",14.96,"",41,"",26.29,"",6.2,""
"message 3"
"message 4"**

I have been using the code below to import the *.csv files, handle the double header rows, pull out the empty columns, and then strip the offending rows with bad data:

DF = pd.read_csv(BADFILE,parse_dates={'Datetime_(ascii)': [0,1]}, sep=",", \
             header=[10,11],na_values=['','na', 'nan nan'], \
             skiprows=[10], encoding='cp1252')

DF = DF.dropna(how="all", axis=1)
DF = DF.dropna(thresh=2)
droplist = ['message', 'Random']
DF = DF[~DF['Datetime_(ascii)'].str.contains('|'.join(droplist))]

DF.head()

Datetime_(ascii)    (Temp, øC)  (SpCond, mS/cm) (Sal, ppt)  (IBatt, Volts)
0   03/11/14 09:00:00   15.85   1.408   0.74    6.2
1   03/11/14 10:00:00   15.99   1.960   1.05    6.3
2   03/11/14 11:00:00   14.20   40.800  26.12   6.2
3   03/11/14 12:00:01   14.20   41.700  26.77   6.2
4   03/11/14 13:00:00   14.50   41.300  26.52   6.2

This worked fine and dandy until I hit a file that had an errant one-row line after the header: "Random message here 031114 073721 to 031114 083200"
The error I receive is:

C:\Users\USER\AppData\Local\Continuum\Anaconda3\lib\site-
    packages\pandas\io\parsers.py in _do_date_conversions(self, names, data)
   1554             data, names = _process_date_conversion(
   1555                 data, self._date_conv, self.parse_dates, self.index_col,
    -> 1556                 self.index_names, names, 
    keep_date_col=self.keep_date_col)
   1557 
   1558         return names, data
    C:\Users\USER\AppData\Local\Continuum\Anaconda3\lib\site-
    packages\pandas\io\parsers.py in _process_date_conversion(data_dict, 
    converter, parse_spec, index_col, index_names, columns, keep_date_col)
   2975     if not keep_date_col:
   2976         for c in list(date_cols):
    -> 2977             data_dict.pop(c)
   2978             new_cols.remove(c)
   2979 
   KeyError: ('Time', 'HHMMSS')

If I delete that one line, the code works fine. Likewise, if I delete the header= line, the code also works fine. However, I want to be able to keep that line, because I am reading in hundreds of these files.
Difficulty: I would prefer not to open each file before the call to pandas.read_csv(), since these files can be rather large, so I don't want to read and save them multiple times! Also, I would prefer a real pandas/pythonic solution that doesn't involve first opening the file as a StringIO buffer just to remove the offending line.


icnyk63a1#

Here is one approach, exploiting the fact that skiprows accepts a callable. The callable only receives the row index being considered, which is a built-in limitation of that parameter.
As such, the callable skip_test() first checks whether the current index is in the set of known indices to skip. If not, it opens the actual file and checks the corresponding row to see whether its contents match.
The skip_test() function is a little hacky in the sense that it does inspect the actual file, although it only reads up through the current row index it is evaluating. It also assumes that the bad lines always begin with the same string ("foo" in the example), but that seems to be a safe assumption given the OP.

# example data
""" foo.csv
uid,a,b,c
0,1,2,3
skip me
1,11,22,33
foo
2,111,222,333 
"""

import pandas as pd

def skip_test(r, fn, fail_on, known):
    if r in known:  # we know we always want to skip these
        return True
    # check whether row index r in the file is a problem line;
    # for efficiency, stop reading once we reach that index
    with open(fn, "r") as f:
        for i, line in enumerate(f):
            if i == r:
                return line.startswith(fail_on)
            if i > r:
                break
    return False

fname = "foo.csv"
fail_str = "foo"
known_skip = [2]
pd.read_csv(fname, sep=",", header=0, 
            skiprows=lambda x: skip_test(x, fname, fail_str, known_skip))
# output
   uid    a    b    c
0    0    1    2    3
1    1   11   22   33
2    2  111  222  333

If you know exactly which line the random message will appear on when it does appear, this will be much faster, since you can tell it not to inspect the file contents for any index past the potential offending row.
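A faster variant (a sketch building on the same example file, not part of the original answer) is to make a single pass over the file up front, collect the offending row indices into a set, and pass that set to skiprows directly, so read_csv never has to call back into Python per row:

```python
import pandas as pd

# recreate the example file from the answer above
csv_text = """uid,a,b,c
0,1,2,3
skip me
1,11,22,33
foo
2,111,222,333
"""
with open("foo.csv", "w") as f:
    f.write(csv_text)

def find_bad_rows(fn, fail_on):
    """One pass over the file: collect indices of lines starting with fail_on."""
    with open(fn, "r") as f:
        return {i for i, line in enumerate(f) if line.startswith(fail_on)}

bad = find_bad_rows("foo.csv", "foo")            # finds the "foo" line
df = pd.read_csv("foo.csv", skiprows=bad | {2})  # also skip the known index 2
print(df)
```

This trades one extra read of the file for avoiding a per-row Python callback, which matters most on large files.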


sczxawaw2#

After some tinkering yesterday, I found a solution and what the underlying issue likely was.
I tried the skip_test() function answer above, but I was still getting errors about the size of the table:

pandas\_libs\parsers.pyx in pandas._libs.parsers.TextReader.read (pandas\_libs\parsers.c:10862)()

pandas\_libs\parsers.pyx in pandas._libs.parsers.TextReader._read_low_memory (pandas\_libs\parsers.c:11138)()

pandas\_libs\parsers.pyx in pandas._libs.parsers.TextReader._read_rows (pandas\_libs\parsers.c:11884)()

pandas\_libs\parsers.pyx in pandas._libs.parsers.TextReader._tokenize_rows (pandas\_libs\parsers.c:11755)()

pandas\_libs\parsers.pyx in pandas._libs.parsers.raise_parser_error (pandas\_libs\parsers.c:28765)()

ParserError: Error tokenizing data. C error: Expected 1 fields in line 14, saw 11

So after playing with skiprows=, I found that I just wasn't getting the behavior I wanted with engine='c'. read_csv() was still determining the size of the file from the first few rows, and some of the single-column rows were still being passed through. It may be that I have some more bad single-column rows in my csv set that I hadn't planned on.
Instead, I create an arbitrarily sized DataFrame as a template. I pull in the entire .csv file, then use logic to strip out the NaN rows.
For example, I know that the largest data table I will encounter will be 10 columns wide. So my call to pandas is:

DF = pd.read_csv(csv_file, sep=',', \
     parse_dates={'Datetime_(ascii)': [0,1]},\
     na_values=['','na', '999999', '#'], engine='c',\
     encoding='cp1252', names=list(range(0,10)))

Then I use these two lines to drop the NaN rows and columns from the DataFrame:

#drop the null columns created by double deliminators
DF = DF.dropna(how="all", axis=1)
DF = DF.dropna(thresh=2)  # drop if we don't have at least 2 cells with real values
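The effect of those two dropna() calls can be seen on a toy frame (a minimal sketch with made-up values, mimicking the all-NaN columns left by the double delimiters and the mostly-empty rows left by message lines):

```python
import numpy as np
import pandas as pd

# Toy parse result: column 1 is all NaN (double delimiters),
# and the "message 3" row has only one real cell.
df = pd.DataFrame({
    0: ["03/11/14 09:00:00", "message 3", "03/11/14 10:00:00"],
    1: [np.nan, np.nan, np.nan],
    2: [15.85, np.nan, 15.99],
})
df = df.dropna(how="all", axis=1)  # drops the all-NaN column 1
df = df.dropna(thresh=2)           # drops the "message 3" row (< 2 real cells)
print(df)
```

Note that thresh=2 keeps only rows with at least two non-NaN cells, which is what filters out the single-column message rows.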

icomxhvb3#

In case anyone comes across this question in the future: pandas has since implemented the on_bad_lines parameter, so you can now solve this problem by using on_bad_lines="skip".
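For example (a minimal sketch; on_bad_lines was added in pandas 1.3, replacing the older error_bad_lines/warn_bad_lines flags), a row with too many fields is silently dropped instead of raising a ParserError:

```python
import io
import pandas as pd

data = io.StringIO(
    "a,b,c\n"
    "1,2,3\n"
    "bad line with,too,many,fields,here\n"  # 5 fields vs. 3 columns: skipped
    "4,5,6\n"
)
df = pd.read_csv(data, on_bad_lines="skip")
print(df)
```

Use on_bad_lines="warn" instead if you want a warning printed for each dropped row while debugging which lines are bad.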
