How do I read a CSV with a broken header in pandas?

Asked by kpbpu008 on 2023-04-03

I'm having trouble opening a CSV file with pandas in Jupyter. I then tried opening it in Visual Studio, but that doesn't work either. What am I missing?
Code in Jupyter:

path = 'data/DATA_vozila_RAW.csv'
df = pd.read_csv(path)

I also tried adding encoding="Latin-1" to the call.
Code in Visual Studio:

data = pd.read_csv('DATA_vozila_RAW.csv', encoding="Latin-1", delimiter=",")
print(data)

Data: (screenshot of the CSV contents; image not reproduced)
Error:
ParserError                               Traceback (most recent call last)
Cell In[54], line 1
----> 1 df = pd.read_csv(path)

File ~\Desktop\analitika_podatkov\work_dir\python-analitika-public\.venv\lib\site-packages\pandas\util\_decorators.py:211, in deprecate_kwarg.<locals>._deprecate_kwarg.<locals>.wrapper(*args, **kwargs)
    209     else:
    210         kwargs[new_arg_name] = new_arg_value
--> 211 return func(*args, **kwargs)

File ~\Desktop\analitika_podatkov\work_dir\python-analitika-public\.venv\lib\site-packages\pandas\util\_decorators.py:331, in deprecate_nonkeyword_arguments.<locals>.decorate.<locals>.wrapper(*args, **kwargs)
    325 if len(args) > num_allow_args:
    326     warnings.warn(
    327         msg.format(arguments=_format_argument_list(allow_args)),
    328         FutureWarning,
    329         stacklevel=find_stack_level(),
    330     )
--> 331 return func(*args, **kwargs)

File ~\Desktop\analitika_podatkov\work_dir\python-analitika-public\.venv\lib\site-packages\pandas\io\parsers\readers.py:950, in read_csv(filepath_or_buffer, sep, delimiter, header, names, index_col, usecols, squeeze, prefix, mangle_dupe_cols, dtype, engine, converters, true_values, false_values, skipinitialspace, skiprows, skipfooter, nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col, date_parser, dayfirst, cache_dates, iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, doublequote, escapechar, comment, encoding, encoding_errors, dialect, error_bad_lines, warn_bad_lines, on_bad_lines, delim_whitespace, low_memory, memory_map, float_precision, storage_options)
    935 kwds_defaults = _refine_defaults_read(
    936     dialect,
    937     delimiter,
   (...)
    946     defaults={"delimiter": ","},
    947 )
    948 kwds.update(kwds_defaults)
--> 950 return _read(filepath_or_buffer, kwds)

File ~\Desktop\analitika_podatkov\work_dir\python-analitika-public\.venv\lib\site-packages\pandas\io\parsers\readers.py:611, in _read(filepath_or_buffer, kwds)
    608     return parser
    610 with parser:
--> 611     return parser.read(nrows)

File ~\Desktop\analitika_podatkov\work_dir\python-analitika-public\.venv\lib\site-packages\pandas\io\parsers\readers.py:1778, in TextFileReader.read(self, nrows)
   1771 nrows = validate_integer("nrows", nrows)
   1772 try:
   1773     # error: "ParserBase" has no attribute "read"
   1774     (
   1775         index,
   1776         columns,
   1777         col_dict,
-> 1778     ) = self._engine.read(  # type: ignore[attr-defined]
   1779         nrows
   1780     )
   1781 except Exception:
   1782     self.close()

File ~\Desktop\analitika_podatkov\work_dir\python-analitika-public\.venv\lib\site-packages\pandas\io\parsers\c_parser_wrapper.py:230, in CParserWrapper.read(self, nrows)
    228 try:
    229     if self.low_memory:
--> 230         chunks = self._reader.read_low_memory(nrows)
    231         # destructive to chunks
    232         data = _concatenate_chunks(chunks)

File ~\Desktop\analitika_podatkov\work_dir\python-analitika-public\.venv\lib\site-packages\pandas\_libs\parsers.pyx:808, in pandas._libs.parsers.TextReader.read_low_memory()

File ~\Desktop\analitika_podatkov\work_dir\python-analitika-public\.venv\lib\site-packages\pandas\_libs\parsers.pyx:866, in pandas._libs.parsers.TextReader._read_rows()

File ~\Desktop\analitika_podatkov\work_dir\python-analitika-public\.venv\lib\site-packages\pandas\_libs\parsers.pyx:852, in pandas._libs.parsers.TextReader._tokenize_rows()

File ~\Desktop\analitika_podatkov\work_dir\python-analitika-public\.venv\lib\site-packages\pandas\_libs\parsers.pyx:1973, in pandas._libs.parsers.raise_parser_error()

ParserError: Error tokenizing data. C error: Expected 1 fields in line 3, saw 2

Answer 1 (by htzpubme):

Your CSV file's header appears to be broken/split across *two separate lines*.
If that is the case, you can start by building a list of header names and passing it to `names` in `read_csv`:

# Stitch the header, which is split across the first two lines, back into one list of names.
with open("DATA_vozila_RAW.csv", "r") as csv_file:
    headers = " ".join([line.strip() for line in csv_file.readlines()[:2]]).split(";")

# Skip the two broken header lines and apply the reconstructed names;
# .iloc[:, 2:] then drops the first two columns.
data = pd.read_csv("DATA_vozila_RAW.csv", encoding="Latin-1", delimiter=";",
                   skiprows=2, header=None, names=headers).iloc[:, 2:]
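Whether this approach works can be checked on a small synthetic file. The sketch below is an assumption about the real data (the file name, header layout, and `;` delimiter are all placeholders, since `DATA_vozila_RAW.csv` is not shown): it writes a CSV whose header is split across two lines, stitches the names back together, and reads the body.

```python
import pandas as pd

# Synthetic stand-in for the broken file: the intended header
# "A;B;C D;E;F" has been split across the first two physical lines.
with open("broken_header.csv", "w", encoding="latin-1") as f:
    f.write("A;B;C\n"          # first half of the header
            "D;E;F\n"          # second half of the header
            "1;2;3;4;5\n"      # data rows
            "6;7;8;9;10\n")

# Join the two header lines into one list of column names.
with open("broken_header.csv", "r", encoding="latin-1") as csv_file:
    headers = " ".join([line.strip() for line in csv_file.readlines()[:2]]).split(";")
# headers is now ['A', 'B', 'C D', 'E', 'F']

# Skip the two broken header lines and apply the reconstructed names.
data = pd.read_csv("broken_header.csv", encoding="latin-1", delimiter=";",
                   skiprows=2, header=None, names=headers)
print(data.shape)  # (2, 5)
```

Note that joining the two lines with a space means a name cut mid-word comes back with a space in it (`'C D'` above); if the header was broken inside a single column name, joining with `""` instead may be closer to the original.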

Output (in Jupyter): (screenshot not reproduced)
