Regex field matching and replacement across a network - Python

iovurdzv  posted on 2023-10-22  in Python
Answers (2) | Views (146)

I have a large CSV (over 1,000,000 rows) on which I need to run a regex search-and-replace. In short, I need to take two columns and find matches between them, then use the matching row to replace the value in a third field. It essentially matches certain components in a network to their upstream components. Here is a simple example:
| OID | assembly | upstream | Field1 |
| -- | -- | -- | -- |
| 1 | abc123 | | 1 |
| 2 | def456 | abc123 | 2 |
| 3 | ghi789 | jkl101 | 3 |
| 4 | jkl101 | | 4 |
This would be the expected result:
| OID | assembly | upstream | Field1 |
| -- | -- | -- | -- |
| 1 | abc123 | | 1 |
| 2 | def456 | abc123 | 1 |
| 3 | ghi789 | jkl101 | 4 |
| 4 | jkl101 | | 4 |
As you can see, any row whose upstream value appears in the assembly column takes the Field1 value of the row with the matching assembly.
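The transformation described above can be sketched with plain dictionaries on the sample rows (the list of dicts here is a hypothetical stand-in for the real CSV rows):

```python
# Minimal sketch of the lookup logic on the sample rows above.
rows = [
    {'OID': 1, 'assembly': 'abc123', 'upstream': '',       'Field1': 1},
    {'OID': 2, 'assembly': 'def456', 'upstream': 'abc123', 'Field1': 2},
    {'OID': 3, 'assembly': 'ghi789', 'upstream': 'jkl101', 'Field1': 3},
    {'OID': 4, 'assembly': 'jkl101', 'upstream': '',       'Field1': 4},
]

# Map each assembly to its Field1 value.
field1_by_assembly = {r['assembly']: r['Field1'] for r in rows}

# A row whose upstream appears as another row's assembly
# inherits that row's Field1 value; other rows keep their own.
for r in rows:
    r['Field1'] = field1_by_assembly.get(r['upstream'], r['Field1'])
```

This assumes assembly values are unique; after the loop, `rows` carries the Field1 values shown in the expected-result table.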
I have code that works well but is very slow (it writes at roughly 15 kB/s), currently using Python's regex module. My question is: what would be a more efficient way to do this? pandas is out of the question because of limited RAM, as are other data formats. I have tried dask in the past but never got it working properly, probably because of my (very) locked-down IT environment, where I cannot access Python's environment path variables.
Here is the code:

import csv
import re

# csv files
input_file = 'L:\\Dev_h\\Device Heirarchy\\fulljoin_device_flow2.csv'
output_file = 'L:\\Dev_h\\Device Heirarchy\\output2.csv'

# output fields
output_fields = ['gs_attached_assembly_guid', 'gs_upstream_aa_guid', 'Field1_num', 'Dev_no', 'gs_guid', 'gs_display_feature_guid', 'field2', 'gs_network_feature_name', 'gs_assembly_guid', 'gs_display_feature_name', 'Field1', 'gs_network_feature_guid', 'OID_']

with open(input_file, 'r', newline='') as in_csv, open(output_file, 'w', newline='') as out_csv:
    reader = csv.DictReader(in_csv)
    writer = csv.DictWriter(out_csv, fieldnames=output_fields)
    writer.writeheader()

    # First pass: map each attached-assembly guid to its Field1_num,
    # then build one big alternation regex over all the guids
    patterns = {row['gs_attached_assembly_guid']: row['Field1_num'] for row in reader}
    pattern = re.compile('|'.join(map(re.escape, patterns.keys())))

    # Map every guid in a comma-separated match to its Field1_num
    # (or an empty string when there is no value for it)
    def search_and_replace(match):
        matched_guids = match.group().split(',')
        replacement_values = []
        for matched_guid in matched_guids:
            if matched_guid in patterns and patterns[matched_guid] != '':
                replacement_values.append(patterns[matched_guid])
            else:
                # Use an empty string instead of the gs_attached_assembly_guid
                replacement_values.append('')
        return ','.join(replacement_values)

    # Second pass: rewind the file and skip the header row
    in_csv.seek(0)
    next(reader)

    for row in reader:
        # Check for matches in the 'gs_upstream_aa_guid' value
        match = pattern.search(row['gs_upstream_aa_guid'])

        # If there is a match, replace the 'Field1' value with the mapped value
        if match:
            row['Field1'] = search_and_replace(match)

        # Write the updated row out to the output CSV
        writer.writerow(row)

print("End")

The question is: how can I speed this up?

ncgqoxb01#

Update

Since you can't use pandas because of memory constraints, the simplest approach is to build a replacement dictionary in a first pass over the CSV, then use it in a second pass to replace the Field1 values. Using your code as a starting point, adapted to the sample data in the question:

output_fields = ['OID', 'assembly', 'upstream', 'Field1']

with open(input_file, 'r', newline='') as in_csv, open(output_file, 'w', newline='') as out_csv:
    reader = csv.DictReader(in_csv)
    writer = csv.DictWriter(out_csv, fieldnames=output_fields)
    writer.writeheader()
    
    # Build replacements dict
    reps = { row['assembly'] : row['Field1'] for row in reader }
    
    # restart loop
    in_csv.seek(0)
    next(reader) # Skip header row
    
    for row in reader:
        # update if required
        # use dict.get to allow keeping the original value when no replacement
        row['Field1'] = reps.get(row['upstream'], row['Field1'])
        # Write the updated row out to the output CSV
        writer.writerow(row)

Output for the sample data:

OID,assembly,upstream,Field1
1,abc123,,1
2,def456,abc123,1
3,ghi789,jkl101,4
4,jkl101,,4
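The same two-pass pattern can be exercised entirely in memory with `io.StringIO` standing in for the real files (a sketch, using the question's sample data):

```python
import csv
import io

# Sample input, mirroring the question's table.
sample = """OID,assembly,upstream,Field1
1,abc123,,1
2,def456,abc123,2
3,ghi789,jkl101,3
4,jkl101,,4
"""

in_csv = io.StringIO(sample)
out_csv = io.StringIO()

reader = csv.DictReader(in_csv)
writer = csv.DictWriter(out_csv, fieldnames=['OID', 'assembly', 'upstream', 'Field1'])
writer.writeheader()

# First pass: build the replacement dict.
reps = {row['assembly']: row['Field1'] for row in reader}

# Second pass: rewind, skip the header, apply replacements.
in_csv.seek(0)
next(reader)
for row in reader:
    row['Field1'] = reps.get(row['upstream'], row['Field1'])
    writer.writerow(row)

print(out_csv.getvalue())
```

For the real file, only the dictionary (one entry per row) is held in memory, not the rows themselves, which is what makes this viable under tight RAM limits.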

Original answer

You could use pandas, with merge to match upstream values against assembly values and pick up the appropriate Field1 value:

import pandas as pd

df = pd.read_csv(input_file)
df['Field1'] = (df
    .merge(df, left_on='upstream', right_on='assembly', how='left')['Field1_y']
    .fillna(df['Field1'])
     # necessary because the presence of NaN after the merge changes type to float
    .astype(int)
)
df.to_csv(output_file, index=False)
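As a quick check, the same merge logic can be run on the question's sample data, constructed inline here rather than read from a file:

```python
import pandas as pd

# Hypothetical in-memory version of the question's sample CSV.
df = pd.DataFrame({
    'OID': [1, 2, 3, 4],
    'assembly': ['abc123', 'def456', 'ghi789', 'jkl101'],
    'upstream': ['', 'abc123', 'jkl101', ''],
    'Field1': [1, 2, 3, 4],
})

# Self-merge: each row's upstream is looked up among assembly values;
# Field1_y is the matched row's Field1, NaN where there is no match.
df['Field1'] = (df
    .merge(df, left_on='upstream', right_on='assembly', how='left')['Field1_y']
    .fillna(df['Field1'])
    .astype(int)
)
```

Note the self-merge assumes assembly values are unique; duplicates would multiply rows in the merge result.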
e0uiprwp2#

Rather than building one *big* regex, you can simply replace

match = pattern.search(row['gs_upstream_aa_guid'])

with

match = row['gs_upstream_aa_guid'] in patterns

Regexes can be fast, but they will never be as fast as checking whether a key exists in a dict, because that lookup is O(1).

  • *O(1)* means that checking for the presence of a key in a dict holding 1 entry is just as fast as checking in a dict holding 1,000,000 entries.
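A rough way to see this with the standard library's `timeit` (the printed timings are illustrative and machine-dependent):

```python
import timeit

small = {str(i): i for i in range(10)}
big = {str(i): i for i in range(1_000_000)}

# A membership check hashes the key once; its cost does not
# grow with the number of entries in the dict.
t_small = timeit.timeit("'5' in d", globals={'d': small}, number=100_000)
t_big = timeit.timeit("'5' in d", globals={'d': big}, number=100_000)
print(f"small dict: {t_small:.4f}s  big dict: {t_big:.4f}s")
```

Both timings come out in the same ballpark, whereas a search through a regex alternation of a million guids must scan the pattern's branches.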
