Python boto: list the contents of a specific directory in a bucket

vktxenjb asked on 2023-01-04 in Python

I only have S3 access to a specific directory inside an S3 bucket.
For example, with the s3cmd command, if I try to list the whole bucket:

$ s3cmd ls s3://bucket-name

I get an error: Access to bucket 'my-bucket-url' was denied
But if I try to access a specific directory inside the bucket, I can see its contents:

$ s3cmd ls s3://bucket-name/dir-in-bucket

Now I want to connect to the S3 bucket with Python boto, along the lines of:

bucket = conn.get_bucket('bucket-name')

I get an error: boto.exception.S3ResponseError: S3ResponseError: 403 Forbidden
But if I try:

bucket = conn.get_bucket('bucket-name/dir-in-bucket')

the script stalls for about 10 seconds and then prints an error. Below is the full traceback. Any idea how to proceed?
Note: the question is about the boto version 2 module, not boto3.

Traceback (most recent call last):
  File "test_s3.py", line 7, in <module>
    bucket = conn.get_bucket('bucket-name/dir-name')
  File "/usr/local/lib/python2.7/dist-packages/boto/s3/connection.py", line 471, in get_bucket
    return self.head_bucket(bucket_name, headers=headers)
  File "/usr/local/lib/python2.7/dist-packages/boto/s3/connection.py", line 490, in head_bucket
    response = self.make_request('HEAD', bucket_name, headers=headers)
  File "/usr/local/lib/python2.7/dist-packages/boto/s3/connection.py", line 633, in make_request
    retry_handler=retry_handler
  File "/usr/local/lib/python2.7/dist-packages/boto/connection.py", line 1046, in make_request
    retry_handler=retry_handler)
  File "/usr/local/lib/python2.7/dist-packages/boto/connection.py", line 922, in _mexe
    request.body, request.headers)
  File "/usr/lib/python2.7/httplib.py", line 958, in request
    self._send_request(method, url, body, headers)
  File "/usr/lib/python2.7/httplib.py", line 992, in _send_request
    self.endheaders(body)
  File "/usr/lib/python2.7/httplib.py", line 954, in endheaders
    self._send_output(message_body)
  File "/usr/lib/python2.7/httplib.py", line 814, in _send_output
    self.send(msg)
  File "/usr/lib/python2.7/httplib.py", line 776, in send
    self.connect()
  File "/usr/lib/python2.7/httplib.py", line 1157, in connect
    self.timeout, self.source_address)
  File "/usr/lib/python2.7/socket.py", line 553, in create_connection
    for res in getaddrinfo(host, port, 0, SOCK_STREAM):
socket.gaierror: [Errno -2] Name or service not known

s5a0g9ez1#

For boto3:

import boto3

s3 = boto3.resource('s3')
my_bucket = s3.Bucket('my_bucket_name')

for object_summary in my_bucket.objects.filter(Prefix="dir_name/"):
    print(object_summary.key)

fnx2tebb2#

By default, when you call get_bucket in boto, it tries to verify that you actually have access to the bucket by sending a HEAD request to the bucket URL. In this case you don't want boto to do that, because you don't have access to the bucket itself.

bucket = conn.get_bucket('my-bucket-url', validate=False)

Then you can list the objects like this:

for key in bucket.list(prefix='dir-in-bucket'):
    print(key.name)  # or do whatever you need with the key

If you still get a 403 error, try adding a trailing slash to the prefix:

for key in bucket.list(prefix='dir-in-bucket/'):
    print(key.name)  # or do whatever you need with the key
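
Putting it together, a minimal boto 2 sketch (the bucket name, prefix, and credential variables below are placeholders, not values from the question):

import boto

conn = boto.connect_s3(AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)
# validate=False skips the HEAD request that would need bucket-level permissions
bucket = conn.get_bucket('bucket-name', validate=False)

for key in bucket.list(prefix='dir-in-bucket/'):
    print(key.name)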

Note: this answer was written for the boto version 2 module, which is outdated by now. As of 2020, boto3 is the standard module for working with AWS; for more information see this question: What is the difference between the AWS boto and boto3


llmtgqce3#

Boto3 client:

import boto3

_BUCKET_NAME = 'mybucket'
_PREFIX = 'subfolder/'

client = boto3.client('s3', aws_access_key_id=ACCESS_KEY,
                            aws_secret_access_key=SECRET_KEY)

def ListFiles(client):
    """List files in specific S3 URL"""
    response = client.list_objects(Bucket=_BUCKET_NAME, Prefix=_PREFIX)
    for content in response.get('Contents', []):
        yield content.get('Key')

file_list = ListFiles(client)
for file in file_list:
    print('File found: %s' % file)

Using a session (the paginator below also handles listings longer than the 1,000 keys a single list_objects call can return):

from boto3.session import Session

_BUCKET_NAME = 'mybucket'
_PREFIX = 'subfolder/'

session = Session(aws_access_key_id=ACCESS_KEY,
                  aws_secret_access_key=SECRET_KEY)

client = session.client('s3')

def ListFilesV1(client, bucket, prefix=''):
    """List files in specific S3 URL"""
    paginator = client.get_paginator('list_objects')
    for result in paginator.paginate(Bucket=bucket, Prefix=prefix,
                                     Delimiter='/'):
        for content in result.get('Contents', []):
            yield content.get('Key')

file_list = ListFilesV1(client, _BUCKET_NAME, prefix=_PREFIX)
for file in file_list:
    print('File found: %s' % file)

f45qwnt84#

I just ran into the same problem; this code solves it:

import boto3

s3 = boto3.resource("s3")
s3_bucket = s3.Bucket("bucket-name")
dir = "dir-in-bucket"
files_in_s3 = [f.key.split(dir + "/")[1]
               for f in s3_bucket.objects.filter(Prefix=dir).all()]

lo8azlld5#

The following code lists all files under a specific directory in an S3 bucket:

import boto3

s3 = boto3.client('s3')

def get_all_s3_keys(s3_path):
    """
    Get a list of all keys in an S3 bucket.

    :param s3_path: Path of S3 dir.
    """
    keys = []

    if not s3_path.startswith('s3://'):
        s3_path = 's3://' + s3_path

    bucket = s3_path.split('//')[1].split('/')[0]
    prefix = '/'.join(s3_path.split('//')[1].split('/')[1:])

    kwargs = {'Bucket': bucket, 'Prefix': prefix}
    while True:
        resp = s3.list_objects_v2(**kwargs)
        for obj in resp.get('Contents', []):  # 'Contents' is absent when nothing matches the prefix
            keys.append(obj['Key'])

        try:
            kwargs['ContinuationToken'] = resp['NextContinuationToken']
        except KeyError:
            break

    return keys
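
Usage would then look something like this (the bucket name and prefix are placeholders):

keys = get_all_s3_keys('bucket-name/dir-in-bucket/')
for key in keys:
    print(key)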

hsvhsicv6#

This can be done as follows:

import boto3

s3_client = boto3.client('s3')
objects = s3_client.list_objects_v2(Bucket='bucket_name')
for obj in objects['Contents']:
  print(obj['Key'])
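
Note that this lists the whole bucket, which the original poster is not allowed to do; to restrict the listing to one directory, pass a Prefix (a minimal variant, with a placeholder prefix):

objects = s3_client.list_objects_v2(Bucket='bucket_name', Prefix='dir-in-bucket/')
for obj in objects.get('Contents', []):
    print(obj['Key'])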

xj3cbfub7#

If you want to list all objects under a folder in the bucket, you can specify it when listing:

import boto
conn = boto.connect_s3(AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)
bucket = conn.get_bucket(AWS_BUCKET_NAME)
for file in bucket.list("FOLDER_NAME/", "/"):
    print(file.name)  # with a delimiter, the listing yields Key objects plus Prefix objects for subfolders

oxosxuxt8#

The simplest way to list objects with a specific prefix in S3 is to use awswrangler:

import awswrangler as wr
wr.s3.list_objects("s3://bucket_name/some/prefix/")

This returns a list of the objects stored under some/prefix/.
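
In my understanding, the result is a plain Python list of s3:// paths, so it can be iterated directly (bucket name and prefix are placeholders):

for path in wr.s3.list_objects("s3://bucket_name/some/prefix/"):
    print(path)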
