Your data file is corrupted. If that character really is a U+00AD SOFT HYPHEN, then you are missing a 0xC2 byte:
>>> '\u00ad'.encode('utf8')
b'\xc2\xad'
Out of all the possible UTF-8 encodings that end in 0xAD, a soft hyphen does make the most sense. However, it points at a dataset that may be missing other bytes too; you just happened to hit one that matters.
I would go back to the source of this dataset and verify that the file was not corrupted when it was downloaded. Otherwise, using errors='replace' is a viable work-around, provided no delimiters (tabs, newlines, etc.) are missing.
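As a minimal standalone sketch (reusing the byte sequence that shows up later in the file), this is how the errors='replace' work-around behaves on the truncated sequence:

```python
# A lone 0xAD is an invalid UTF-8 start byte, so strict decoding raises,
# while errors='replace' substitutes U+FFFD REPLACEMENT CHARACTER and
# keeps the rest of the data intact.
data = b'SUPPLEMENTAL DISCLOSURE OF NON\xadCASH'

try:
    data.decode('utf8')
except UnicodeDecodeError as exc:
    print(exc)  # 'utf-8' codec can't decode byte 0xad ...

text = data.decode('utf8', errors='replace')
print(text)  # SUPPLEMENTAL DISCLOSURE OF NON�CASH
```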
Another possibility is that the SEC is actually using a different encoding for the file. For example, in Windows Codepage 1252 and in Latin-1, 0xAD is the correct encoding of a soft hyphen. And indeed, when I download the same dataset directly (warning, large zipped file linked) and open tag.txt, I can't decode the data as UTF-8:
>>> open('/tmp/2017q1/tag.txt', encoding='utf8').read()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/.../lib/python3.6/codecs.py", line 321, in decode
    (result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xad in position 3583587: invalid start byte
>>> from pprint import pprint
>>> f = open('/tmp/2017q1/tag.txt', 'rb')
>>> f.seek(3583550)
3583550
>>> pprint(f.read(100))
(b'1\t1\t\t\t\tSUPPLEMENTAL DISCLOSURE OF NON\xadCASH INVESTING AND FINANCING A'
b'CTIVITIES:\t\nProceedsFromSaleOfIn')
There are two such non-ASCII characters in the file:
>>> f.seek(0)
0
>>> pprint([l for l in f if any(b > 127 for b in l)])
[b'SupplementalDisclosureOfNoncashInvestingAndFinancingActivitiesAbstract\t0'
b'001654954-17-000551\t1\t1\t\t\t\tSUPPLEMENTAL DISCLOSURE OF NON\xadCASH I'
b'NVESTING AND FINANCING ACTIVITIES:\t\n',
b'HotelKranichhheMember\t0001558370-17-001446\t1\t0\tmember\tD\t\tHotel Krani'
b'chhhe [Member]\tRepresents information pertaining to Hotel Kranichh\xf6h'
b'e.\n']
Hotel Kranichh\xf6he decoded as Latin-1 gives Hotel Kranichhöhe.
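A quick standalone check confirms the Latin-1 reading:

```python
# 0xF6 is ö in Latin-1 (and in cp1252), so decoding the raw bytes
# recovers the intended hotel name.
raw = b'Hotel Kranichh\xf6he'
print(raw.decode('latin-1'))  # Hotel Kranichhöhe
```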
The file also contains a number of 0x1C / 0x1D pairs:
>>> f.seek(0)
0
>>> quotes = [l for l in f if any(b in {0x1C, 0x1D} for b in l)]
>>> quotes[0].split(b'\t')[-1][50:130]
b'Temporary Payroll Tax Cut Continuation Act of 2011 (\x1cTCCA\x1d) recognized during th'
>>> quotes[1].split(b'\t')[-1][50:130]
b'ributory defined benefit pension plan (the \x1cAetna Pension Plan\x1d) to allow certai'
I'm betting those are really U+201C LEFT DOUBLE QUOTATION MARK and U+201D RIGHT DOUBLE QUOTATION MARK characters; note the 1C and 1D parts. It almost seems as if their encoder took UTF-16 and stripped out all the high bytes, rather than encoding to UTF-8 properly!
There is no codec shipping with Python that would encode '\u201C\u201D' to b'\x1C\x1D', which makes it all the more likely that the SEC botched the encoding process somewhere. In fact, there are also 0x13 and 0x14 characters that are probably en and em dashes (U+2013 and U+2014), as well as 0x19 bytes that are almost certainly single quotes (U+2019). All that is missing to complete the picture is a 0x18 byte to represent U+2018.
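That suspicion is easy to reproduce in a few lines (a sketch of the suspected mangling, not the SEC's actual pipeline): encode to big-endian UTF-16 without a BOM and keep only the low bytes.

```python
# Encode to UTF-16-BE and drop every high (even-indexed) byte; ASCII
# survives untouched, while the curly quotes, dashes and apostrophe
# collapse to exactly the C0 control bytes seen in tag.txt.
text = '\u201cTCCA\u201d \u2013 \u2014 \u2019'
mangled = text.encode('utf-16-be')[1::2]
print(mangled)  # b'\x1cTCCA\x1d \x13 \x14 \x19'
```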
If we assume that the encoding is broken, we can attempt to repair it. The following code reads the file and fixes the quote issues, assuming that the rest of the data does not use characters outside of Latin-1 apart from the quotes and dashes:
_map = {
    # dashes
    0x13: '\u2013', 0x14: '\u2014',
    # single quotes
    0x18: '\u2018', 0x19: '\u2019',
    # double quotes
    0x1c: '\u201c', 0x1d: '\u201d',
}

def repair(line, _map=_map):
    """Repair mis-encoded SEC data. Assumes line was decoded as Latin-1"""
    return line.translate(_map)
Then apply it to the lines you read:
with open(filename, 'r', encoding='latin-1') as f:
    repaired = map(repair, f)
    fields = next(repaired).strip().split('\t')
    for line in repaired:
        yield process_tag_record(fields, line)
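To sanity-check the idea on one of the quoted fragments shown earlier (the translation table is repeated inline so the snippet runs on its own):

```python
# str.translate() with a {codepoint: replacement} dict restores the
# characters the broken encoder collapsed into C0 control bytes.
_map = {0x13: '\u2013', 0x14: '\u2014', 0x18: '\u2018', 0x19: '\u2019',
        0x1c: '\u201c', 0x1d: '\u201d'}
line = b'Act of 2011 (\x1cTCCA\x1d) recognized'.decode('latin-1')
print(line.translate(_map))  # Act of 2011 (“TCCA”) recognized
```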
Separately, addressing your posted code: you are making Python work harder than it needs to. Don't use codecs.open(); that's legacy code with known issues, and it is slower than the newer Python 3 I/O layer. Just use open(). Don't use f.readlines() either; you don't need to read the whole file into a list here. Just iterate over the file directly:
def tags(filename):
    """Yield Tag instances from tag.txt."""
    with open(filename, 'r', encoding='utf-8', errors='strict') as f:
        fields = next(f).strip().split('\t')
        for line in f:
            yield process_tag_record(fields, line)
If process_tag_record also splits on tabs, use a csv.reader() object and avoid splitting each line manually:
import csv

def tags(filename):
    """Yield Tag instances from tag.txt."""
    with open(filename, 'r', encoding='utf-8', errors='strict') as f:
        reader = csv.reader(f, delimiter='\t')
        fields = next(reader)
        for row in reader:
            yield process_tag_record(fields, row)
If process_tag_record combines the fields list with the values in row to form a dictionary, just use csv.DictReader() instead:
def tags(filename):
    """Yield Tag instances from tag.txt."""
    with open(filename, 'r', encoding='utf-8', errors='strict') as f:
        reader = csv.DictReader(f, delimiter='\t')
        # first row is used as keys for the dictionary, no need to read fields manually.
        yield from reader
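For reference, here is what csv.DictReader() yields per row, using a made-up two-line sample (the column names are illustrative, not the real tag.txt header):

```python
import csv
import io

# Hypothetical tab-separated sample standing in for tag.txt.
sample = 'tag\tversion\tcustom\nAssets\tus-gaap/2016\t0\n'
reader = csv.DictReader(io.StringIO(sample), delimiter='\t')
for row in reader:
    # Each row maps header names to that line's values.
    print(row['tag'], row['version'], row['custom'])  # Assets us-gaap/2016 0
```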