Python and asyncio: a closed named pipe is always available for reading
I am working on a script that reads data through a named pipe from another piece of software. I would like to read data only when it is available, and I was trying to use add_reader from asyncio.
I noticed that, on Linux, the reader I registered is called continuously after the pipe is closed. On macOS, this doesn't happen.
This puzzles me, because after the writing end of the pipe has hung up, I would not expect the reading end to be available for reading, especially because clearly there can be no data.
This script illustrates the problem:
#!/usr/bin/env python3
import os, asyncio, threading, time

NAMED_PIPE = 'write.pipe'

# Setup the named pipe
if os.path.exists(NAMED_PIPE):
    os.unlink(NAMED_PIPE)
os.mkfifo(NAMED_PIPE)

loop = asyncio.get_event_loop()

def simulate_write():
    # Open the pipe for writing and write something into it.
    # This simulates another process
    print('waiting for opening pipe for writing')
    with open(NAMED_PIPE, 'w') as write_stream:
        print('writing pipe opened')
        time.sleep(1)
        print('writing some data')
        print('<some data>', file=write_stream)
        time.sleep(1)
    print('exiting simulated write')

async def open_pipe_for_reading():
    print('waiting for opening pipe for reading')
    # This needs to run asynchronously because open will
    # not return until, on the other end, someone tries
    # to write
    return open(NAMED_PIPE)

count = 0

def read_data_block(fd):
    global count
    count += 1
    print('reading data', fd.read())
    if count > 10:
        print('reached maximum number of calls')
        loop.remove_reader(fd.fileno())

# Spawn a thread that will simulate writing
threading.Thread(target=simulate_write).start()

# Get the result of open_pipe_for_reading
stream = loop.run_until_complete(open_pipe_for_reading())
print('reading pipe opened')

# Schedule the reader
loop.add_reader(stream.fileno(), read_data_block, stream)

try:
    loop.run_forever()
except KeyboardInterrupt:
    pass
finally:
    print('closing stream')
    stream.close()
    print('removing pipe')
    os.unlink(NAMED_PIPE)
On OSX, this is the behavior I observe:
waiting for opening pipe for writing
waiting for opening pipe for reading
reading pipe opened
writing pipe opened
writing some data
exiting simulated write
reading data <some data>
^Cclosing stream
removing pipe
While on Linux:
waiting for opening pipe for writing
waiting for opening pipe for reading
reading pipe opened
writing pipe opened
writing some data
exiting simulated write
reading data <some data>
reading data
reading data
reading data
reading data
reading data
reading data
reading data
reading data
reading data
reading data
reached maximum number of calls
^Cclosing stream
removing pipe
So, why is a closed pipe available for reading although it has no data?
Also, in my understanding, add_reader would trigger when the stream can be read from and there is some data to read; is this interpretation correct?
Python and OS versions:
- Python 3.6.4 (MacPorts), macOS High Sierra 10.13.3 (17D102)
- Python 3.6.1 (manually compiled) CentOS Linux release 7.4.1708 (Core)
- Python 3.5.2 (from repo) Linux Mint 18.2 Sonya
1 Answer
#1
In Python, reading empty data is the sign that the socket/pipe has been closed.
data = fd.read()
if not data:
    return
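This EOF behavior can be checked in a minimal standalone sketch, using an anonymous os.pipe in place of the named pipe: once the writer closes its end, the read end stays "readable", delivers any buffered data, and then returns empty bytes immediately instead of blocking.

```python
import os

# Once the write end of a pipe is closed, the read end stays "readable":
# buffered data is delivered first, then reads return b'' (EOF)
# immediately instead of blocking. This is why add_reader keeps firing.
r, w = os.pipe()
os.write(w, b'<some data>')
os.close(w)  # the writer hangs up

first = os.read(r, 1024)   # delivers the buffered data
second = os.read(r, 1024)  # EOF: returns b'' without blocking
os.close(r)

print(first)   # b'<some data>'
print(second)  # b''
```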
Also, please switch the pipe to non-blocking mode:
os.set_blocking(fd.fileno(), False)
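As a sketch of what non-blocking mode changes (note that os.set_blocking takes a raw file descriptor, which is why fileno() is needed for a file object): a read with no data available raises BlockingIOError instead of hanging, while a read after the writer has closed still returns empty bytes, so the EOF check above remains the way to detect hang-up.

```python
import os

r, w = os.pipe()
os.set_blocking(r, False)  # takes a raw fd; use stream.fileno() for a file object

try:
    os.read(r, 1024)       # nothing written yet: raises instead of blocking
    got_error = False
except BlockingIOError:
    got_error = True
print('no data yet:', got_error)

os.write(w, b'x')
data = os.read(r, 1024)    # data is available: returned normally
os.close(w)                # the writer hangs up
eof = os.read(r, 1024)     # b'' -- EOF, even in non-blocking mode
os.close(r)
print(data, eof)
```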