
python-parallel-programming-cookbook-cn's Introduction

The "Python Parallel Programming Cookbook" Translation Project

Read online: http://python-parallel-programmning-cookbook.readthedocs.io/

Read the Docs build status: Documentation Status · CircleCI status: CircleCI

This book covers threads, processes, and asynchronous programming in Python, and is a solid reference on parallel programming in Python.

The translator writes mostly Python and works as an SRE (DevOps). During university I did paid freelance translation of many articles for CSDN, and translated many articles for ImportNew and Jobbole for free. This book is being translated in my spare time, so I cannot guarantee a completion date, though it should probably be done before Christmas.

Given time pressure, frequent overtime, bad moods, and my limited depth of expertise, errors in the text are inevitable. Readers are encouraged to run the code in the book themselves; if you find mistakes or typos, PRs are welcome, and Issues work too.

For the most part this translation stays faithful to the original, but the translator occasionally cannot resist showing off a little or airing personal views in the text; such passages are generally given a special note.

How to participate in the translation?

Fork this repository, translate a small piece of content (for example, a heading), then open a PR against this repository and keep it open; later translation work can be pushed directly to that PR. Ideally, name the PR after the chapter number and section title.

Translation resources: the original book is ./book.pdf in this repository. The original images have been extracted to ./images. The book's charts could not be extracted, so please take screenshots yourself.

Notes:

  • Before starting, check the open PRs so that multiple people don't translate the same content
  • Preferably one PR per section, which makes review easier
  • The content is organized with reST and Sphinx; if you don't know reST, you can use Markdown or plain text, and I will fix the formatting when merging

You don't need to worry about the book's table of contents or formatting; put your energy into the translation itself, and I will take care of the rest.

How to build the book

  1. Install the dependencies in requirements.txt
  2. Run make html

If the content contains errors, they will be shown in red during the build.

Issues to be aware of (!)

  1. Special characters may break the PDF or ePub build (LaTeX is hard to please); for example, this commit broke the build.

python-parallel-programming-cookbook-cn's People

Contributors

alicia1529, bestporter, catfishwen, cfnjucs, chanchancl, fowind, haozihuang, hjlarry, iphysresearch, kemingy, laixintao, logicgogh, microndgt, narcissus7, pengwk, reveriel, shenrq, txfly, wh-2099, wking-tao, xiangmingzhe0928, zephyrzoom


python-parallel-programming-cookbook-cn's Issues

ch2.13: a strange sentence I don't know how to translate

We have begun to see a better result in the multithreading case. In particular, we've noted how the threaded execution is half time-consuming if we compare it with the non_threaded one. Let's remember that in real life, we would not use threads as a benchmark. Typically, we would put the threads in a queue, pull them out, and perform other tasks. Having multiple threads that execute the same function although useful in certain cases, is not a common use case for a concurrent program, unless it divides the data in the input.

I can't make sense of the bold-italic sentence. It appears where the results of the third test are explained.

Erratum for the code in 4.6

In Chapter 4, section 6, "Dealing with Asyncio and Futures":

# -*- coding: utf-8 -*-

"""
Asyncio.Futures -  Chapter 4 Asynchronous Programming
"""
import asyncio
import sys

@asyncio.coroutine
def first_coroutine(future, N):
    """Sum of the first N integers"""
    count = 0
    for i in range(1, N + 1):
        count = count + i
    yield from asyncio.sleep(4)
    future.set_result("first coroutine (sum of N integers) result = " + str(count))

@asyncio.coroutine
def second_coroutine(future, N):
    count = 1
    for i in range(2, N + 1):
        count *= i
    yield from asyncio.sleep(3)
    future.set_result("second coroutine (factorial) result = " + str(count))

def got_result(future):
   print(future.result())

if __name__ == "__main__":
   N1 = int(sys.argv[1])
   N2 = int(sys.argv[2])
   loop = asyncio.get_event_loop()
   future1 = asyncio.Future()
   future2 = asyncio.Future()
   tasks = [
       first_coroutine(future1, N1),
       second_coroutine(future2, N2)]
   future1.add_done_callback(got_result)
   future2.add_done_callback(got_result)
   loop.run_until_complete(asyncio.wait(tasks))
   loop.close()

The output of this code is:

$ python asy.py 1 1                                                                      
second coroutine (factorial) result = 1
first coroutine (sum of N integers) result = 1
$ python asy.py 2 2                                                                      
second coroutine (factorial) result = 2
first coroutine (sum of N integers) result = 3
$ python asy.py 3 3                                                                      
second coroutine (factorial) result = 6
first coroutine (sum of N integers) result = 6
$ python asy.py 4 4
second coroutine (factorial) result = 24
first coroutine (sum of N integers) result = 10

rather than the original's:

$ python asy.py 1 1
first coroutine (sum of N integers) result = 1
second coroutine (factorial) result = 1
$ python asy.py 2 2
first coroutine (sum of N integers) result = 3
second coroutine (factorial) result = 2
$ python asy.py 3 3
first coroutine (sum of N integers) result = 6
second coroutine (factorial) result = 6
$ python asy.py 4 4
first coroutine (sum of N integers) result = 10
second coroutine (factorial) result = 24

I checked the original text; I suspect the author swapped the two sleep durations. A possible fix:

# -*- coding: utf-8 -*-

"""
Asyncio.Futures -  Chapter 4 Asynchronous Programming
"""
import asyncio
import sys

@asyncio.coroutine
def first_coroutine(future, N):
    """Sum of the first N integers"""
    count = 0
    for i in range(1, N + 1):
        count = count + i
    yield from asyncio.sleep(3)
    future.set_result("first coroutine (sum of N integers) result = " + str(count))

@asyncio.coroutine
def second_coroutine(future, N):
    count = 1
    for i in range(2, N + 1):
        count *= i
    yield from asyncio.sleep(4)
    future.set_result("second coroutine (factorial) result = " + str(count))

def got_result(future):
   print(future.result())

if __name__ == "__main__":
   N1 = int(sys.argv[1])
   N2 = int(sys.argv[2])
   loop = asyncio.get_event_loop()
   future1 = asyncio.Future()
   future2 = asyncio.Future()
   tasks = [
       first_coroutine(future1, N1),
       second_coroutine(future2, N2)]
   future1.add_done_callback(got_result)
   future2.add_done_callback(got_result)
   loop.run_until_complete(asyncio.wait(tasks))
   loop.close()

For newer Python 3 versions, it can also be written as:

# -*- coding: utf-8 -*-

"""
Asyncio.Futures -  Chapter 4 Asynchronous Programming
"""
import asyncio
import sys


async def first_coroutine(future, N):
    """Sum of the first N integers"""
    count = 0
    for i in range(1, N + 1):
        count = count + i
    await asyncio.sleep(3)
    future.set_result("first coroutine (sum of N integers) result = " + str(count))


async def second_coroutine(future, N):
    count = 1
    for i in range(2, N + 1):
        count *= i
    await asyncio.sleep(4)
    future.set_result("second coroutine (factorial) result = " + str(count))

def got_result(future):
   print(future.result())

if __name__ == "__main__":
   N1 = int(sys.argv[1])
   N2 = int(sys.argv[2])
   loop = asyncio.get_event_loop()
   future1 = asyncio.Future()
   future2 = asyncio.Future()
   tasks = [
       first_coroutine(future1, N1),
       second_coroutine(future2, N2)]
   future1.add_done_callback(got_result)
   future2.add_done_callback(got_result)
   loop.run_until_complete(asyncio.wait(tasks))
   loop.close()

Test environment:
Python 3.7.4
Microsoft Windows Home Edition [Version 10.0.18362.592]

Typo-chapter4-03

Hi ~

Is this a typo in chapter4 03_Event_loop_management_with_Asyncio?

所有的时间都在 while 循环中捕捉,然后经过事件处理者处理

所有的时间 ("all the time") should be 所有的事件 ("all the events").

ref:typo url

ch2.10: is clearing the event right after setting it correct?

Calling clear on the event immediately after set feels wrong, and I don't understand the design. Would it still work correctly with multiple threads? Shouldn't the consumer clear the event after receiving it instead?
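For reference, the consumer-clears pattern suggested above can be sketched as follows; this is a minimal illustration, not the book's code:

```python
import threading

event = threading.Event()
results = []

def consumer():
    event.wait()               # block until the producer sets the event
    results.append("event received")
    event.clear()              # the consumer clears the event after consuming it

t = threading.Thread(target=consumer)
t.start()
event.set()                    # producer signals the waiting consumer
t.join()
print(results, event.is_set())
```

With several consumers a bare Event is still racy, since set() wakes every waiter; a Condition or Semaphore expresses per-consumer hand-off more precisely.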

chapter3.5: how to kill a process

The text says terminate kills the process immediately, but as the output below shows, p.is_alive() returns True right after p.terminate(); only after p.join() does p show as dead. After some experimenting: terminate does begin killing the process immediately, but p has not yet died by the time the main process evaluates p.is_alive(), hence the True. While reading I assumed this was related to the later join, but adding time.sleep() after terminate shows it has nothing to do with join. The code is as follows:

import multiprocessing
import time

def foo():
    print("Starting func")
    time.sleep(0.1)

if __name__ == "__main__":
    p  = multiprocessing.Process(target=foo)
    print("Process before excution:", p, p.is_alive())
    p.start()
    print("Process running :", p.is_alive())
    p.terminate()
    time.sleep(0.1) # without this line, the next line may print True
    print("Process terminated:" , p.is_alive()) # process is alive
    p.join()
    print("Process joined:", p.is_alive()) # process is dead

ch3.7: different output from the queue example

Using a queue to exchange objects, "the queue is empty" is printed first, and then the Consumer exits.

Python version: 3.6.7
Output:

Process Producer : item 160 appended to queue Producer-13
the queue is empty
The size of queue is 1
Process Producer : item 8 appended to queue Producer-13
The size of queue is 2
Process Producer : item 219 appended to queue Producer-13
The size of queue is 3
Process Producer : item 211 appended to queue Producer-13
The size of queue is 4
Process Producer : item 25 appended to queue Producer-13
The size of queue is 5

The difference between SIMD and MIMD

Why does it say that SIMD also has multiple processors?

A SIMD computer consists of n identical processors, each with its own local memory, where
it is possible to store data. All processors work under the control of a single instruction stream; in addition to this, there are n data streams, one for each processor. The processors work simultaneously on each step and execute the same instruction, but on different data elements. This is an example of data-level parallelism. The SIMD architectures are much more versatile than MISD architectures. Numerous problems covering a wide range of applications can be solved by parallel algorithms on SIMD computers. Another interesting feature is that the algorithms for these computers are relatively easy to design, analyze, and implement. The limit is that only the problems that can be divided into a number of subproblems (which are
all identical, each of which will then be solved contemporaneously, through the same set of instructions) can be addressed with the SIMD computer. With the supercomputer developed according to this paradigm, we must mention the Connection Machine (1985 Thinking Machine) and MPP (NASA - 1983). As we will see in Chapter 6, GPU Programming with Python, the advent of modern graphics processor unit (GPU), built with many SIMD embedded units has lead to a more widespread use of this computational paradigm.

6.4 Amdahl's law: the formula is wrong

This section renders the name as 阿姆德尔定律, while Wikipedia uses 阿姆达尔定律 (Amdahl's law). The formula given in the translation is oversimplified, and the text reads: for example, if 90% of a program's code runs in parallel but 10% remains serial, the maximum speedup attainable even with infinitely many processors is 9.

That is miscalculated: it should be 10, not 9. Applying Amdahl's law, S = 1 / (1/10 + (9/10)/p); as p approaches infinity, S = 10.
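The corrected limit is easy to check numerically; a minimal sketch of the standard formula S = 1 / ((1 - P) + P/N):

```python
def amdahl_speedup(parallel_fraction, n_processors):
    """Amdahl's law: S = 1 / ((1 - P) + P / N)."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_processors)

# With 90% parallel code the speedup approaches 10, not 9, as N grows:
print(round(amdahl_speedup(0.9, 10), 2))      # 5.26 with 10 processors
print(round(amdahl_speedup(0.9, 10**9), 2))   # effectively the infinite-N limit, 10.0
```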

Chapter 3, section 7: inter-process communication with pipes

The functions create_items and multiply_items have no comments; it took me a long time to understand them.
The first puzzling part is pipe_1 = multiprocessing.Pipe(True); it is not obvious what Pipe actually does. Reading the source shows it just creates two queues and connects them: send is the Queue's put and recv is the Queue's get, so a duplex channel is built out of two queues.

# this source is located at multiprocessing/dummy/connection.py
def Pipe(duplex=True):
    a, b = Queue(), Queue()
    return Connection(a, b), Connection(b, a)

class Connection(object):

    def __init__(self, _in, _out):
        self._out = _out
        self._in = _in
        self.send = self.send_bytes = _out.put
        self.recv = self.recv_bytes = _in.get

    def poll(self, timeout=0.0):
        if self._in.qsize() > 0:
            return True
        if timeout <= 0.0:
            return False
        with self._in.not_empty:
            self._in.not_empty.wait(timeout)
        return self._in.qsize() > 0

    def close(self):
        pass

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, exc_tb):
        self.close()

I rewrote the example from the text myself: p1_a and p1_b form one communicating pair, and p2_a and p2_b form another:

import multiprocessing

def create_items(pipe):
    p1_a, p1_b = pipe
    for item in range(10):
        p1_a.send(item)
    p1_a.close()

def multiply_items(pipe_1, pipe_2):
    p1_a, p1_b = pipe_1
    p1_a.close()
    p2_a, p2_b = pipe_2
    try:
        while True:
            item = p1_b.recv()
            p2_a.send(item * item)
    except EOFError:
        p2_a.close()

if __name__== '__main__':
    # the first process sends numbers into its pipe
    pipe_1 = multiprocessing.Pipe(True)
    process_pipe_1 = multiprocessing.Process(target=create_items, args=(pipe_1,))
    process_pipe_1.start()
    # the second process receives the numbers and squares them
    pipe_2 = multiprocessing.Pipe(True)
    process_pipe_2 = multiprocessing.Process(target=multiply_items, args=(pipe_1, pipe_2,))
    process_pipe_2.start()
    pipe_1[0].close()
    pipe_2[0].close()
    try:
        while True:
            print(pipe_2[1].recv())
    except EOFError:
        print("End")

A discussion of the last part of chapter 2, section 3

First of all, thanks for your work!
About the join method: my understanding is that join blocks the main thread that started the child thread, and the main thread can only finish after the child thread has finished, rather than continuing right away. So the program's output is not necessarily 01234. Is this understanding correct?
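The blocking semantics of join can be seen in a minimal sketch (not the book's example): joining each thread immediately after starting it serializes execution, whereas joining only after all starts leaves the interleaving up to the scheduler.

```python
import threading

results = []

def worker(i):
    results.append(i)

for i in range(5):
    t = threading.Thread(target=worker, args=(i,))
    t.start()
    t.join()   # the main thread blocks here until this worker finishes

print(results)   # deterministic: [0, 1, 2, 3, 4]
```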

A question about the program in chapter 4, section 2

In section 2, "How to do it", the book gives this program:

import concurrent.futures
import time
number_list = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

def evaluate_item(x):
    # compute the sum; this only exists to burn time
    result_item = count(x)
    # print the input and the result
    print ("item " + str(x) + " result " + str(result_item))

def count(number):
    for i in range(0, 10000000):
        i=i+1
    return i * number

if __name__ == "__main__":
    # sequential execution
    start_time = time.clock()
    for item in number_list:
        evaluate_item(item)
    print("Sequential execution in " + str(time.clock() - start_time), "seconds")
    # thread pool execution
    start_time_1 = time.clock()
    with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
        for item in number_list:
            executor.submit(evaluate_item,  item)
    print ("Thread pool execution in " + str(time.clock() - start_time_1), "seconds")
    # process pool execution
    start_time_2 = time.clock()
    with concurrent.futures.ProcessPoolExecutor(max_workers=5) as executor:
        for item in number_list:
            executor.submit(evaluate_item,  item)
    print ("Process pool execution in " + str(time.clock() - start_time_2), "seconds")

First, on UNIX systems time.clock() returns "processor time" as a float in seconds. For the sequential and multithreaded runs this should be accurate, since both execute in the current process and the measured time is that process's execution time. But for the multiprocess run, the current process only does scheduling: the execution time is spread across other processes, so I believe the measurement is wrong. It also defies common sense that sequential execution takes 6 seconds while the multiprocess run takes 0.03 seconds, a nearly 200x speedup.

Second, executor.submit schedules a task and returns a Future, but does not necessarily execute it immediately (it depends on whether the pool has an available thread or process). So I believe the test program in the book is flawed. Following the example in the module documentation, it should look like this:

import concurrent.futures
import time
number_list = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

def evaluate_item(x):
    # compute the sum; this only exists to burn time
    result_item = count(x)
    # return the result
    return result_item

def count(number):
    for i in range(0, 10000000):
        i=i+1
    return i * number

if __name__ == "__main__":
    # sequential execution
    start_time = time.time()
    for item in number_list:
        print(evaluate_item(item))
    print("Sequential execution in " + str(time.time() - start_time), "seconds")
    # thread pool execution
    start_time_1 = time.time()
    with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
        futures = [executor.submit(evaluate_item, item) for item in number_list]
        for future in concurrent.futures.as_completed(futures):
            print(future.result())
    print ("Thread pool execution in " + str(time.time() - start_time_1), "seconds")
    # process pool execution
    start_time_2 = time.time()
    with concurrent.futures.ProcessPoolExecutor(max_workers=5) as executor:
        futures = [executor.submit(evaluate_item, item) for item in number_list]
        for future in concurrent.futures.as_completed(futures):
            print(future.result())
    print ("Process pool execution in " + str(time.time() - start_time_2), "seconds")

Using the as_completed function guarantees that we wait for every Future to finish; only then is the measured time accurate. On my machine (Intel Core i5-5257U), sequential and thread-pool execution both take about 6.3 seconds, and the process pool about 3.7 seconds.

P.S.

  1. The concurrent.futures module documentation
  2. A concurrent.futures translation

A semaphore deadlock passage I don't know how to translate

https://github.com/laixintao/python-parallel-programming-cookbook-cn/blob/master/chapter2/08_Thread_synchronization_with_semaphores.rst

There's more...

A particular use of semaphores is the mutex. A mutex is nothing but a semaphore with an internal variable initialized to the value 1, which allows the realization of mutual exclusion in access to data and resources.

Semaphores are still commonly used in programming languages that are multithreaded; however, using them you can run into situations of deadlock. For example, there is a deadlock situation created when the thread t1 executes a wait on the semaphore s1, while the t2 thread executes a wait on the semaphore s1, and then t1, and then executes a wait on s2 and t2, and then executes a wait on s1.
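The garbled sentence most plausibly describes a lock-ordering deadlock: t1 holds s1 and waits on s2, while t2 holds s2 and waits on s1. A sketch of that situation (this is an interpretation, not the book's wording; timeouts are added only so the example terminates):

```python
import threading

s1 = threading.Semaphore(1)
s2 = threading.Semaphore(1)
barrier = threading.Barrier(2)   # makes sure each thread holds its first semaphore
outcome = []

def worker_a():
    s1.acquire()
    barrier.wait()                        # both threads now hold one semaphore
    if not s2.acquire(timeout=0.5):       # would block forever without the timeout
        outcome.append("a stuck on s2")

def worker_b():
    s2.acquire()
    barrier.wait()
    if not s1.acquire(timeout=0.5):
        outcome.append("b stuck on s1")

ta = threading.Thread(target=worker_a)
tb = threading.Thread(target=worker_b)
ta.start(); tb.start()
ta.join(); tb.join()
print(sorted(outcome))   # ['a stuck on s2', 'b stuck on s1']
```

Acquiring the semaphores in the same global order in every thread removes the deadlock.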

Chapter 2, section 12

「如果可以使用这些原语的话,应该优先考虑使用这些,而不是使用queue(队列)模块。队列操作起来更容易,也使多线程编程更安全」, i.e. "If these primitives are available, you should prefer them over the queue module. Queues are easier to work with and also make multithreaded programming safer."
Isn't the logic of this sentence a bit off?
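For comparison, the queue-based style under discussion looks like this; a minimal sketch, not the book's code:

```python
import queue
import threading

q = queue.Queue()
received = []

def producer():
    for i in range(3):
        q.put(i)                   # thread-safe; no explicit lock needed

def consumer():
    for _ in range(3):
        received.append(q.get())   # blocks until an item is available

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
print(received)   # [0, 1, 2]
```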

To install mpi4py, please install openmpi first

I saw your comment saying that you failed to install mpi4py. You may not have MPI on your Mac, so install openmpi first and then try installing mpi4py:

brew install openmpi

pip install mpi4py

Overall progress

Chapter 1: Getting Started with Parallel Computing and Python 100%

  • 1. Introduction
  • 2. The memory architecture of parallel computing
  • 3. Memory management
  • 4. Parallel programming models
  • 5. How to design a parallel program
  • 6. How to evaluate the performance of a parallel program
  • 7. Introducing Python
  • 8. Python in a parallel world
  • 9. Introducing threads and processes
  • 10. Start working with processes in Python
  • 11. Start working with threads in Python

Chapter 2: Thread-based Parallelism 100%

  • 1. Introduction
  • 2. Using Python's threading module
  • 3. How to define a thread
  • 4. How to determine the current thread
  • 5. How to use a thread in a subclass
  • 6. Thread synchronization with Lock and RLock
  • 7. Thread synchronization with RLock
  • 8. Thread synchronization with semaphores
  • 9. Thread synchronization with a condition
  • 10. Thread synchronization with an event
  • 11. Using the with statement
  • 12. Thread communication using a queue
  • 13. Evaluating the performance of multithreaded applications

Chapter 3: Process-based Parallelism 100%

  • 1. Introduction
  • 2. How to spawn a process
  • 3. How to name a process
  • 4. How to run a process in the background
  • 5. How to kill a process
  • 6. How to use a process in a subclass
  • 7. How to exchange objects between processes
  • 8. How to synchronize processes
  • 9. How to manage state between processes
  • 10. How to use a process pool
  • 11. Using Python's mpi4py module
  • 12. Point-to-point communication
  • 13. The deadlock problem
  • 14. Communication using broadcast
  • 15. Communication using scatter
  • 16. Communication using gather
  • 17. Communication using Alltoall
  • 18. Reduction operations
  • 19. How to optimize communication

Chapter 4: Asynchronous Programming 100%

  • 1. Introduction
  • 2. Using Python's concurrent.futures module
  • 3. Event loop management with Asyncio
  • 4. Handling coroutines with Asyncio
  • 5. Task manipulation with Asyncio
  • 6. Dealing with Asyncio and Futures

Chapter 5: Distributed Python

  • 1. Introduction
  • 2. Using Celery to distribute tasks
  • 3. How to create a task with Celery
  • 4. Scientific computing with SCOOP
  • 5. Handling map functions with SCOOP
  • 6. Remote method invocation with Pyro4
  • 7. Chaining objects with Pyro4
  • 8. Developing a client-server application with Pyro4
  • 9. Communicating sequential processes with PyCSP
  • 10. Using MapReduce with Disco
  • 11. Remote procedure calls with RPyC

Chapter 6: GPU Programming with Python

  • 1. Introduction
  • 2. Using the PyCUDA module
  • 3. How to build a PyCUDA application
  • 4. Understanding the PyCUDA memory model
  • 5. Kernel invocations with GPUArray
  • 6. Evaluating element-wise expressions with PyCUDA
  • 7. MapReduce operations with PyCUDA
  • 8. GPU programming with NumbaPro
  • 9. Using GPU-accelerated libraries
  • 10. Using the PyOpenCL module
  • 11. How to build a PyOpenCL application
  • 12. Evaluating element-wise expressions with PyOpenCL
  • 13. Testing your GPU application with PyOpenCL

What does task_done() do?

P109

A queue has the JoinableQueue subclass. It has the following two additional methods:
task_done(): This indicates that a task is complete, for example, after the get() method is used to fetch items from the queue. So, it must be used only by queue consumers.

This description matches the official documentation, and I still don't really understand it.

Really nice! What is the frontend built with?

Really nice! What is the frontend built with?
I'm reading your parallel programming articles. I also write NLP-related articles and would like to publish them on a personal site, but I found the formula rendering there unattractive, hence the question.

BTW, does Shopee currently have NLP positions open for referral?

ch

abs(a) # sqrt(a.real**2 + a.imag**2)
5.0
This 5.0 doesn't look right, does it?
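The comment describes the modulus of a complex number; whether 5.0 is right depends on the value of a, which the issue does not show. With a hypothetical a = 3 + 4j the result is indeed 5.0:

```python
import math

a = 3 + 4j   # hypothetical value; the issue does not show what a was
print(abs(a))                                 # 5.0
print(math.sqrt(a.real ** 2 + a.imag ** 2))   # 5.0, the same computation spelled out
```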

Question: Chapter 2, the race condition caused by threads without a lock

What happened:

I ran the first example from Chapter 2's "Thread synchronization with lock and RLock"

and got a result different from the book's.

The expectation is that if two threads operate on shared memory without a lock, a race condition occurs.

Code:

# -*- coding:utf-8 -*-

# @Time    : 2018/12/15 4:44 PM
# @Author  : jerry

import threading

shared_resource_with_lock = 0
shared_resource_with_no_lock = 0
COUNT = 100000
shared_resource_lock = threading.Lock()


#### NO LOCK MANAGEMENT
def increment_without_lock():
    global shared_resource_with_no_lock
    for i in range(COUNT):
        shared_resource_with_no_lock += 1


def decrement_without_lock():
    global shared_resource_with_no_lock
    for i in range(COUNT):
        shared_resource_with_no_lock -= 1


if __name__ == '__main__':
    t3 = threading.Thread(target=increment_without_lock)
    t4 = threading.Thread(target=decrement_without_lock)

    t3.start()
    t4.start()

    t3.join()
    t4.join()

    print("the value of shared variable with race condition is %s" % shared_resource_with_no_lock)

Result:

On my machine, with COUNT set to 100000, several runs all produce the same result:
the value of shared variable with race condition is 0.

Only after increasing COUNT to 1000000 does the expected result appear:
the value of shared variable with race condition is 1382.

My take:

 t3.start() #  t3 starts running
 t4.start() #  t4 starts running

 t3.join() # make sure t3 has finished
 t4.join() # make sure t4 has finished

t3 and t4 do not start at the same moment.
When COUNT is small, t3 has already finished before t4.start() and has incremented the global shared_resource_with_no_lock up to 100000.

At that point t4.start() runs and decrements the global shared_resource_with_no_lock back down to 0.

The final output is then: the value of shared variable with race condition is 0.
No race condition appears.

Only when COUNT is large enough that t3 cannot finish in the short window before t4.start() does the expected race condition occur, making the final result nonzero.

So it seems this depends on how fast the machine is~

Is my understanding correct? Please help~

ch 1.7: the class definition at the end

class Complex:
...     def __init__(self, realpart, imagpart):
...         self.r = realpart
...         self.i = imagpart

The def here needs to be indented (and Markdown also swallowed the double underscores in __init__).
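With the indentation restored (the underscores in __init__ were eaten by Markdown), the snippet runs as expected; a minimal check:

```python
class Complex:
    def __init__(self, realpart, imagpart):
        self.r = realpart
        self.i = imagpart

c = Complex(3.0, -4.5)
print(c.r, c.i)   # 3.0 -4.5
```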
