Learn and play with Concurrent and Parallel programming using Java.
Implemented based on LinkedIn Learning courses.
Covered topics:
- Thread vs Process
- Execution Scheduling
- Thread Lifecycle
- Mutual Exclusion: Data Race problem
- Nested Lock
- Non-Blocking Lock
- Read-Write Lock
- Multiple Locks: Deadlock problem
- Multiple Locks: Livelock problem
- Exception Handling: Abandoned Lock problem
- Load Balancing: Starved Thread problem
- On a PC running Windows, open Task Manager and switch to the Performance tab.
On the CPU pane you can see the overall CPU utilization, which is around 10% in the idle state.
- Start the app and watch how the CPU utilization increases: it can reach up to 100%.
- Get the Process ID value from the console output, e.g. 7428.
Now in Task Manager use the Open Resource Monitor link to see more details. In the Resource Monitor window, on the CPU tab in the Processes pane, you can find our Java app by that Process ID. There you can see the number of Threads and the CPU utilization in percent. The number of Threads is usually greater than the number created by our program: the extra background Threads serve utility functions such as garbage collection and runtime compilation.
- Stop the app and check the overall CPU utilization in Task Manager.
It should decrease back to the regular state of about 10%.
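The CPU spike described above can be reproduced with a minimal sketch like the following (the class name `CpuLoadDemo` is hypothetical; it simply spins one busy thread per core and prints the Process ID using the Java 9+ `ProcessHandle` API):

```java
public class CpuLoadDemo {
    public static void main(String[] args) throws InterruptedException {
        // Print the Process ID so the app can be located in Resource Monitor (Java 9+).
        System.out.println("Process ID: " + ProcessHandle.current().pid());

        int cores = Runtime.getRuntime().availableProcessors();
        Thread[] workers = new Thread[cores];
        for (int i = 0; i < cores; i++) {
            workers[i] = new Thread(() -> {
                // Busy-loop to drive one core towards 100% utilization.
                while (!Thread.currentThread().isInterrupted()) { }
            });
            workers[i].start();
        }

        Thread.sleep(2000);                // let Task Manager show the spike
        for (Thread w : workers) w.interrupt();
        for (Thread w : workers) w.join();
        System.out.println("Done");
    }
}
```

Increase the sleep interval if you want more time to inspect the process in Resource Monitor.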
- Start the app.
- Check the console output.
- There we have 11 rounds of execution.
Each time we start 2 Threads in the same order: Baron first, Olivia next. Because Baron goes first, one might expect him to win each time by chopping more vegetables. However, the actual result is unpredictable and depends on how the system schedules the Threads. For example, in my run Baron chopped more vegetables 7 times, and Olivia won 4 times.
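The effect above can be reproduced with a small sketch (class and method names here are hypothetical, not taken from the repo): two threads are started in a fixed order, yet the OS scheduler decides who actually gets more CPU time, so the winner varies between runs.

```java
public class ChoppingRace {
    // Busy-loop for a fixed time slice; the count depends on scheduling.
    static int chopFor(long millis) {
        long deadline = System.currentTimeMillis() + millis;
        int chopped = 0;
        while (System.currentTimeMillis() < deadline) chopped++;
        return chopped;
    }

    public static void main(String[] args) throws InterruptedException {
        final int[] results = new int[2];
        Thread baron  = new Thread(() -> results[0] = chopFor(50));
        Thread olivia = new Thread(() -> results[1] = chopFor(50));
        baron.start();   // Baron is started first...
        olivia.start();  // ...but that gives no guarantee he runs more.
        baron.join();
        olivia.join();
        System.out.println("Baron: " + results[0] + ", Olivia: " + results[1]);
    }
}
```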
- Start the app.
- Check the console output.
- There we have all possible states:
  - NEW: a thread that has not yet started is in this state.
  - RUNNABLE: a thread executing in the Java virtual machine is in this state.
  - BLOCKED: a thread that is blocked waiting for a monitor lock is in this state.
  - WAITING: a thread that is waiting indefinitely for another thread to perform a particular action is in this state.
  - TIMED_WAITING: a thread that is waiting for another thread to perform an action for up to a specified waiting time is in this state.
  - TERMINATED: a thread that has exited is in this state.
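Several of these states can be observed directly via `Thread.getState()`. A minimal sketch (the class name is hypothetical; `Thread.onSpinWait()` requires Java 9+):

```java
public class ThreadStates {
    public static void main(String[] args) throws InterruptedException {
        Object lock = new Object();
        Thread t = new Thread(() -> {
            synchronized (lock) { }      // will block while main holds the monitor
        });

        System.out.println(t.getState());    // NEW: created but not started

        synchronized (lock) {
            t.start();
            // Wait until the thread hits the monitor we are holding.
            while (t.getState() != Thread.State.BLOCKED) Thread.onSpinWait();
            System.out.println(t.getState()); // BLOCKED: waiting for the monitor lock
        }

        t.join();
        System.out.println(t.getState());    // TERMINATED: run() has completed
    }
}
```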
- Problem: there we have 2 identical Threads that each increase a counter 10_000_000 times. However, since these Threads use the same shared data class without synchronization, updates are lost. As a result, we get some unexpected value like 11_149_076 instead of 20_000_000.
- Solution: We have Thread safe shared data classes implemented based on:
- ReentrantLock
- Synchronized method
- Synchronized block
- Atomic variable
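As a sketch of the last variant, the atomic-variable approach can look like this (the class name `SafeCounter` is hypothetical): `AtomicLong.incrementAndGet()` performs the read-modify-write as one indivisible operation, so no increments are lost.

```java
import java.util.concurrent.atomic.AtomicLong;

public class SafeCounter {
    private final AtomicLong count = new AtomicLong();

    void increment() { count.incrementAndGet(); } // atomic read-modify-write
    long get()       { return count.get(); }

    public static void main(String[] args) throws InterruptedException {
        SafeCounter counter = new SafeCounter();
        Runnable task = () -> {
            for (int i = 0; i < 10_000_000; i++) counter.increment();
        };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join();  t2.join();
        // With a plain long field this would print a smaller, unpredictable value.
        System.out.println(counter.get()); // 20000000
    }
}
```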
- Nested Lock Problem: using locks, we can get into a situation where a Thread blocks itself by acquiring the same lock twice (without releasing it after the first acquisition). In Java we have reentrant locking by default, which allows a specific Thread to acquire the same lock several times. To release the lock completely, the Thread should unlock it the same number of times it was acquired. The ReentrantLock class lets you see the number of holds made by the Thread.
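Reentrancy and `ReentrantLock.getHoldCount()` can be demonstrated with a short sketch (the class and method names are hypothetical):

```java
import java.util.concurrent.locks.ReentrantLock;

public class NestedLockDemo {
    static final ReentrantLock lock = new ReentrantLock();

    static void outer() {
        lock.lock();
        try {
            System.out.println("holds after outer lock: " + lock.getHoldCount()); // 1
            inner();
        } finally {
            lock.unlock(); // must unlock as many times as it was locked
        }
    }

    static void inner() {
        lock.lock(); // the same thread re-acquires without deadlocking itself
        try {
            System.out.println("holds after inner lock: " + lock.getHoldCount()); // 2
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) {
        outer();
        System.out.println("holds after return: " + lock.getHoldCount()); // 0
    }
}
```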
- Non-Blocking Lock: using locks solves the data race problem, but requires you to wait until the lock is released. In case you don't need an immediate result and have some other job to do, you can try the lock instead. If the lock is free you will take it; otherwise you will skip the locked part and do alternative work, or just the next job.
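In Java this is `ReentrantLock.tryLock()`, which returns immediately with `true` or `false` instead of blocking. A minimal sketch (the class name is hypothetical):

```java
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    static final ReentrantLock lock = new ReentrantLock();

    public static void main(String[] args) {
        if (lock.tryLock()) {          // non-blocking: returns immediately
            try {
                System.out.println("Got the lock, doing the protected work");
            } finally {
                lock.unlock();
            }
        } else {
            // The lock is busy: do alternative work instead of waiting.
            System.out.println("Lock busy, doing something else");
        }
    }
}
```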
- Read-Write Lock: let's imagine a situation where only one Thread changes a variable while many Threads read it. With the usual approach, we lock both read and write access. That works fine in terms of synchronization, but makes the program slow. Taking into account that most of the time the data is needed only for reading, we can soften the lock: while the data is not locked for writing, it is accessible for reading without blocking, so many Threads can read it at the same time.
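Java provides this through `ReentrantReadWriteLock`: the read lock can be held by many threads at once, while the write lock is exclusive. A minimal sketch (the class `SharedConfig` is hypothetical):

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class SharedConfig {
    private final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();
    private int value;

    int read() {
        rw.readLock().lock();      // many readers may hold this at the same time
        try {
            return value;
        } finally {
            rw.readLock().unlock();
        }
    }

    void write(int v) {
        rw.writeLock().lock();     // exclusive: blocks readers and other writers
        try {
            value = v;
        } finally {
            rw.writeLock().unlock();
        }
    }
}
```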
- Problem: when several Threads use several shared Locks, a situation may occur where the Threads block each other and get stuck with no progress.
For example, we have Thread1 and Thread2, which both use Lock1 and Lock2. A deadlock happens with the following steps:
- Thread1 acquires Lock1.
- Thread2 acquires Lock2.
- Thread1 tries to acquire Lock2 in a blocking manner.
- Thread1 becomes blocked, as Lock2 is already taken by Thread2.
- Thread2 tries to acquire Lock1 in a blocking manner.
- Thread2 becomes blocked, as Lock1 is already taken by Thread1.
As a result, Thread1 and Thread2 get stuck in the blocked state, waiting for each other.
- Solution: we can use lock prioritizing, i.e. a fixed global lock ordering. With this, both Thread1 and Thread2 must first try to acquire Lock1 and only then Lock2.
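The lock-ordering solution can be sketched as follows (the class and method names are hypothetical): both threads acquire Lock1 before Lock2, so the circular wait from the steps above can never form.

```java
import java.util.concurrent.locks.ReentrantLock;

public class LockOrdering {
    static final ReentrantLock lock1 = new ReentrantLock();
    static final ReentrantLock lock2 = new ReentrantLock();

    // Both threads acquire in the same global order: lock1 first, then lock2.
    static void doWork(String name) {
        lock1.lock();
        try {
            lock2.lock();
            try {
                System.out.println(name + " holds both locks");
            } finally {
                lock2.unlock();
            }
        } finally {
            lock1.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> doWork("Thread1"));
        Thread t2 = new Thread(() -> doWork("Thread2"));
        t1.start(); t2.start();
        t1.join();  t2.join();
        System.out.println("No deadlock");
    }
}
```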
- Problem: when several Threads use several shared Locks, a situation may occur where the Threads do some work but make no actual progress.
For example, we have Thread1 and Thread2, which both use Lock1 and Lock2. A livelock happens with the following steps:
1. Thread1 acquires Lock1.
2. Thread2 acquires Lock2.
Then Thread1 does the following:
3. Thread1 tries to acquire Lock2 in a non-blocking manner.
4. Thread1 acquires no lock, as Lock2 is already taken by Thread2.
5. Thread1 is not blocked, so it can try to do something else.
6. Thread1 releases Lock1.
7. Thread1 does some other job.
8. Thread1 returns to do the same: repeats steps 1 and 3-7.
At the same time, Thread2 does similar things:
9. Thread2 tries to acquire Lock1 in a non-blocking manner.
10. Thread2 acquires no lock, as Lock1 is already taken by Thread1.
11. Thread2 is not blocked, so it can try to do something else.
12. Thread2 releases Lock2.
13. Thread2 does some other job.
14. Thread2 returns to do the same: repeats steps 2 and 9-13.
As a result, it seems like Thread1 and Thread2 are doing something, but no useful work is accomplished. Their activity is nothing more than checking the Locks' availability.
- Solution: we can use access randomization. With this, Thread1 and Thread2 will try to acquire the Locks at different times, so the number of unsuccessful tries decreases.
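One way to sketch the randomization idea (class and method names are hypothetical): each thread blocks on its first lock, tries the second lock non-blockingly, and on failure releases everything and sleeps for a random interval, so the two threads stop retrying in lock-step.

```java
import java.util.Random;
import java.util.concurrent.locks.ReentrantLock;

public class LivelockAvoidance {
    static final ReentrantLock lock1 = new ReentrantLock();
    static final ReentrantLock lock2 = new ReentrantLock();

    static void workWithBoth(String name, ReentrantLock first, ReentrantLock second)
            throws InterruptedException {
        Random random = new Random();
        while (true) {
            first.lock();
            try {
                if (second.tryLock()) {   // non-blocking attempt on the second lock
                    try {
                        System.out.println(name + " did the job");
                        return;
                    } finally {
                        second.unlock();
                    }
                }
            } finally {
                first.unlock();
            }
            // Back off for a random interval before retrying.
            Thread.sleep(random.nextInt(10));
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> {
            try { workWithBoth("Thread1", lock1, lock2); }
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        Thread t2 = new Thread(() -> {
            try { workWithBoth("Thread2", lock2, lock1); }
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        t1.start(); t2.start();
        t1.join();  t2.join();
    }
}
```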
- Problem: it's possible that a Thread has an Exception thrown at runtime and is going to finish its execution. If such a Thread has managed to acquire a Lock and ends without releasing it, other Threads will never be able to acquire this Lock.
- Solution: always surround the critical section with a try-catch-finally block. In the try section we acquire the Lock and do the needed work, and in the finally block we release the Lock, so the Lock is released in any case: success or failure.
- Problem: by default, load is not spread equally across all Threads, even when they do the same job and are started one by one. This becomes a problem if, for example, a Thread processes a request made by an end User: once we have many Users, some Users will have their requests processed immediately, while others will wait a long time.
- Solution: depends on the actual problem; not in scope of this demo.