===== git log =====
commit f4128fe44c2c935068c536219e8f0d52d7b6c1dd
Author: Shwetha Acharya <sacharya@redhat.com>
Date:   Tue Aug 9 12:41:45 2022 +0530

    Add GlusterFS 9.6 release notes (#3694)
    
    Updates: #3693
    Signed-off-by: Shwetha K Acharya <sacharya@redhat.com>

commit a3652d02159ddc248fa8f2596ad08b8602dc7fb5
Author: Xavi Hernandez <xhernandez@redhat.com>
Date:   Mon Jul 4 09:12:28 2022 +0200

    locks: fix race on client disconnect to avoid stale locks (#3245)
    
    * locks: fix race on client disconnect to avoid stale locks
    
    The following sequence of actions leads to a stale posix lock:
    
    1. Client C1 sends write lock request L1. It's granted.
    
    2. Client C2 sends write lock request L2.
    
    3. L2 starts being processed by the brick's locks xlator, but nothing
       is created yet (an fd reference is held for this request).
    
    4. C2 disconnects.
    
    5. Brick's server xlator flushes all open fd's. This causes the removal
       of all locks from C2 (none in this case).
    
    6. Brick's server releases its fd reference (in normal circumstances
       this should be the last one, but not in this case).
    
    7. Locks xlator continues processing L2 and adds it to the blocked
       list.
    
    8. Eventually C1 releases L1. L2 is granted.
    
    9. At this point the fd reference of the L2 request is released. If
       it's the last one, pl_release() is called, which removes all locks
       on the fd. Otherwise L2 remains active indefinitely and blocks all
       other requests.
    
    This patch makes sure that the client is alive before adding a new lock.
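    The liveness check described above can be sketched in simplified C.
    This is NOT the actual GlusterFS locks-xlator code; the struct and
    function names (client_t, lock_t, add_blocked_lock) are hypothetical
    stand-ins that model only the essential idea: refuse to queue a
    blocked lock when the requesting client has already disconnected.

    ```c
    #include <assert.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical, simplified model of the fix. */

    typedef struct {
        bool connected;   /* cleared when the client disconnects (step 4) */
    } client_t;

    typedef struct {
        client_t *owner;
        bool blocked;
    } lock_t;

    /* Returns true if the lock was queued, false if the client was gone.
     * The fix corresponds to the guard at the top: a lock from a
     * disconnected client is never added to the blocked list, so it can
     * never be granted (step 8) after fd cleanup (step 5) already ran. */
    static bool add_blocked_lock(client_t *c, lock_t *l) {
        if (!c->connected)
            return false;
        l->owner = c;
        l->blocked = true;
        return true;
    }

    int main(void) {
        client_t c1 = { .connected = true };
        client_t c2 = { .connected = true };
        lock_t l2 = { 0 };

        c2.connected = false;                /* step 4: C2 disconnects  */
        assert(!add_blocked_lock(&c2, &l2)); /* step 7 now fails safely */
        assert(add_blocked_lock(&c1, &l2));  /* live clients still work */

        puts("ok");
        return 0;
    }
    ```

    In the real patch the check happens inside the brick's locks xlator
    under the appropriate lock-table mutex, so the disconnect in step 4
    and the queuing in step 7 cannot interleave unsafely.
    
    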
    
    Fixes: #3182
    Change-Id: I8f2afa310388fbee159a60478ac72e371cd030e1
    Signed-off-by: Xavi Hernandez <xhernandez@redhat.com>

More commit messages for this ChangeLog can be found at
https://forge.gluster.org/glusterfs-core/glusterfs/commits/v9.6
