Prosecution Insights
Last updated: April 19, 2026
Application No. 18/740,944

Hybrid Model Of Fine-Grained Locking And Data Partitioning

Final Rejection §103
Filed
Jun 12, 2024
Examiner
ELLIS, MATTHEW J
Art Unit
2152
Tech Center
2100 — Computer Architecture & Software
Assignee
NetApp, Inc.
OA Round
2 (Final)
Grant Probability: 69% (Favorable)
OA Rounds: 3-4
To Grant: 3y 3m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 69% (219 granted / 318 resolved; +13.9% vs TC avg), above average
Interview Lift: +30.9% among resolved cases with an interview
Typical Timeline: 3y 3m avg prosecution, 17 currently pending
Career History: 335 total applications across all art units

Statute-Specific Performance

§101: 17.2% (-22.8% vs TC avg)
§103: 55.0% (+15.0% vs TC avg)
§102: 12.8% (-27.2% vs TC avg)
§112: 6.5% (-33.5% vs TC avg)
Based on career data from 318 resolved cases; comparison baseline is the Tech Center average estimate

Office Action

§103
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA, and is in response to communications filed on 1/22/2026 in which claims 21-40 are presented for examination.

Priority

Acknowledgment is made of applicant's parent Applications 17/717,294, 16/562,852, and 14/928,452, filed respectively on 04/11/2022, 09/06/2019, and 10/30/2015.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 21-25, 31-37, and 40 are rejected under 35 U.S.C. 103 as being unpatentable over Khan et al. US 20140115596 A1 (hereinafter referred to as "Khan") in view of Shavit et al. US 20070282838 A1 (hereinafter referred to as "Shavit").

As per claim 21, Khan teaches: A method for parallel data processing using a hybrid scheme of data partitioning and locking (Khan, [0130] – A codelet with a critical section may use a mutex lock acquisition as a data dependence and may release the lock when complete. [0138] – Partition. Number of units of work that could be done in parallel.
[0148] – Subdivided into memory segments), the method comprising: creating a plurality of domains in a storage system, wherein a domain of the plurality of domains comprises a grouping of processes executable by the storage system (Khan, [0013] – Methods and/or systems for representation, manipulation and/or execution of codeletsets. Codeletsets are groups of codelets that can be treated as a unit with respect to dependency analysis or execution, wherein codeletsets are interpreted as a grouping of processes. [0074] – Computational domain: a set of processing elements that are grouped by locality or function. These domains can hierarchically include other computational domains); defining a hierarchy of subdomains in each of the plurality of domains (Khan, [0074] – These domains can hierarchically include other computational domains), wherein each subdomain operates as an execution queue for processes therein (Khan, [0129] – Enabled codelets can also be migrated in this way. If a level finds that its queues are getting too full or that it is consuming too much power, it can migrate enabled codelets in the same way as described above) to operate on a different corresponding data partition of a plurality of data partitions (Khan, [0119] – TVM may provide a framework to divide work into small, non-preemptive blocks called codelets and schedule them efficiently at runtime. [0138] – An application can query the hardware or runtime system for the number of nodes available to the application, number of execution cores in a chip and memory availability, to help decide how to partition the problem.
For example, the system can divide iterations into codelets); executing a first process from a subdomain of the subdomains on first data in a first partition of the plurality of data partitions associated with the subdomain (Khan, [0126] – The load balancer may analyze the work queue and event pool and may determine if work should be done locally (i.e., in this computational domain) or migrated elsewhere, wherein the different processes in the event pool are interpreted as the processes and the determination that the local domain should be used is interpreted as a partition associated with a subdomain. Paragraph [0129] – Domains and child domains are interpreted as domains and subdomains respectively). Although Khan teaches dividing tasks into codelets as well as utilizing domains and child domains for specific division, Khan does not explicitly go into detail regarding partitioning. Shavit, however, teaches: wherein the first partition relies on data partitioning to protect the first data (Shavit, [0022] – Asymmetric partitioning may be implemented for a given data object 120: e.g., one partition 130 of the given data object may be larger than another partition 130, so that the amount of data protected by a given lock of a set of locks 135 may differ from the amount of data protected by another lock); and executing the first process on second data in a second partition of the plurality of data partitions not associated with the subdomain (Shavit, [0025] – STMStart may begin a new transaction within an executing thread, wherein STMStart is the process that is executed.
[0027] – When a data object 120 is allocated, memory for the locks 135 may be allocated adjacent to the memory for the corresponding partition 130, so that when a given partition is loaded into a cache, the lock for the partition may also be automatically loaded into the cache), wherein the process uses fine-grain locking to protect the second data (Shavit, [0008] – Relatively fine-grained locks (e.g., one lock per memory word) may be implemented by partitioning the data object appropriately). It would have been obvious for one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify Khan's invention in view of Shavit in order to include partitioning with fine-grain locking; this is advantageous because, when a given partition is loaded into a cache, the lock for the partition may also be automatically loaded into the cache (Shavit, paragraph [0027]).

As per claim 22, Khan as modified teaches: The method of claim 21, wherein the second partition is a finer partition within a coarse partition, wherein the fine-grain locking does not apply to the coarse partition as a whole (Khan, a hypervisor to control system resource allocation at a coarse level. [0138] – The application can divide a loop of one million iterations into one thousand iteration codelets, whereas if there are only four cores, it could divide the work into coarser grained blocks because there is no more concurrency to be gained from the hardware and the overhead of fewer codelets is lower, wherein codelets are interpreted as logical partitions because they are divided from one another. Also, the one million iterations is interpreted as the coarse partition wherein the subdivisions of codelets are interpreted as the finer partitions. See also [0134]).
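The hybrid scheme mapped above, in which data is split into partitions and each partition carries its own fine-grained lock stored alongside the data (Shavit, [0008] and [0027]), can be illustrated with a minimal Python sketch. The class and method names are hypothetical, chosen for illustration; they do not come from the application or the cited references.

```python
import threading

class PartitionedStore:
    """Illustrative hybrid scheme: data is hash-partitioned, and each
    partition keeps its own lock next to its data, echoing Shavit's
    lock-adjacent-to-partition layout."""

    def __init__(self, num_partitions=8):
        # One (data, lock) pair per partition.
        self.partitions = [({}, threading.Lock()) for _ in range(num_partitions)]

    def _partition_for(self, key):
        # Hash-based partitioning decides which partition protects a key.
        return self.partitions[hash(key) % len(self.partitions)]

    def put(self, key, value):
        data, lock = self._partition_for(key)
        with lock:  # fine-grained: only this partition is locked
            data[key] = value

    def get(self, key):
        data, lock = self._partition_for(key)
        with lock:
            return data.get(key)

store = PartitionedStore()
store.put("block-17", b"payload")
print(store.get("block-17"))  # b'payload'
```

Because each key maps to exactly one lock, writers to different partitions never contend, while the coarse alternative (one lock over the whole store) would serialize them.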
As per claim 23, Khan as modified teaches: The method of claim 22, comprising: attempting to execute a second process on the second partition while the first process is executing on the second data, wherein the second process is from a second subdomain of the subdomains associated with the second partition (Shavit, [0008] – The method may include using a first non-blocking transaction (such as a hardware transactional-memory (HTM) transaction) to attempt to complete a programmer-specified transaction (e.g., a transaction whose boundaries are indicated by the programmer within application code), wherein attempting to complete a transaction is interpreted as attempting to execute a process, and the boundaries are interpreted as a partition); and in response to determining the second partition is locked, waiting until the process is finished executing on the second data for the second partition to unlock (Shavit, [0039] – If the partition being considered is already locked in WRITE mode by another thread (as detected in block 410), the non-blocking transaction may be aborted and retried after a back-off delay (block 415)).

As per claim 24, Khan as modified teaches: The method of claim 22, comprising: before executing the process on the second data, migrating the second partition to using the fine-grained locking (Khan, [0100] – Migration of executing or soon-to-be-executed codeletsets to exploit locality of resources such as local memory) including: configuring the coarse data partition to exclude one or more fine partitions included therein, other than the second partition, that are subject to the fine-grained locking (Shavit, [0021] – Different partitioning granularities for data object 120, and therefore different locking granularities, may be implemented in various embodiments. For some types of concurrent applications and corresponding data objects 120, fairly coarse-grained locks may be sufficient to provide the desired performance.
User input may be used to determine partition boundaries and/or the mappings between partitions 130 and corresponding locks 135).

As per claim 25, Khan as modified teaches: The method of claim 21, comprising: in response to determining the process has finished executing on the first data (Shavit, [0031] – Return an indication of success if the first HTM transaction succeeds in completing the programmer-specified transaction), identifying a second process from the subdomain queued to execute on the first partition (Khan, [0126] – When all input dependencies are met, the codelet scheduler may place the codelet in the work queue, in certain scenarios reordering the priority of the ready codelets in the queue. Worker cores may repeatedly take tasks from the work queue and run them to completion); and executing the second process on the first data (Khan, [0126] – Worker cores may repeatedly take tasks from the work queue and run them to completion. See also [0182]).

Claims 31-32 and 34-37 are directed to a non-transitory machine-readable medium performing steps recited in claims 21-25 with substantially the same limitations. Therefore, the rejections made to claims 21-25 are applied to claims 31-32 and 34-37.

As per claim 33, Khan as modified teaches: The non-transitory machine-readable medium of claim 32, wherein the instructions cause the at least one machine to: associate each one of the subdomains with a respective data partition of a plurality of data partitions, wherein the execution queue regulates when the processes can access the respective data partition (Khan, [0129] – A fractal hierarchical network of monitoring domains may achieve regulation of a data processing system. For example, in a basic cluster, domains may be: cluster, node, socket, core, hardware thread. A process (which may be the scheduler) at each leaf domain may monitor the health of the hardware and the application (e.g., power consumption, load, progress of program completion, etc).
Monitors at higher levels in the hierarchy may aggregate the information from their child domains (and may optionally add information at their domain, or may require that all monitoring is done by children) and may pass information up to their parents).

Claim 40 is directed to a system performing steps recited in claim 21 with substantially the same limitations. Therefore, the rejection made to claim 21 is applied to claim 40.

Claims 26-30 and 38-39 are rejected under 35 U.S.C. 103 as being unpatentable over Khan in view of Shavit and further in view of Grunwald et al. US 8627331 B1 (hereinafter referred to as "Grunwald").

As per claim 26, Khan as modified does not explicitly teach subdomains in detail. Grunwald, however, teaches: The method of claim 21, comprising: executing another process from another subdomain of the subdomains on other data in another partition of the plurality of data partitions associated with the other subdomain (Grunwald, column 2, lines 63-67 through column 3, lines 1-2 – The technique includes creating a hierarchy of subdomains within the Filesystem domain, where each subdomain owns one or more types of processes and each subdomain operates as a separate execution queue (i.e., only one process can execute at a time in each subdomain). Some of these subdomains are associated with metadata and others are associated with user data), wherein the other process executes in parallel with the process (Grunwald, column 3, lines 7-12 – Based on that structure, certain subdomains are permitted to execute their processes in parallel with the processes of other subdomains, while other subdomains are precluded from executing their processes in parallel with those of any other subdomain or with those of certain other subdomains).
It would have been obvious for one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify Khan's invention as modified in view of Grunwald in order to divide processes with subdomains; this is advantageous because it allows child subdomains to be locked based on a parent lock. Locking different processes is also a known technique used to improve similar devices which utilize a multiprocessing system. This technique yields the predictable result of allowing a high degree of parallelization, which improves throughput of the processing system (Grunwald, Abstract and column 3, lines 10-17).

As per claim 27, Khan as modified with Grunwald teaches: The method of claim 26, comprising: attempting to execute the other process on the first data (Grunwald, column 2, lines 58-60 – Execution of processes in the network storage server is scheduled based on the plurality of mutual exclusion domains); and preventing the other process from executing on the first data when the other subdomain does not have a vertical relationship with the subdomain (Grunwald, column 7, lines 11-20 – This hierarchy permits certain subdomains to execute their processes in parallel with processes of other subdomains, but prevents other subdomains from executing their processes in parallel with processes of any other subdomain or with processes of some other subdomains).
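The vertical-relationship rule cited from Grunwald (subdomains with an ancestral relationship may not run their processes in parallel, while unrelated subdomains may) can be sketched as a simple hierarchy check. `Subdomain` and `may_run_in_parallel` are illustrative names, not identifiers from the references.

```python
class Subdomain:
    """Hypothetical node in a mutual-exclusion subdomain hierarchy,
    illustrating Grunwald's ancestral (vertical) relationship rule."""

    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent

    def is_ancestor_of(self, other):
        # Walk up from `other`; an ancestor appears on that upward path.
        node = other.parent
        while node is not None:
            if node is self:
                return True
            node = node.parent
        return False

def may_run_in_parallel(a, b):
    # A vertical relationship in either direction forbids parallelism.
    return not (a is b or a.is_ancestor_of(b) or b.is_ancestor_of(a))

filesystem = Subdomain("Filesystem")
metadata = Subdomain("Metadata", parent=filesystem)
user_data = Subdomain("UserData", parent=filesystem)

print(may_run_in_parallel(metadata, user_data))   # True: siblings
print(may_run_in_parallel(filesystem, metadata))  # False: parent/child
```

This matches the cited passage: sibling subdomains (e.g., metadata vs. user data) proceed concurrently, while a parent effectively excludes its children for the duration of its own process.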
As per claim 28, Khan as modified with Grunwald teaches: The method of claim 26, comprising: executing the other process on the first data when the other subdomain has an ancestral relationship with the subdomain, wherein the other process executes on the first data when the process is not executing on the first data (Grunwald, column 7, lines 11-20 – Any subdomains that have an ancestral (vertical) relationship to each other within the hierarchy are precluded by the scheduler 48 from executing their processes in parallel with each other, whereas subdomains that do not have an ancestral relationship to each other within the hierarchy are normally permitted to execute their processes in parallel with each other).

As per claim 29, Khan as modified with Grunwald teaches: The method of claim 28, wherein the subdomain is a child of the other subdomain (Grunwald, column 7, lines 28-35 – When running a process in a subdomain, that subdomain in essence has an exclusive writer lock on all of its child subdomains).

As per claim 30, Khan as modified with Grunwald teaches: The method of claim 21, wherein the first data comprises user data and the second data comprises metadata (Grunwald, column 6, lines 55-64 – Processes are associated with one or more specific types of data or metadata upon which they operate; consequently, most of the subdomains can be viewed as being associated with one or more particular classes of data or metadata. Hence, some of these subdomains are dedicated for specific types of metadata and associated processes while others are dedicated for user data and associated processes).

Claims 38-39 are directed to a non-transitory machine-readable medium performing steps recited in claims 26-27 with substantially the same limitations. Therefore, the rejections made to claims 26-27 are applied to claims 38-39.

Response to Arguments

Applicant's arguments filed 1/22/2026 have been fully considered, but they are not persuasive.
Applicant argues in the Remarks of 1/22/2026 that Khan as modified with Shavit does not adequately teach grouping processes executable by the processing elements into a domain, and that Shavit likewise fails to disclose the domain recited by claim 21. In response, Khan teaches in [0013] codeletsets, which are groups of processes. Khan also teaches in [0074] – Computational domain: a set of processing elements that are grouped by locality or function. This appears to read on the claimed limitation.

Applicant argues that Khan as modified with Shavit does not adequately teach subdomains where each one operates as an execution queue for processes therein to operate on a different corresponding data partition of data partitions. Further search and consideration of Khan shows that Khan teaches in [0129] – Enabled codelets can also be migrated in this way. If a level finds that its queues are getting too full or that it is consuming too much power, it can migrate enabled codelets in the same way as described above. In other words, each level has its own queue, wherein this is interpreted as each subdomain operating as its own execution queue for processes.

Applicant finally argues that Khan as modified with Shavit conflicts with the rejection equating the domain's grouping of processes to Khan's set of processing elements. In response, Khan teaches in [0147] – The codeletset system may use a metric space distance model to initially allocate code and data to appropriate local processing elements, and can migrate code and data dynamically, as may be deemed beneficial to optimize system performance with reference to the current goals. This does not appear to contradict the claims. Khan clearly teaches that processes can be initially executed in a local system and the code later migrated to other domains or to levels/children of domains based on performance optimizations.
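The per-level queue interpretation argued above (each level of Khan's hierarchy owns its own execution queue and migrates codelets when that queue grows too full, [0129]) might be sketched as follows. The `DomainLevel` class and its capacity threshold are assumptions for illustration only, not structures from Khan.

```python
from collections import deque

class DomainLevel:
    """Hypothetical hierarchy level with its own execution queue,
    echoing Khan's per-level queues with codelet migration when a
    queue grows too full."""

    def __init__(self, name, parent=None, capacity=2):
        self.name = name
        self.parent = parent
        self.capacity = capacity
        self.queue = deque()

    def enqueue(self, codelet):
        # If this level's queue is full, migrate the codelet up the hierarchy.
        if len(self.queue) >= self.capacity and self.parent is not None:
            self.parent.enqueue(codelet)
        else:
            self.queue.append(codelet)

node = DomainLevel("node")
core = DomainLevel("core", parent=node, capacity=2)

for task in ["c1", "c2", "c3"]:
    core.enqueue(task)

print(list(core.queue))  # ['c1', 'c2']
print(list(node.queue))  # ['c3']  (migrated once the core queue filled)
```

The point of the sketch is structural: queue ownership is per level, so a subdomain can act as an independent execution queue while still participating in hierarchy-wide load shedding.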
In conclusion, based on further search and consideration, the prior art of record teaches the claimed limitations under a broadest reasonable interpretation of the claims in view of what is disclosed in the specification.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

Lasperas et al. US 20170109295 A1 teaches fine-grained and coarse-grained locking in paragraph [0101].
Nadathur et al. US 20140143789 A1 teaches fine-grained locking that manages concurrent execution on multiple processors by dividing a task into many smaller pieces of code in paragraph [0005].
Kahlon et al. US 20120079483 A1 teaches automatic lock insertion in concurrent programs (Title).
LaSalle et al., 2013, "Multi-Threaded Graph Partitioning", Department of Computer Science & Engineering, University of Minnesota.
Bhargava et al. US 8868506 B1 teaches (1) a high-performance database system for storing assets and the associated metadata, (2) computing an inverse delta between two files without generating any intermediate files or deltas, (3) uniquely identifying a digital asset and storing the digital asset's namespace change history in a version control system, (4) inferring dependencies amongst namespace changes, (5) a workflow management tool that is tightly integrated with a version control system, (6) publishing milestones in a project which can consistently maintain the namespace uniqueness invariant, and/or (7) implicitly versioning data and/or files that are associated with certain types of digital assets in a version control system (Abstract).
Testardi et al. US 20070016754 A1 teaches a control path that manages one or more fast paths; the fast path and the control path are utilized in mapping virtual to physical addresses using mapping tables (Abstract).

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a).
Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Matthew Ellis, whose telephone number is (571) 270-3443. The examiner can normally be reached Monday-Friday, 8AM-5PM. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Neveen Abel-Jalil, can be reached at (571) 270-0474. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
March 13, 2026
/MATTHEW J ELLIS/
Primary Examiner, Art Unit 2152

Prosecution Timeline

Jun 12, 2024
Application Filed
Sep 05, 2024
Response after Non-Final Action
Jul 18, 2025
Non-Final Rejection — §103
Oct 28, 2025
Interview Requested
Nov 03, 2025
Applicant Interview (Telephonic)
Nov 08, 2025
Examiner Interview Summary
Jan 22, 2026
Response Filed
Mar 13, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602545
WIDE AND DEEP NETWORK FOR LANGUAGE DETECTION USING HASH EMBEDDINGS
2y 5m to grant Granted Apr 14, 2026
Patent 12591551
GENERATION METHOD, SEARCH METHOD, AND GENERATION DEVICE
2y 5m to grant Granted Mar 31, 2026
Patent 12579136
SEMANTIC PARSING USING EMBEDDING SPACE REPRESENTATIONS OF EXAMPLE NATURAL LANGUAGE QUERIES
2y 5m to grant Granted Mar 17, 2026
Patent 12572571
LEARNING OPTIMIZED METALABEL EMBEDDED RANGE SEARCH STRUCTURES
2y 5m to grant Granted Mar 10, 2026
Patent 12536135
TEMPLATE APPLICATION PROGRAM
2y 5m to grant Granted Jan 27, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 69%
With Interview: 99% (+30.9%)
Median Time to Grant: 3y 3m
PTA Risk: Moderate
Based on 318 resolved cases by this examiner. Grant probability derived from career allow rate.
