Prosecution Insights
Last updated: April 19, 2026
Application No. 18/270,443

METHOD, APPARATUS, AND SYSTEM FOR CREATING TRAINING TASK ON AI TRAINING PLATFORM, AND MEDIUM

Non-Final OA §103
Filed: Jun 29, 2023
Examiner: MILLS, FRANK D
Art Unit: 2194
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Inspur Suzhou Intelligent Technology Co. Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 69% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 6m
With Interview: 92%

Examiner Intelligence

Career Allow Rate: 69% (415 granted / 600 resolved), +14.2% vs Tech Center average (above average)
Interview Lift: +22.8% among resolved cases with an interview (strong)
Typical Timeline: 3y 6m average prosecution; 21 applications currently pending
Career History: 621 total applications across all art units

Statute-Specific Performance

§101: 16.2% (-23.8% vs TC avg)
§102: 11.7% (-28.3% vs TC avg)
§103: 52.0% (+12.0% vs TC avg)
§112: 12.6% (-27.4% vs TC avg)
Tech Center averages are estimates; based on career data from 600 resolved cases.

Office Action

§103
DETAILED ACTION

By preliminary amendment, Applicant cancels claim 9 and adds new claims 12-21. Claims 1-8 and 10-21 are rejected under 35 U.S.C. § 103.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-5, 10-15, 19, and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Seelam et al., U.S. PG-Publication No. 2020/0092392 A1, in view of Misra et al., U.S. PG-Publication No. 2017/0142217 A1, further in view of Parakh et al., U.S. Patent No. 9,621,399 B1, further in view of Zhao et al., U.S. PG-Publication No. 2023/0333898 A1.

Claim 1

Seelam discloses a method for creating a training task on an Artificial Intelligence (AI) training platform. Seelam discloses a “method … to dynamically create a distributed storage cache on the node’s local storage and deploy the training jobs preferably on those nodes.” The method provides “for caching and data-aware placement for accelerations of machine learning application in a multi-tenant computing environment,” wherein data is “cached in a distributed data store to one or more local compute nodes of a cluster of nodes.” A training job is “scheduled, according to cache and data locality awareness, on the one or more local compute nodes with the cached data needed for execution.” Seelam, ¶¶ 23-24.

Seelam discloses the method comprising: dividing nodes of the AI training platform into a plurality of virtual groups in advance according to one or more of switch information of the nodes, local area network information, a total quantity of the nodes, and an application dataset. Seelam discloses that nodes 10 “may be grouped … physically or virtually, in one or more networks, such as Private, Community, Public, or Hybrid clouds” (i.e., grouping based on local area network information and switch information of the nodes). Id. at ¶ 52.

Seelam discloses receiving training task configuration information inputted by a user, and determining task configuration conditions according to the training task configuration information, the task configuration conditions comprising … a quantity of computing resources. Deep learning (DL) jobs are received by the DL job scheduler 414 (e.g., a cache-aware scheduler). The job requests are scheduled “on those compute nodes satisfying the requirements both in terms of resources (e.g., random access memory ‘RAM’) and in terms of data locality.” Id. at ¶ 64. Figure 7 illustrates functionality 700 “relating to data caching and data-aware placement for acceleration of machine learning applications.” User 702 “indicates an intention to train a job by describing the job (e.g., DL job description).” The job is deployed “with an indication of the selected dataset that is to be used and information about how many nodes the job requires.” Id. at ¶¶ 78-79.
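[Editor's note] The virtual-group division recited in claim 1, which the examiner maps to Seelam's grouping at ¶ 52, can be pictured as bucketing nodes by network topology. The sketch below is a minimal editorial illustration, not code from the application or any cited reference; the Node fields and function name are hypothetical.

```python
# Editorial illustration only -- not from the application or the cited art.
# Bucketing nodes into virtual groups by switch and LAN membership, per the
# claim 1 "dividing nodes ... into a plurality of virtual groups" step.
from collections import defaultdict
from dataclasses import dataclass

@dataclass(frozen=True)
class Node:
    name: str
    switch_id: str  # hypothetical: top-of-rack switch identifier
    lan_id: str     # hypothetical: local area network segment

def divide_into_virtual_groups(nodes: list[Node]) -> dict[tuple[str, str], list[Node]]:
    """Nodes sharing both a switch and a LAN fall into the same virtual group."""
    groups: dict[tuple[str, str], list[Node]] = defaultdict(list)
    for node in nodes:
        groups[(node.switch_id, node.lan_id)].append(node)
    return dict(groups)

nodes = [Node("n1", "sw-a", "lan-1"), Node("n2", "sw-a", "lan-1"), Node("n3", "sw-b", "lan-2")]
print(divide_into_virtual_groups(nodes))  # n1 and n2 share a virtual group
```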
Seelam discloses determining whether there are first nodes satisfying the task configuration conditions among the nodes of the AI training platform, and in response to there being first nodes satisfying the task configuration conditions among the nodes of the AI training platform, selecting a target node from the first nodes according to a preset filtering method. The method makes “decisions on which of the nodes will receive the job allocation,” wherein the decision depends on “job requirements, compute capacity at the nodes (e.g., GPU/CPUs/memory) and storage capacity at the nodes.” Depending on the decision, “cache microservice 412 can decide … to cache the data only on a storage of a subset of the compute nodes such as, for example, the same set of nodes responsible for the job allocation” (subset of nodes → target node from the first nodes). Id. at ¶ 67.

Seelam discloses creating a corresponding training task on the target node according to the training task configuration information, and obtaining the corresponding training dataset from a remote data center according to a remote storage path corresponding to the training dataset in the training task configuration information. Deploying a DL distributed job may designate “a number of nodes … to cache the dataset from remote storage.” Id. at ¶¶ 60, 63. The cache microservice 412 “can decide to cache the data in the global data store 420 … in the distributed data cache 424 to speed up access.” Id. at ¶ 67; see also id. at ¶ 62 (“global data store 420 may be a remotely located store”). The cluster nodes “are decoupled from remote storage, thus allowing the infrastructure operator to optimize resources usage and at the same time provide near local storage I/O bandwidth.” Id. at ¶ 23.

Seelam discloses recording a storage path of the training dataset in the independent storage space of the target node. A job is deployed “with an indication of the selected dataset.” Id. at ¶ 78. If the allocated nodes “do not contain the cached datasets in the datasets cache 720, the cache controller 714 may be initiated to commence [bringing] in the dataset for the job to cache the dataset to the datasets cache 720” (i.e., caching the training dataset). Id. at ¶ 81. The DL job description comprises “a reference to the dataset indicating the user 702 intends to work with” (reference to dataset → storage path of the training dataset). Id. at ¶ 82.
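[Editor's note] The resource-and-locality filtering the examiner reads onto the "first nodes" and "target node" limitations can be sketched as a two-step filter: keep nodes meeting the configured resources, then prefer data locality. This is an editorial sketch under assumed data structures (NodeState and its fields are hypothetical), not Seelam's implementation.

```python
# Editorial illustration only. A cache-aware filter in the spirit of
# Seelam ¶¶ 64 and 67: select among nodes that satisfy the task
# configuration conditions, preferring nodes that already cache the dataset.
from dataclasses import dataclass, field

@dataclass
class NodeState:
    name: str
    free_gpus: int
    free_ram_gb: int
    cached_datasets: set[str] = field(default_factory=set)

def select_target_node(nodes: list[NodeState], gpus: int, ram_gb: int, dataset_id: str):
    # "First nodes": those satisfying the task configuration conditions.
    first_nodes = [n for n in nodes if n.free_gpus >= gpus and n.free_ram_gb >= ram_gb]
    if not first_nodes:
        return None  # a fallback path (cf. claim 2) would take over here
    # A preset filtering method: data locality first, then most free GPUs.
    first_nodes.sort(key=lambda n: (dataset_id not in n.cached_datasets, -n.free_gpus))
    return first_nodes[0]
```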
Seelam does not disclose dividing a preset quota of disk space from each of the nodes to form a shared storage space of each of the virtual groups, wherein each shared storage space corresponds to a distributed caching system; and the independent storage space being a remaining disk space divided from the disk space beyond the preset quota of disk space.

Misra discloses dividing a preset quota of disk space from each of the nodes to form a shared storage space of each of the virtual groups, wherein each shared storage space corresponds to a distributed caching system; and the independent storage space being a remaining disk space divided from the disk space beyond the preset quota of disk space. Misra discloses “methods for adaptive partitioning of distributed cache systems,” wherein the “total cache of the cluster is divided into a number of slices, which are associated with the computer nodes of the cluster.” Misra, ¶ 12. The clusters “can implement distributed storage memory systems, e.g., distributed cache systems,” wherein the “total storage capacity of the cluster is typically divided into a number [of] slices of some standard size,” and “[e]ach slice can be local to one and only one computer node” (slice of a standard size, each slice only on one node → preset quota of disk space from each of the nodes). Id. at ¶ 2.

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the method of distributed caching for acceleration of machine learning applications of Seelam to incorporate the method of adaptive partitioning of distributed cache systems using standard-size slices as taught by Misra. One of ordinary skill in the art would be motivated to integrate adaptive partitioning of distributed cache systems using standard-size slices into Seelam, with a reasonable expectation of success, in order to enable a distributed system to reassign slices “to a node based on locality of access … which can result in minimizing the effect on the network,” thereby causing “network utilization reduction,” and also to improve scalability, because the system can “better adapt in access patterns, for example, in case of new applications, application failures, or for load balancing.” See Misra, ¶ 19.

Seelam-Misra does not expressly disclose caching the training dataset into an independent storage space of the target node … the independent storage space being a remaining disk space divided from the disk space beyond the preset quota of disk space.

Parakh discloses caching the training dataset into an independent storage space of the target node … the independent storage space being a remaining disk space divided from the disk space beyond the preset quota of disk space. Parakh discloses a “distributed caching system (DCS) … that cache[s] data items across multiple computing devices on a network.” Parakh, 2:39-41. Nodes in a system may all “include a local cache” (local cache → independent storage space). Id. at 3:21-36. The method “can support multiple levels of cache lookup,” wherein “the first level involves … performing a local lookup of the requested data items.” On a local cache miss, the method proceeds “to the second level by searching on the network at the additional external caches 145a-c for the data items” (external cache → shared storage space). Id. at 4:52-61. If the external cache search “finds the requested data,” it then “sends the requested data to the front-end system 102” (i.e., target node), and the “front-end system 102 saves the data item to its own cache” (i.e., caches the data into an independent storage space of the target node). Id. at 11:55-67.

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the method of distributed caching for acceleration of machine learning applications of Seelam-Misra to incorporate multiple levels of caching as taught by Parakh. One of ordinary skill in the art would be motivated to integrate multiple levels of caching into Seelam-Misra, with a reasonable expectation of success, because “local memory access is significantly faster [than] accessing memory over the network, thus, avoiding network calls tends to increase[] performance.” Parakh, 7:3-5.
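[Editor's note] The storage layout the examiner assembles from Misra and Parakh, a preset-quota slice per node pooled into the virtual group's shared distributed cache with the remainder kept as each node's independent space, can be sketched in a few lines. The quota value and names are hypothetical; this is an editorial illustration, not any reference's implementation.

```python
# Editorial illustration only. Splitting each node's disk into a standard
# slice (pooled into the group's shared distributed cache, per Misra ¶ 2)
# and an independent remainder (the local cache, per Parakh 3:21-36).
from dataclasses import dataclass

PRESET_QUOTA_GB = 200  # hypothetical standard slice size

@dataclass
class DiskPartition:
    node: str
    shared_gb: int       # contributed to the group's distributed cache
    independent_gb: int  # remaining local, independent storage space

def partition_group(disks: dict[str, int]) -> tuple[list[DiskPartition], int]:
    """disks maps node name -> total disk GB; returns per-node splits and pool size."""
    parts = [DiskPartition(node, PRESET_QUOTA_GB, total - PRESET_QUOTA_GB)
             for node, total in disks.items()]
    return parts, sum(p.shared_gb for p in parts)

parts, pool_gb = partition_group({"n1": 1000, "n2": 800})
print(pool_gb)  # 400 GB shared storage space for this virtual group
```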
Seelam-Misra-Parakh does not disclose the task configuration conditions comprising a size of a training dataset.

Zhao discloses the task configuration conditions comprising a size of a training dataset. Zhao discloses a “method … for a deep learning training task” that obtains “deep learning training task parameters input by a user” and selects, “according to the deep learning training task parameters, a GPU that satisfies the deep learning task parameters and has a minimum remaining resource quantity from a single server node” in a multi-machine type task. Zhao, ¶¶ 5, 7, 10. A GPU (i.e., node) is selected “that satisfies conditions of the neural network model, the data set, and the batch size and has a minimum remaining resource quantity to work.” Id. at ¶¶ 14, 16, 40. The “deep learning task parameters include … a training batch size” (training batch size → size of a training dataset). Id. at ¶ 71.

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the method of distributed caching for acceleration of machine learning applications of Seelam-Misra-Parakh to incorporate selecting a node based on training batch size as taught by Zhao. One of ordinary skill in the art would be motivated to integrate selecting a node based on training batch size into Seelam-Misra-Parakh, with a reasonable expectation of success, in order to ensure “maximization of the utilization rate of the GPU resources.” Zhao, ¶ 77.

Claim 2

Seelam discloses wherein, after the determining that none of the nodes in the AI training platform satisfy the task configuration conditions, the method further comprises: determining whether there are first virtual groups with shared storage spaces satisfying the size of the training dataset among the virtual groups, and in response to there being the first virtual groups among the virtual groups, determining whether there are second nodes with computing resources satisfying the quantity of computing resources among the first virtual groups; in response to there being the second nodes, using the virtual groups corresponding to the second nodes as second virtual groups, and selecting a target virtual group from the second virtual groups; and when there is one of the second nodes in the target virtual group, using the one of the second nodes in the target virtual group as the target node, obtaining the corresponding training dataset from the remote data center through the corresponding distributed caching system, and caching the training dataset into the shared storage space of the target virtual group. Seelam discloses that “[i]n the case where data is not cached on any subset of nodes, or where data is cached on some subset of nodes, but those nodes do not have sufficient resources to host the job … then the cache scheduler 710 may decide to start caching the data on a different subset of nodes” (different subset of nodes → second nodes with computing resources satisfying the quantity of computing resources). Seelam, ¶ 80.
Zhao discloses or, when there is a plurality of the second nodes in the target virtual group, using a second node with a quantity of remaining computing resources closest to the quantity of computing resources in the target virtual group as the target node. Zhao discloses a “method … for a deep learning training task” that obtains “deep learning training task parameters input by a user” and selects, “according to the deep learning training task parameters, a GPU that satisfies the deep learning task parameters and has a minimum remaining resource quantity from a single server node” in a multi-machine type task (minimum remaining resource quantity → quantity of remaining resources closest to the quantity of computing resources). Zhao, ¶¶ 5, 7, 10. A GPU (i.e., node) is selected “that satisfies conditions of the neural network model, the data set, and the batch size and has a minimum remaining resource quantity to work.” Id. at ¶¶ 14, 16, 40. The “deep learning task parameters include … a training batch size” (training batch size → size of a training dataset). Id. at ¶ 71.

Seelam discloses obtaining the corresponding training dataset from the remote data center through the corresponding distributed caching system, and caching the training dataset into the shared storage space of the target virtual group. A job is deployed “with an indication of the selected dataset.” Seelam, ¶ 78. If the allocated nodes “do not contain the cached datasets in the datasets cache 720, the cache controller 714 may be initiated to commence [bringing] in the dataset for the job to cache the dataset to the datasets cache 720” (i.e., caching the training dataset). Id. at ¶ 81.

Claim 3

Zhao discloses determining whether there are nodes with independent storage spaces satisfying the size of the training dataset among the nodes of the AI training platform, and in response to there being nodes with independent storage spaces satisfying the size of the training dataset among the nodes of the AI training platform, determining whether there are first nodes with computing resources satisfying the quantity of computing resources among the nodes satisfying the size of the training dataset. Zhao discloses a “method … for a deep learning training task” that obtains “deep learning training task parameters input by a user” and selects, “according to the deep learning training task parameters, a GPU that satisfies the deep learning task parameters and has a minimum remaining resource quantity from a single server node” in a multi-machine type task. Zhao, ¶¶ 5, 7, 10. A GPU (i.e., node) is selected “that satisfies conditions of the neural network model, the data set, and the batch size and has a minimum remaining resource quantity to work.” Id. at ¶¶ 14, 16, 40. The “deep learning task parameters include … a training batch size” (training batch size → size of a training dataset). Id. at ¶ 71.

Claim 4

Zhao discloses comparing the independent storage space of each of the first nodes with the size of the training dataset, and selecting a first node with the independent storage space closest to the size of the training dataset as the target node. Zhao discloses a “method … for a deep learning training task” that obtains “deep learning training task parameters input by a user” and selects, “according to the deep learning training task parameters, a GPU that satisfies the deep learning task parameters and has a minimum remaining resource quantity from a single server node” in a multi-machine type task (minimum remaining resource quantity → closest to the size). Zhao, ¶¶ 5, 7, 10. A GPU (i.e., node) is selected “that satisfies conditions of the neural network model, the data set, and the batch size and has a minimum remaining resource quantity to work.” Id. at ¶¶ 14, 16, 40. The “deep learning task parameters include … a training batch size” (training batch size → size of a training dataset). Id. at ¶ 71.
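[Editor's note] Zhao's "minimum remaining resource quantity" selection, which the examiner maps to choosing the node whose remaining resources are closest to the request, is in effect a best-fit policy. A minimal editorial sketch, with hypothetical names:

```python
# Editorial illustration only. Best-fit selection in the spirit of Zhao:
# among nodes that satisfy the request, pick the one whose free resources
# exceed the request by the smallest margin.
def best_fit_node(nodes: list[tuple[str, int]], gpus_needed: int):
    """nodes: (name, free_gpus) pairs. Returns the best-fit node name, or None."""
    candidates = [(free - gpus_needed, name)
                  for name, free in nodes if free >= gpus_needed]
    return min(candidates)[1] if candidates else None

print(best_fit_node([("n1", 8), ("n2", 4), ("n3", 2)], gpus_needed=3))  # n2
```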
Claim 5

Parakh discloses determining whether the training dataset is cached in the independent storage space of each of the nodes of the AI training platform, in response to the training dataset being cached in the independent storage space of each of the nodes of the AI training platform, selecting the target node satisfying the quantity of computing resources from the nodes with a cached training dataset, and creating the training task on the target node. Figure 4 illustrates “a lookup routine 400 to lookup data items in the distributed caching system.” At 405, a node receives a first request for a first data item. At 410, the node “determines whether the first data item is stored in local in-memory cache 220 (sometimes referred to herein as an internal cache or a local cache).” At 415, a local cache hit causes the method to obtain the first data item from the internal cache (435). Parakh, 12:1-51; FIG. 4.

Parakh discloses, in response to the training dataset being not cached in the independent storage space of each of the nodes of the AI training platform, determining whether the training dataset is cached in the shared storage space of each of the virtual groups. If the data “is not in the local in-memory cache (e.g., a cache miss),” the system proceeds to 420, where the node “identifies … an external cache designated to store the first data item” (e.g., shared storage space). At 422, the node “determines whether the external cache is storing the first data item.” An external cache hit causes the method to obtain the first data item from the external cache (425). Id. at 12:52-13:18; FIG. 4.

Seelam discloses, in response to there being a virtual group with the cached training dataset, determining whether there are nodes satisfying the quantity of computing resources from the nodes of the virtual group with the cached training dataset, in response to there being nodes satisfying the quantity of computing resources from the nodes of the virtual group with the cached training dataset, selecting the target node from the nodes satisfying the quantity of computing resources, and creating the training task on the target node. The method makes “decisions on which of the nodes will receive the job allocation,” wherein the decision depends on “job requirements, compute capacity at the nodes (e.g., GPU/CPUs/memory) and storage capacity at the nodes.” Depending on the decision, “cache microservice 412 can decide … to cache the data only on a storage of a subset of the compute nodes such as, for example, the same set of nodes responsible for the job allocation” (subset of nodes → target node from the first nodes). Seelam, ¶ 67.

Seelam discloses, in response to there being no virtual group with the cached training dataset or no node satisfying the quantity of computing resources, determining whether there are first nodes satisfying the task configuration conditions among the nodes of the AI training platform. Seelam discloses that “[i]n the case where data is not cached on any subset of nodes, or where data is cached on some subset of nodes, but those nodes do not have sufficient resources to host the job … then the cache scheduler 710 may decide to start caching the data on a different subset of nodes” (different subset of nodes → second nodes with computing resources satisfying the quantity of computing resources). Seelam, ¶ 80.
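[Editor's note] The claim 5 flow the examiner pieces together, local (independent) cache first, then the group's shared cache, then the remote data center, mirrors Parakh's FIG. 4 lookup routine. A hedged editorial sketch, with hypothetical stand-ins for the caches and the remote fetch:

```python
# Editorial illustration only. Two-level lookup in the spirit of Parakh
# FIG. 4, extended with the remote fallback described by Seelam.
def lookup_dataset(dataset_id: str, local_cache: dict, shared_cache: dict, fetch_remote):
    if dataset_id in local_cache:        # level 1: the node's independent space
        return local_cache[dataset_id]
    if dataset_id in shared_cache:       # level 2: the virtual group's shared space
        data = shared_cache[dataset_id]
        local_cache[dataset_id] = data   # Parakh: node saves the item to its own cache
        return data
    data = fetch_remote(dataset_id)      # miss at both levels: go to the data center
    shared_cache[dataset_id] = data      # warm the shared cache for the group
    local_cache[dataset_id] = data
    return data
```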
Claims 10 and 12-15

Claims 10 and 12-15 are rejected using the rationale set forth above for claims 1-5; the claims are directed to a system performing the method.

Claim 11

Claim 11 is rejected using the rationale set forth above for claim 1; the claim is directed to a medium storing instructions corresponding to the method.

Claim 19

Seelam discloses dividing the nodes that are located in a same local area network and disposed on a same switch into a same virtual group; or selecting some of the nodes according to a size of the application dataset, and dividing the some of the nodes into a same virtual group. Figure 4 illustrates that the DL job scheduler 414 may group nodes based on “job applications 422A, 422B, and/or 422C on those compute nodes satisfying the requirements both in terms of resources … and in terms of data locality.” Seelam, ¶ 64. Metadata maintains information about a given job, including “node placement, size of dataset and whether the datasets are cached or not.” Id. at ¶ 78. Decisions for scheduling jobs using the cache-aware scheduler are based on the “size of the cached data, and location of the cached data” (grouping nodes based on size of the cached data). Id. at ¶ 85.

Claim 21

Zhao discloses, in response to a remaining space of the shared storage space of each of the virtual groups not satisfying the size of the training dataset, or in response to each node in each of the second virtual groups not satisfying the quantity of computing resources, returning a reminder message about training task creation failure. Zhao discloses an embodiment wherein the method searches “whether a GPU that satisfies the deep learning training task parameters and has a minimum remaining resource quantity exists … in the single server node; and wait for next dispatch when the GPUs do not exist.” Zhao, ¶ 44. “When the GPUs do not exist, it is considered that no appropriate resource is found in this dispatch, thereby waiting for next dispatch” (wait for next dispatch → reminder message about training task creation failure). Id.

Claims 6-8 and 16-18 are rejected under 35 U.S.C. 103 as being unpatentable over Seelam et al., U.S. PG-Publication No. 2020/0092392 A1, in view of Misra et al., U.S. PG-Publication No. 2017/0142217 A1, further in view of Parakh et al., U.S. Patent No. 9,621,399 B1, further in view of Zhao et al., U.S. PG-Publication No. 2023/0333898 A1, further in view of Yang et al., U.S. PG-Publication No. 2020/0302334 A1.

Claim 6

Yang discloses, in response to there being no first virtual group, reconfiguring the shared storage space of one or more of the virtual groups according to the size of the training dataset to update the shared storage space of one or more of the virtual groups. Yang discloses that “minibatch stochastic gradient descent (SGD) is a method for machine learning.” Distributed implementations of minibatch SGD “use multiple learners each with a local copy of the machine learning model.” Yang, ¶ 2. Yang discloses a method “for distributed SGD … which minimized access to the storage system … by exploiting cached content,” wherein a “minibatch is a subset of training data, e.g., resulting from the training data split into smaller batches.” Data loading is performed “from caches of other learners” via an “aggregated cache.” Id. at ¶ 18. The size of the cache “is allocated so that the cache is large enough to hold a subset of the dataset” (i.e., configuring the shared storage space according to the size of the training dataset). Id. at ¶ 24.
Each node “stores a subset of the dataset” in a cache. The dataset is “split into partitions, and each of the compute nodes … can be allowed to load a partition.” The size of the partitions “can be equal” (i.e., preset quota). Id. at ¶ 26. Further, Yang states that “there can be any number of compute nodes participating in the distributed machine learning.” Id. at ¶ 20.

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the method of distributed caching for acceleration of machine learning applications of Seelam-Misra-Parakh-Zhao to incorporate the distributed caching methods taught by Yang. One of ordinary skill in the art would be motivated to integrate the distributed caching features into Seelam-Misra-Parakh-Zhao, with a reasonable expectation of success, in order to improve performance using “a data loading scheme … which can minimize both accesses to a storage system and communication traffic.” See Yang, ¶ 43.

Claim 7

Yang discloses resetting the preset quota according to the size of the training dataset, and reconfiguring the shared storage space of one or more of the virtual groups according to a new preset quota to update the shared storage space of one or more of the virtual groups. The size of the cache “is allocated so that the cache is large enough to hold a subset of the dataset” (i.e., configuring the shared storage space according to the size of the training dataset). Yang, ¶ 24. Each node “stores a subset of the dataset” in a cache. The dataset is “split into partitions, and each of the compute nodes … can be allowed to load a partition.” The size of the partitions “can be equal” (i.e., preset quota). Id. at ¶ 26. Further, Yang states that “there can be any number of compute nodes participating in the distributed machine learning.” Id. at ¶ 20.

Claim 8

Yang discloses adding a new node to one or more of the virtual groups according to the size of the training dataset, and dividing a preset quota of disk space from the new node to the shared storage space of the one or more virtual groups to update the shared storage space of the one or more of the virtual groups. The size of the cache “is allocated so that the cache is large enough to hold a subset of the dataset” (i.e., configuring the shared storage space according to the size of the training dataset). Yang, ¶ 24. Each node “stores a subset of the dataset” in a cache. The dataset is “split into partitions, and each of the compute nodes … can be allowed to load a partition.” The size of the partitions “can be equal” (i.e., preset quota). Id. at ¶ 26. Further, Yang states that “there can be any number of compute nodes participating in the distributed machine learning,” id. at ¶ 20, and that performance is “gained by increasing the number of learners, such that the requirement of loading the whole dataset per training epoch does not pose a bottleneck.” Id. at ¶ 17.

Claims 16-18

Claims 16-18 are rejected using the rationale set forth above for claims 6-8; the claims are directed to a system performing the method.
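[Editor's note] Claims 7 and 8 describe two ways to grow a virtual group's shared cache when the training dataset does not fit: raise the preset quota, or add a node that contributes a fresh quota slice. Both reduce to a few lines of arithmetic; this is an editorial sketch, not any reference's code.

```python
# Editorial illustration only. Two reconfiguration options for a group's
# shared cache, in the spirit of claims 7 (reset the quota) and 8 (add nodes).
import math

def resize_quota(num_nodes: int, dataset_gb: int, old_quota_gb: int) -> int:
    """Claim 7 flavor: new per-node quota so the pooled cache fits the dataset."""
    return max(old_quota_gb, math.ceil(dataset_gb / num_nodes))

def nodes_to_add(num_nodes: int, dataset_gb: int, quota_gb: int) -> int:
    """Claim 8 flavor: keep the quota, add nodes until the pool is big enough."""
    return max(0, math.ceil(dataset_gb / quota_gb) - num_nodes)

print(resize_quota(num_nodes=4, dataset_gb=1000, old_quota_gb=200))  # 250
print(nodes_to_add(num_nodes=4, dataset_gb=1000, quota_gb=200))      # 1
```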
Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Seelam et al., U.S. PG-Publication No. 2020/0092392 A1, in view of Misra et al., U.S. PG-Publication No. 2017/0142217 A1, further in view of Parakh et al., U.S. Patent No. 9,621,399 B1, further in view of Zhao et al., U.S. PG-Publication No. 2023/0333898 A1, further in view of Liu et al., "A Self-Organizing Distributed Memory Cache for Data Sharing Applications in Cluster Environment," 2013 IEEE 10th International Conference on High Performance Computing and Communications & 2013 IEEE International Conference on Embedded and Ubiquitous Computing, Zhangjiajie, China, 2013, pp. 158-164.

Claim 20

Liu discloses mounting the distributed caching system to each node in the respective virtual groups through filesystem in user space (FUSE). Liu discloses methods for programs running on different nodes that are “organized together and run as subprocesses of a MPI-based parallel program” (message passing interface). The program “maintains a distributed memory cache,” wherein “[a]ll the assigned nodes together form a working unit.” The distributed memory cache stores the files shared by all MPI processes. Liu, 158. The authors developed “a FUSE filesystem to mount and interact with the distributed memory cache.” Id. at 161.

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the method of distributed caching for acceleration of machine learning applications of Seelam-Misra-Parakh-Zhao to incorporate the distributed caching using FUSE taught by Liu. One of ordinary skill in the art would be motivated to integrate the distributed caching features into Seelam-Misra-Parakh-Zhao, with a reasonable expectation of success, in order to improve user flexibility by enabling users to “create their own filesystems without editing kernel code.” Liu, 161.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to FRANK D MILLS, whose telephone number is (571) 270-3172. The examiner can normally be reached M-F, 10-6 ET.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, KEVIN YOUNG, can be reached at (571) 270-3180. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/FRANK D MILLS/
Primary Examiner, Art Unit 2194
February 21, 2026
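[Editor's note] For readers unfamiliar with the FUSE mechanism the examiner relies on for claim 20, the sketch below mounts a toy in-memory dataset cache as a read-only user-space filesystem. It assumes the third-party fusepy bindings and is an editorial illustration, not Liu's implementation.

```python
# Editorial illustration only; assumes the fusepy package (pip install fusepy).
# A minimal read-only FUSE filesystem exposing an in-memory dataset cache,
# loosely in the spirit of Liu's FUSE-mounted distributed memory cache.
import errno
import stat
import sys

from fuse import FUSE, FuseOSError, Operations

class CacheFS(Operations):
    def __init__(self, cache: dict[str, bytes]):
        self.cache = cache  # filename -> cached dataset bytes

    def getattr(self, path, fh=None):
        if path == "/":
            return {"st_mode": stat.S_IFDIR | 0o755, "st_nlink": 2}
        name = path.lstrip("/")
        if name not in self.cache:
            raise FuseOSError(errno.ENOENT)
        return {"st_mode": stat.S_IFREG | 0o444, "st_nlink": 1,
                "st_size": len(self.cache[name])}

    def readdir(self, path, fh):
        return [".", ".."] + list(self.cache)

    def read(self, path, size, offset, fh):
        return self.cache[path.lstrip("/")][offset:offset + size]

if __name__ == "__main__":
    # Usage: python cachefs.py /mnt/dataset-cache
    FUSE(CacheFS({"train.tfrecord": b"..."}), sys.argv[1], foreground=True)
```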

Prosecution Timeline

Jun 29, 2023: Application Filed
Feb 21, 2026: Non-Final Rejection under §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596575 · DATA STREAMING PIPELINE FOR COMPUTE MAPPING SYSTEMS AND APPLICATIONS
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12591453 · METHOD AND SYSTEM FOR MULTI-CORE LOAD SCHEDULING IN AN OPERATING SYSTEM (OS) LESS COMMUNICATION NETWORK
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12566642 · NODE MANAGEMENT METHOD, DEVICE AND APPARATUS, STORAGE MEDIUM, AND SYSTEM
Granted Mar 03, 2026 (2y 5m to grant)

Patent 12554544 · FRAMEWORK FOR PROVISIONING AN APPLICATION RESOURCE FOR AN APPLICATION IN USE WITH A CONTROLLED CONTENT REPOSITORY
Granted Feb 17, 2026 (2y 5m to grant)

Patent 12554877 · Context-Aware Text Sanitization
Granted Feb 17, 2026 (2y 5m to grant)

Based on this examiner's 5 most recent grants; study what changed in each case to get past this examiner.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 69%
With Interview: 92% (+22.8%)
Median Time to Grant: 3y 6m
PTA Risk: Low
Based on 600 resolved cases by this examiner. Grant probability is derived from the career allow rate.
