Prosecution Insights
Last updated: April 19, 2026
Application No. 17/879,835

SYSTEMS AND METHODS FOR AUTO-TIERED DATA STORAGE FOR DATA INTENSIVE APPLICATIONS

Non-Final OA: §101, §103, §112
Filed: Aug 03, 2022
Examiner: ZECHER, CORDELIA P K
Art Unit: 2100
Tech Center: 2100 (Computer Architecture & Software)
Assignee: OVH
OA Round: 2 (Non-Final)
Grant Probability: 50% (Moderate)
Expected OA Rounds: 2-3
Time to Grant: 3y 8m
Grant Probability With Interview: 76%

Examiner Intelligence

Career Allow Rate: 50% of resolved cases (253 granted / 509 resolved; -5.3% vs TC avg)
Interview Lift: +25.8% (strong; measured on resolved cases with interview)
Typical Timeline: 3y 8m avg prosecution; 287 applications currently pending
Career History: 796 total applications across all art units

Statute-Specific Performance

§101: 19.0% (-21.0% vs TC avg)
§103: 46.8% (+6.8% vs TC avg)
§102: 13.1% (-26.9% vs TC avg)
§112: 16.0% (-24.0% vs TC avg)

Comparisons are against the Tech Center average estimate; based on career data from 509 resolved cases.

Office Action

§101, §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

Per the paper filed November 20, 2025, claims 1-20 are pending for examination, with an August 13, 2021 priority date under 35 U.S.C. §119(a)-(d) or (f). Claims 1-3, 6, 15-17, and 19-20 are amended. No claim is canceled or added.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. §112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. §112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 10 and 13 are rejected under 35 U.S.C. §112(b) or 35 U.S.C. §112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention.

Claim 10 recites the limitation "the first identifier" in the limitation of "if determination is made, by the server, that the first and second identifiers are identical, transmitting an acknowledge signal to the user device indicative of a successful modification of the data object". There is insufficient antecedent basis for this limitation in the claim. Claims 1 and 10 recite "an identifier", "the first identifier", and "a second identifier"; "the first identifier" therefore lacks antecedent basis.

Claim 13 recites "a dedicated processing unit", which is unclear: it is not apparent what characteristics of a processing unit would be construed as "dedicated".
Said feature is not cited in the present Office action pending further clarification.

Claim Rejections - 35 USC § 101

35 U.S.C. §101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. §101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: Claims 1-14 and 20 are directed to a method, and claims 15-19 are directed to a server. Thus, each of these claims is directed to one of the four statutory categories of patent eligible subject matter.

Step 2A Prong 1: Claims 1, 15, and 20 recite:

"maintaining, by the server, a list of modifications executed on the training dataset that occurred on the virtual object storage service since the fetched training dataset has been stored on the first local storage device, each entry of the list of modifications comprising at least one of an identifier of a data object of the training dataset, a type of modification made to the data object and/or a temporal indication associated with a modification made to the data object"; maintaining such a list is an evaluation that can be carried out by a human in the mind or with pen and paper, and is thus a mental process.

"after receiving, by the server, a request to initiate training of the machine learning model, generating a synchronized training dataset by applying the list of modifications to the fetched training dataset, the synchronized training dataset mirroring the training dataset stored in the virtual object storage service"; generating the synchronized dataset upon such a request is an evaluation that can be carried out by a human in the mind or with pen and paper, and is thus a mental process.
Claim 2 recites:

"identifying, by the server, based on the list of modifications, a first set of data objects of the training dataset, the data objects of the first set of data objects having been subject to a modification since the storing of the fetched training dataset in the first local storage device"; identifying data objects is an evaluation that can be carried out by a human in the mind or with pen and paper, and is thus a mental process.

"identifying, by the server, based on the list of modifications, a second set of data objects of the training dataset, the data objects of the second set of data objects having not been subject to a modification since the storing of the fetched training dataset in the first local storage device"; identifying data objects is an evaluation that can be carried out by a human in the mind or with pen and paper, and is thus a mental process.

Step 2A Prong 2: This judicial exception is not integrated into a practical application because the additional elements are as follows.

Claims 1, 15, and 20 recite:

"fetching, by the server, from the virtual object storage service, the training dataset; copying the fetched training dataset on a first local storage device, the first local storage device being communicably connected to the server"; this limitation amounts to data gathering and insignificant extra-solution activity, per MPEP 2106.05(g) and MPEP 2106.05(d)(II).

"fetching training data from the synchronized training dataset stored in the second local storage device as the training of the machine learning model is executed"; this limitation amounts to data gathering and insignificant extra-solution activity, per MPEP 2106.05(g) and MPEP 2106.05(d)(II).
Claim 2 recites:

"fetching the first set of data objects from the virtual object storage service; fetching the second set of data objects from the first local storage device; and storing the synchronized training dataset in a second local storage device comprises: storing the fetched first and second set of data objects in the second local storage device"; this limitation amounts to data gathering and insignificant extra-solution activity, per MPEP 2106.05(g) and MPEP 2106.05(d)(II).

Step 2B: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements are as follows.

Claims 1, 15, and 20 recite:

"training a machine learning model based on a training dataset, the training dataset being formed by data objects distributed across a virtual object storage service, the method being executable by a server configured to access the virtual object storage service"; this limitation amounts to nothing more than an instruction to apply the abstract idea using a generic computer, per MPEP 2106.05(f).

"storing the synchronized training dataset in a second local storage device, the second local storage device being communicably connected to the server, the second local storage device having a lower data retrieval latency than the first local storage device"; this limitation amounts to nothing more than an instruction to apply the abstract idea using a generic computer, per MPEP 2106.05(f).
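The staged data flow recited in claims 1, 15, and 20 (fetch from the object store, copy to a slower first local tier, materialize a synchronized copy on a faster second tier, and read from that tier during training) can be sketched as follows. This is a minimal illustration only: every class, method, and variable name is hypothetical, and the application discloses no particular implementation.

```python
# Illustrative sketch of the two-tier staging flow recited in claims 1, 15, and 20.
# All names here are hypothetical; the application does not disclose code.

class TieredStager:
    def __init__(self, object_store: dict):
        self.object_store = object_store   # stands in for the virtual object storage service
        self.first_tier = {}               # slower local storage device
        self.second_tier = {}              # faster local storage device (lower retrieval latency)

    def fetch_to_first_tier(self):
        """Fetch the training dataset and copy it to the first local storage device."""
        self.first_tier = dict(self.object_store)

    def materialize_synchronized_copy(self, modifications: list):
        """Apply the modification list to the staged copy, then store the
        synchronized training dataset on the second (faster) tier."""
        synced = dict(self.first_tier)
        for object_id, new_value in modifications:
            synced[object_id] = new_value
        self.second_tier = synced

    def fetch_training_data(self, object_id):
        """During training, read from the low-latency second tier."""
        return self.second_tier[object_id]


store = {"img_001": b"cat", "img_002": b"dog"}
stager = TieredStager(store)
stager.fetch_to_first_tier()
store["img_002"] = b"bird"                                # object modified after staging
stager.materialize_synchronized_copy([("img_002", b"bird")])
```

After the replay, `stager.fetch_training_data("img_002")` returns the post-modification value, so the second tier mirrors the object store without re-fetching unmodified objects.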
Dependent Claims

Claims 2-3, 6-8, 11-14, 16-17, and 19 are also rejected under 35 U.S.C. §101 for the following reasons.

Claims 2 and 16 recite: "fetching the first set of data objects from the virtual object storage service; fetching the second set of data objects from the first local storage device; and storing the synchronized training dataset in a second local storage device comprises: storing the fetched first and second set of data objects in the second local storage device"; this limitation amounts to well-understood, routine, conventional activity and fails to integrate the judicial exception into a practical application, per MPEP 2106.05(d).

Claims 3 and 17 recite: "identifying, by the server, based on the list of modifications, a first set of data objects of the training dataset, the data objects of the first set of data objects having been subject to a modification since the storing of the fetched training dataset in the first local storage device"; this limitation amounts to well-understood, routine, conventional activity and fails to integrate the judicial exception into a practical application, per MPEP 2106.05(d).

"executing, by the server, modifications of the list of modifications that correspond to relevant entries of the list of modifications on corresponding data objects of a copy of the fetched training dataset, the relevant entries being indicative of modifications executed after a storing the fetched training dataset in a first local storage device"; this limitation amounts to well-understood, routine, conventional activity and fails to integrate the judicial exception into a practical application, per MPEP 2106.05(d).
Claims 6 and 19 recite: "wherein the machine learning model is a first machine learning model, the training dataset is a first training dataset, the synchronized training dataset is a first synchronized training dataset, and the list of modifications is a first list of modifications, and the virtual object storage service comprises second data objects distributed thereacross forming a second training dataset"; this limitation amounts to well-understood, routine, conventional activity and fails to integrate the judicial exception into a practical application, per MPEP 2106.05(d).

Claim 7 recites: "wherein the training of the first machine learning model is executed in response to receiving, by the server, a first request signal from a first user device associated with a first user, and the training of the second machine learning model is executed in response to receiving, by the server, a second request signal from a second user device associated with a second user"; this limitation amounts to nothing more than an instruction to apply the abstract idea using a generic computer, per MPEP 2106.05(f).

Claim 8 recites: "wherein the server and the virtual object storage service are communicably connected to a user device associated with a user, the generation of the synchronized training dataset on the second storage device resulting from the reception, by the server, of the request signal for training the machine learning model emitted by the user device"; this limitation amounts to well-understood, routine, conventional activity and fails to integrate the judicial exception into a practical application, per MPEP 2106.05(d).
Claim 11 recites: "wherein fetching, by the server, from the virtual object storage service, the training dataset comprises generating a snapshot of the virtual object storage service, and storing the fetched training dataset in the first local storage device comprises storing the snapshot in the first local storage device, the list of modification being indicative of modifications executed on the training dataset distributed across the virtual object storage service since the generation of the snapshot"; this limitation amounts to nothing more than an instruction to apply the abstract idea using a generic computer, per MPEP 2106.05(f).

Claim 12 recites: "wherein the snapshot is updated at a predetermined frequency by generating a new snapshot of the training dataset"; this limitation amounts to nothing more than an instruction to apply the abstract idea using a generic computer, per MPEP 2106.05(f).

Claim 13 recites: "wherein the training of the machine learning model is executed by a dedicated processing unit communicably connected to a memory configured for receiving the fetched synchronized training dataset, the memory being communicably connected to the second local storage device"; this limitation amounts to well-understood, routine, conventional activity and fails to integrate the judicial exception into a practical application, per MPEP 2106.05(d).

Claim 14 recites: "a non-transitory computer-readable medium comprising computer-readable instructions that, upon being executed by a system, cause the system to perform the method of claim 1"; this limitation amounts to nothing more than an instruction to apply the abstract idea using a generic computer, per MPEP 2106.05(f).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. §102 and §103 (or as subject to pre-AIA 35 U.S.C.
§102 and §103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. §103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. §102(b)(2)(C) for any potential 35 U.S.C. §102(a)(2) prior art against the later invention.

Claims 1, 5-9, 11-15, and 20 are rejected under 35 U.S.C. §103 as being unpatentable over Botelho et al. (US 2022/0374519), hereinafter Botelho, and further in view of Reda et al. (WO 2021/045904), hereinafter Reda, and Huber et al. (US 2016/0285881), hereinafter Huber.
Claim 1

"A method for training a machine learning model based on a training dataset, the training dataset being formed by data objects distributed across a virtual object storage service, the method being executable by a server configured to access the virtual object storage service": Botelho [0066] teaches that a data center includes one or more servers and one or more storage devices, and that the storage appliance may include a data management system for backing up virtual machines within a virtualized infrastructure.

"fetching, by the server, from the virtual object storage service, the training dataset; copying the fetched training dataset on a first local storage device, the first local storage device being communicably connected to the server": Botelho [0183] teaches a Network Attached Storage (NAS) system, a storage device connected to a network that allows storage and retrieval of data from a centralized location for authorized network users and heterogeneous clients.

"maintaining, by the server, a list of modifications executed on the training dataset that occurred on the virtual object storage service since the fetched training dataset has been stored on the first local storage device, each entry of the list of modifications comprising at least one of an identifier of a data object of the training dataset, a type of modification made to the data object and/or a temporal indication associated with a modification made to the data object": Botelho [0146] teaches a list of streamLogIds, where the last log in the list is the one being replicated or the next candidate for replication and may be stored to detect a last modification; lists in the data structure may be checked to determine those which have not been updated beyond a specified time and purged. Botelho fails to teach modifications comprising identifiers of data objects, which is taught in Huber claim 11 (data status based on identifier data; modifying, by the system).

"after receiving, by the server, a request to initiate training of the machine learning model, generating a synchronized training dataset by applying the list of modifications to the fetched training dataset, the synchronized training dataset mirroring the training dataset stored in the virtual object storage service": Reda [0205][0217] teaches computer cluster communication used for synchronization and data exchange, and one or more MMUs (memory management units) that may be synchronized with other MMUs within the system; applying the modification list is taught in Botelho [0146], the streamLogIds teaching cited above.

"storing the synchronized training dataset in a second local storage device, the second local storage device being communicably connected to the server, the second local storage device having a lower data retrieval latency than the first local storage device": Botelho [0183], the NAS teaching cited above.

"fetching training data from the synchronized training dataset stored in the second local storage device as the training of the machine learning model is executed": Botelho [0183], the NAS teaching cited above.

Botelho, Huber, and Reda disclose analogous art. Huber is analogous because it is in the field of video interpolation using one or more neural networks, and Reda is analogous because it is in the field of wireless communications and management of access to femto cell coverage. Botelho does not spell out the "data identifier" and "synchronizing local and virtual" memory features as recited above; said features are taught in Huber and Reda respectively. Hence, it would have been obvious to one of ordinary skill in the art at the time the present invention was made to incorporate said features of Huber (claim 11: data status based on identifier data; modifying, by the system) and Reda ([0205][0217]: computer cluster communication used for synchronization and data exchange; MMUs synchronized with other MMUs within the system) into Botelho, to enhance its data synchronization and management functions between virtual and local storages and to further enhance its data modification functions by matching data identifiers.

Claim 5

"discarding the synchronized training dataset from the second local storage device": Botelho [0106][0114][0134] teaches deletion operations.
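The "list of modifications" replay recited in claim 1 (with the CREATE/WRITE/READ/DELETE operation types later enumerated in claim 4) might be sketched as follows. All names are hypothetical illustrations, not drawn from the application or the cited references.

```python
# Hypothetical sketch of claim 1's modification list: each entry carries an
# object identifier, a modification type, and a temporal indication, and
# replaying the list over the staged copy yields a synchronized dataset
# mirroring the object store.

def apply_modifications(staged: dict, modifications: list) -> dict:
    """Replay logged modifications over a staged copy of the dataset."""
    synced = dict(staged)
    for entry in modifications:
        if entry["type"] in ("CREATE", "WRITE"):
            synced[entry["id"]] = entry["value"]
        elif entry["type"] == "DELETE":
            synced.pop(entry["id"], None)
        # READ entries, if logged, do not change the dataset
    return synced


staged = {"a": 1, "b": 2}
log = [
    {"id": "b", "type": "WRITE",  "value": 20,   "ts": 5},
    {"id": "c", "type": "CREATE", "value": 3,    "ts": 6},
    {"id": "a", "type": "DELETE", "value": None, "ts": 7},
]
synchronized = apply_modifications(staged, log)
```

Replaying the log touches only the logged objects, which is what lets the claimed method avoid re-fetching the whole dataset from the object store.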
Claim 6

"wherein the machine learning model is a first machine learning model, the training dataset is a first training dataset, the synchronized training dataset is a first synchronized training dataset, and the list of modifications is a first list of modifications, and the virtual object storage service comprises second data objects distributed thereacross forming a second training dataset": Botelho [0146] teaches a list of streamLogIds, where the last log in the list is the one being replicated or the next candidate for replication and may be stored to detect a last modification; lists in the data structure may be checked to determine those which have not been updated beyond a specified time and purged.

"subsequent to discarding the synchronized training dataset from the second local storage device: fetching, by the server, from the virtual object storage service, the second training dataset; storing the fetched second training dataset in the first local storage device, the first local storage device being communicably connected to the server": Botelho [0183] teaches a Network Attached Storage (NAS) system, a storage device connected to a network that allows storage and retrieval of data from a centralized location for authorized network users and heterogeneous clients.

"maintaining, by the server, a second list of modifications executed on the second training dataset distributed across the virtual object storage service that occurred since the fetched second training dataset has been stored on the first local storage device, each entry of the second list of modifications comprising at least one of an identifier of a data object of the second training dataset, a type of modification made to said data object and/or a temporal indication associated with a modification made to said data object": Botelho [0146], the streamLogIds teaching cited above. Botelho fails to teach modifications comprising identifiers of data objects, which is taught in Huber claim 11 (data status based on identifier data; modifying, by the system).

"upon receiving, by the server, a request to initiate training of a second machine learning model, generating a second synchronized training dataset based on the fetched second training dataset and the second list of modifications, the second synchronized training dataset mirroring the second training dataset stored in the virtual object storage service": Reda [0205][0217] teaches computer cluster communication used for synchronization and data exchange, and one or more MMUs (memory management units) that may be synchronized with other MMUs within the system.

"subsequently to discarding the first synchronized training dataset from the second storage device": Botelho [0106][0114][0134] teaches deletion operations.

"storing the second synchronized training dataset in the second local storage device; and fetching training data from the second synchronized training dataset stored in the second local storage device as the training of the second machine learning model is executed": Botelho [0183], the NAS teaching cited above.
Claim 7

"wherein the training of the first machine learning model is executed in response to receiving, by the server, a first request signal from a first user device associated with a first user, and the training of the second machine learning model is executed in response to receiving, by the server, a second request signal from a second user device associated with a second user": a first or second user request is considered "user action", which does not carry patentable weight; hence "upon receiving a request to initiate training" is cited. Reda [0205][0217] teaches computer cluster communication used for synchronization and data exchange, and one or more MMUs (memory management units) that may be synchronized with other MMUs within the system.

Claim 8

"wherein the server and the virtual object storage service are communicably connected to a user device associated with a user, the generation of the synchronized training dataset on the second storage device resulting from the reception, by the server, of the request signal for training the machine learning model emitted by the user device": the user device (i.e., local storage device) is cited herein; Botelho [0183] teaches a Network Attached Storage (NAS) system, a storage device connected to a network that allows storage and retrieval of data from a centralized location for authorized network users and heterogeneous clients.

Claim 9

"wherein the user device is a plurality of user devices, each user device being associated with a distinct user and a corresponding training dataset distributed across the virtual object storage service": Botelho [0183], the NAS teaching cited above.
Claim 11

"wherein fetching, by the server, from the virtual object storage service, the training dataset comprises generating a snapshot of the virtual object storage service, and storing the fetched training dataset in the first local storage device comprises storing the snapshot in the first local storage device, the list of modification being indicative of modifications executed on the training dataset distributed across the virtual object storage service since the generation of the snapshot": Botelho [0017]-[0024] teaches various snapshot operations.

Claim 12

"wherein the snapshot is updated at a predetermined frequency by generating a new snapshot of the training dataset": Botelho [0097] teaches snapshot frequency.

Claim 13

"wherein the training of the machine learning model is executed by a dedicated processing unit communicably connected to a memory configured for receiving the fetched synchronized training dataset, the memory being communicably connected to the second local storage device": Reda [0205][0217] teaches computer cluster communication used for synchronization and data exchange, and one or more MMUs (memory management units) that may be synchronized with other MMUs within the system.

Claims 14-15

Claims 14 and 15 are each rejected for a rationale similar to that given for claim 1.

Claim 20

Claim 20 is rejected for a rationale similar to that given for claim 1.

Claims 2-4, 10, and 16-19 are rejected under 35 U.S.C. §103 as being unpatentable over Botelho et al. (US 2022/0374519), hereinafter Botelho, in view of Reda et al. (WO 2021/045904), hereinafter Reda, and Huber et al. (US 2016/0285881), hereinafter Huber, and further in view of Potyraj et al. (US 2022/0075546), hereinafter Potyraj.
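The snapshot scheme of claims 11-12, discussed above, periodically regenerates a snapshot of the object store while logging modifications made since the last snapshot. It might look like the following sketch, where all names are hypothetical and elapsed time is simulated with an integer tick counter rather than a real clock.

```python
# Hypothetical sketch of claims 11-12: a snapshot of the object store is
# refreshed at a predetermined interval, and modifications are logged
# relative to the most recent snapshot.

class SnapshotTracker:
    def __init__(self, object_store: dict, refresh_every: int):
        self.object_store = object_store
        self.refresh_every = refresh_every   # "predetermined frequency", in tick units
        self.ticks_since_snapshot = 0
        self.snapshot = {}
        self.modifications = []              # entries logged since the snapshot
        self.take_snapshot()

    def take_snapshot(self):
        """Generate a new snapshot and reset the modification list."""
        self.snapshot = dict(self.object_store)
        self.modifications = []
        self.ticks_since_snapshot = 0

    def record_modification(self, object_id, op_type):
        """Log a modification made to the object store since the snapshot."""
        self.modifications.append({"id": object_id, "type": op_type,
                                   "tick": self.ticks_since_snapshot})

    def tick(self):
        """Advance simulated time; refresh the snapshot when the interval elapses."""
        self.ticks_since_snapshot += 1
        if self.ticks_since_snapshot >= self.refresh_every:
            self.take_snapshot()


store = {"obj_a": 1}
tracker = SnapshotTracker(store, refresh_every=3)
store["obj_a"] = 2
tracker.record_modification("obj_a", "WRITE")
tracker.tick(); tracker.tick()
pending_before_refresh = len(tracker.modifications)   # one entry still pending
tracker.tick()                                        # third tick triggers a new snapshot
```

After the refresh, the new snapshot reflects the modified store and the pending log is cleared, matching claim 11's "list of modification ... since the generation of the snapshot".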
Claim 2

"identifying, by the server, based on the list of modifications, a first set of data objects of the training dataset, the data objects of the first set of data objects having been subject to a modification since the storing of the fetched training dataset in the first local storage device": Botelho [0183] teaches a Network Attached Storage (NAS) system, and Botelho [0146] teaches a list of streamLogIds, where the last log in the list is the one being replicated or the next candidate for replication and may be stored to detect a last modification.

"identifying, by the server, based on the list of modifications, a second set of data objects of the training dataset, the data objects of the second set of data objects having not been subject to a modification since the storing of the fetched training dataset in the first local storage device": Potyraj [0113] teaches a timestamp of an audit log with entries that typically include destination and source addresses and user login information, for compliance with various regulations.

"fetching the first set of data objects from the virtual object storage service; fetching the second set of data objects from the first local storage device; and storing the synchronized training dataset in a second local storage device comprises: storing the fetched first and second set of data objects in the second local storage device": Reda [0205][0217] teaches computer cluster communication used for synchronization and data exchange, and one or more MMUs (memory management units) that may be synchronized with other MMUs within the system; Potyraj [0132] teaches storing data that is accessed frequently in faster storage tiers while data that is accessed infrequently is stored in slower storage tiers.

Botelho, Huber, Reda, and Potyraj disclose analogous art.
Huber is analogous because it is in the field of video interpolation using one or more neural networks, and Reda is analogous because it is in the field of wireless communications and management of access to femto cell coverage. Potyraj is analogous because it is in the field of distributed application placement from amongst a plurality of disparate storage environments.

Botelho does not spell out the "data identifier" and "synchronizing local and virtual" memory features as recited above; said features are taught in Huber and Reda respectively. Hence, it would have been obvious to one of ordinary skill in the art at the time the present invention was made to incorporate said features of Huber (claim 11: data status based on identifier data; modifying, by the system) and Reda ([0205][0217]: computer cluster communication used for synchronization and data exchange; MMUs synchronized with other MMUs within the system) into Botelho, to enhance its data synchronization and management functions between virtual and local storages and to further enhance its data modification functions by matching data identifiers.

Further, Botelho fails to teach the "data objects of the second set of data objects having not been subject to a modification since the storing of the fetched training dataset in the first local storage device" as recited. Said feature is taught in Potyraj. It would have been obvious to one of ordinary skill in the art at the time the present invention was made to incorporate said feature of Potyraj ([0113]: a timestamp of an audit log with entries that typically include destination and source addresses) into Botelho, to enhance its function of determining whether a set of data objects has not been subject to modification.
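Claim 2's partition of the dataset, driven by the modification list, into a modified set (re-fetched from the virtual object storage service) and an unmodified set (served from the first local tier) can be sketched as follows; all names are hypothetical illustrations.

```python
# Hypothetical sketch of claim 2: the modification list partitions the
# dataset into objects that changed since staging (re-fetched from the
# object store) and objects that did not (served from the first local tier).

def partition_by_modification(all_ids: set, modifications: list):
    """Split object ids into (modified, unmodified) based on logged entries."""
    modified = {entry["id"] for entry in modifications}
    return modified & all_ids, all_ids - modified


object_store = {"x": "new", "y": "old", "z": "old"}   # current remote state
first_tier = {"x": "stale", "y": "old", "z": "old"}   # staged local copy
log = [{"id": "x", "type": "WRITE"}]                  # only "x" changed since staging

modified_set, unmodified_set = partition_by_modification(set(first_tier), log)
second_tier = {oid: object_store[oid] for oid in modified_set}        # fetched remotely
second_tier.update({oid: first_tier[oid] for oid in unmodified_set})  # fetched locally
```

Only the single modified object crosses the network; the unmodified majority is copied tier-to-tier, which is the efficiency argument underlying the claim.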
Claim 3

"identifying, by the server, based on the list of modifications, a first set of data objects of the training dataset, the data objects of the first set of data objects having been subject to a modification since the storing of the fetched training dataset in the first local storage device": Botelho [0183] teaches a Network Attached Storage (NAS) system, and Botelho [0146] teaches a list of streamLogIds, where the last log in the list is the one being replicated or the next candidate for replication and may be stored to detect a last modification.

"executing, by the server, modifications of the list of modifications that correspond to relevant entries of the list of modifications on corresponding data objects of a copy of the fetched training dataset, the relevant entries being indicative of modifications executed after a storing the fetched training dataset in a first local storage device": Potyraj [0113] teaches a timestamp of an audit log with entries that typically include destination and source addresses and user login information, for compliance with various regulations; a timestamp comparison can indicate modifications made after the fetched dataset was stored.

Claim 4

"wherein the modifications are CREATE operations, indicative of a creation, by the server, of a new data object in the training dataset, WRITE operations, indicative of a modification, by the server, of one of the data objects of the training dataset, READ operations, indicative of an retrieval, by the server, of one of the data objects of the training dataset, DELETE operations, indicative of a deletion, by the server, of one of the data objects of the training dataset, or a combination thereof": Botelho in view of Potyraj teaches various modification operations; read, write, create, and delete are basic modification operations.
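Claim 3's "relevant entries" (modifications whose temporal indication postdates the storing of the fetched dataset) reduce to a simple timestamp filter, sketched below with hypothetical names.

```python
# Hypothetical sketch of claim 3's "relevant entries": only modifications
# whose temporal indication postdates the staging of the fetched dataset
# are replayed onto the local copy.

def relevant_entries(modifications: list, staged_at: int) -> list:
    """Select entries indicative of modifications executed after staging."""
    return [entry for entry in modifications if entry["ts"] > staged_at]


log = [
    {"id": "a", "type": "WRITE", "ts": 3},   # before staging: already reflected locally
    {"id": "b", "type": "WRITE", "ts": 9},   # after staging: must be replayed
]
to_replay = relevant_entries(log, staged_at=5)
```

Entries at or before the staging time are already reflected in the local copy and can be skipped, which is the point of the claim's temporal indication.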
Claim 10

“generating, by the server, a new entry in the list of modifications, the new entry being indicative of the second identifier of the data object and the information of the WRITE signal; if determination is made, by the server, that the first and second identifiers are identical, transmitting an acknowledge signal to the user device indicative of a successful modification of the data object” — Huber claim 11: data status based on identifier data, modifying, by the system. Claim 10 is also rejected for the rationale given for claims 1 and 4.

Claims 16-19

Claims 16-19 are rejected for reasons similar to those given for claims 2-3 and 5-6, respectively.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to RUAY HO, whose telephone number is (571) 272-6088. The examiner can normally be reached Monday to Friday, 9am - 5pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, David Yi, can be reached at 571-270-7519. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
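Claim 10's acknowledgement flow hinges on comparing a stored first identifier against a second identifier associated with the written object: only a match triggers the acknowledge signal. A hedged sketch of that flow, with hypothetical names throughout (SHA-256 is a stand-in identifier scheme; the claim does not specify how identifiers are computed):

```python
import hashlib

def identifier(payload: bytes) -> str:
    # Stand-in for the claimed "identifier"; the claim names no scheme.
    return hashlib.sha256(payload).hexdigest()

def handle_write(store, log, object_id, payload, first_id):
    """Apply a WRITE, record it in the list of modifications, and ACK on match.

    A new entry indicative of the second identifier and the WRITE information
    is appended; an acknowledge signal is returned only if the first and
    second identifiers are identical.
    """
    store[object_id] = payload
    second_id = identifier(payload)
    log.append({"object_id": object_id, "op": "WRITE", "id": second_id})
    return {"ack": second_id == first_id, "object_id": object_id}

store, log = {}, []
payload = b"new object bytes"
# first_id computed over the same bytes, so the identifiers are identical
resp = handle_write(store, log, "obj-1", payload, identifier(payload))
```

This is only an illustration of the recited comparison step, not an assertion about how the application or any cited reference actually implements it.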
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/Ruay Ho/
Examiner, Art Unit 2142

Prosecution Timeline

Aug 03, 2022
Application Filed
Aug 19, 2025
Non-Final Rejection — §101, §103, §112
Nov 20, 2025
Response Filed
Dec 23, 2025
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12583466
VEHICLE CONTROL MODULES INCLUDING CONTAINERIZED ORCHESTRATION AND RESOURCE MANAGEMENT FOR MIXED CRITICALITY SYSTEMS
2y 5m to grant Granted Mar 24, 2026
Patent 12578751
DATA PROCESSING CIRCUITRY AND METHOD, AND SEMICONDUCTOR MEMORY
2y 5m to grant Granted Mar 17, 2026
Patent 12561162
AUTOMATED INFORMATION TECHNOLOGY INFRASTRUCTURE MANAGEMENT
2y 5m to grant Granted Feb 24, 2026
Patent 12536291
PLATFORM BOOT PATH FAULT DETECTION ISOLATION AND REMEDIATION PROTOCOL
2y 5m to grant Granted Jan 27, 2026
Patent 12393641
METHODS FOR UTILIZING SOLVER HARDWARE FOR SOLVING PARTIAL DIFFERENTIAL EQUATIONS
2y 5m to grant Granted Aug 19, 2025
Based on this examiner's 5 most recent grants with similar technology.


Prosecution Projections

2-3
Expected OA Rounds
50%
Grant Probability
76%
With Interview (+25.8%)
3y 8m
Median Time to Grant
Moderate
PTA Risk
Based on 509 resolved cases by this examiner. Grant probability derived from career allow rate.
