Prosecution Insights
Last updated: April 19, 2026
Application No. 17/577,330

DEEP LEARNING GUIDE DEVICE AND METHOD

Status: Final Rejection under §103
Filed: Jan 17, 2022
Examiner: LI, LIANG Y
Art Unit: 2143
Tech Center: 2100 — Computer Architecture & Software
Assignee: Oriental Mind (Wuhan) Computing Technology Co. Ltd.
OA Round: 2 (Final)

Grant Probability: 61% (Moderate)
Projected OA Rounds: 3-4
Projected Time to Grant: 3y 5m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 61% (167 granted / 273 resolved; +6.2% vs TC avg)
Interview Lift: +69.1% (strong), for resolved cases with an interview vs without
Avg Prosecution: 3y 5m (typical timeline); 26 applications currently pending
Total Applications: 299 across all art units (career history)

Statute-Specific Performance

§101: 16.9% (-23.1% vs TC avg)
§103: 48.6% (+8.6% vs TC avg)
§102: 21.2% (-18.8% vs TC avg)
§112: 9.4% (-30.6% vs TC avg)

TC averages are estimates; based on career data from 273 resolved cases.

Office Action

§103
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is responsive to pending claims 1-2, 4-10, 12-17 filed 9/17/2025.

Claim Objections

In the amended portions of claims 1, 4, “the graphical operation interface” should be amended to “a graphical operation interface” due to missing antecedent basis. Appropriate corrections are required.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-2, 4-7, 17 are rejected under 35 U.S.C. 103 as being unpatentable over Haemel (US 20200027210 A1) in view of Srinivasan (US 20180300653 A1).
For claim 1, Haemel discloses: a deep learning guide device, comprising a memory, a processor, and a computer program which is stored in the memory and can be operated on the processor, wherein the processor implements following steps when executing the computer program (fig.8: 804, 806; 0103 gives hardware overview): receiving a content of the data set uploaded by a user (fig.6A, 0086 shows customer initiation of training a deep learning model based on customer dataset 606, hence, user upload of data for training to the model training platform (see fig.2:104 for schematic of training platform); the data being generated locally by devices at a facility, e.g., a medical facility, hence, the location of the data determined for communication to the training platform), and displaying the content of the data set in a graphical interface (fig.6A-B: training data is displayed for annotation), wherein the data set is applied for model training (fig.6A, 0086 as above); submitting a data annotation operation request when receiving a data annotation operation to the content of the data set performed by the user on the graphical interface (fig.6A-B, 0087-89: data annotation operations are received by the GUI to the training data and submitted to the backend for processing and establishing a ground truth); obtaining data annotation information according to the data annotation operation request (0088-89: data annotation information such as ground truth data is obtained for training), and storing the data annotation information (fig.6C, 0091: storing in data store); and performing the model training based on the data set and the data annotation information, generating a training model (0089: training and deploying model); and performing an online prediction service based on a deployment operation input by the user on the graphic operation interface (Haemel 0098, fig.7:B712, fig.5:B512: a trained model is deployed to generate processed inference data, the deployment occurring in a 
containerized cloud environment, see 0031; the deployment including a GUI, see 0045-46, 0031; see 0053: receiving inference requests via a GUI); performing a prediction based on target online prediction service network request address information selected by the user on the graphical operation interface (Haemel 0031, 0045-46: user selection of pipeline elements on the GUI, hence, selecting information reflective of machine learning models and pipeline elements, such as stored in model registry (fig.1:124, fig.2:206, 0084), hence, pipeline composition selection elements being address information), and displaying a prediction result (Haemel 0053, 0055). Haemel does not disclose determining a storage address of a data set in a preset storage area when receiving the data set from the user; storing annotation information to the preset storage area corresponding to the storage address; and generating a deep learning result evaluation report, storing the training model and the deep learning result evaluation report in the preset storage area; displaying an online predictive service network request address; Srinivasan discloses: determining a storage address of a data set in a preset storage area when receiving the data set from the user (fig.2C, 0041-43 discloses a container where training data is stored and mounted, the model is stored and the output trained model is stored, see also 0059-63 describing instantiation of a training container containing training, model, and parameter data, hence, the address of the container is determined for storing data set for training); storing annotation information to the preset storage area corresponding to the storage address (ibid: additional parameters needed for training are stored in the container, hence, combination with Haemel yielding application to annotation information); and generating a deep learning result evaluation report, storing the training model and the deep learning result evaluation report in the preset storage 
area (0063: the output model and training log evaluation report including hyperparameters such as number of iterations, training log, version number, classification labels, etc. is stored in the output directory in the same container preset storage area); displaying an online prediction service network request address (Srinivasan 0085 contemplates deploying a serving container to issue predictions via user request (see 0065-67 disclosing deployment of a serving container), the serving container communicating with remote computing devices to receive input signals for issuing predictions, with 0093-94 contemplating network communication via a web browser and over the internet as deployed via a GUI device, a web browser being understood by one of ordinary skill in the art to display the address of the communication source being accessed, hence, displaying a network address of an online prediction service that hosts the serving container). It would have been obvious before the effective filing date to a person of ordinary skill in the art to modify the device of Haemel by incorporating the container-based training technique of Srinivasan. Both concern the art of machine learning training and serving platforms, and the incorporation would, according to Srinivasan, have allowed for a platform that automates routine machine learning tasks for those without specific expertise (0003).

For claim 2, Haemel modified by Srinivasan discloses the device of claim 1, as described above.
Haemel further discloses: wherein after obtaining the data annotation information according to the data annotation operation request, the processor further implements following steps: displaying the data annotation information and the data set (fig.6B, 0088: adjusting and fine-tuning the annotations / auto-annotations, hence, user edits are displayed in the interactive environment); acquiring deep learning scene information and training mode information selected by the user based on a graphical operation interface (0045-46: GUI for selecting a deployment, training, retraining pipeline scene or environment information, hence, application and application arrangement information constitute deep learning scene information and training mode information); getting training operation basic information input by the user based on the graphical operation interface (ibid: various applications and parameters are set by the user via the GUI, hence, parameters and constructs constitute training operation basic information); and creating training operation creation information according to the deep learning scene information, the training mode information and the training operation basic information (0046-47: pipeline manager and application orchestration system use the various user-submitted parameters to generate containers, hence, container information for coordinating application components is created); creating a model training operation according to the training operation creation information (0047-48: training operations are created during deployment of the containers based on the container deployment information), and performing the model training operation to generate the training model (0089) and the deep learning result evaluation report (0053, 0055).
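For orientation, the end-to-end workflow the claims recite (upload a data set to a preset storage area, annotate it, train, store a model plus an evaluation report, then surface a prediction service request address) can be sketched as follows. This is a hedged, minimal illustration: every name, path, and address below is hypothetical, drawn only from the claim language, not from Haemel, Srinivasan, or the application's actual implementation.

```python
# Hypothetical sketch of the claimed guide workflow (claims 1/4).
# All identifiers and the service address are illustrative assumptions.
import json
import os
import tempfile


def run_guide_workflow(dataset, annotations):
    # "Preset storage area": a directory whose address is determined on upload.
    storage_address = tempfile.mkdtemp(prefix="preset_storage_")

    # Store the data set content and the annotation information at that address.
    with open(os.path.join(storage_address, "dataset.json"), "w") as f:
        json.dump(dataset, f)
    with open(os.path.join(storage_address, "annotations.json"), "w") as f:
        json.dump(annotations, f)

    # "Model training": a trivial stand-in that derives a label set and a
    # deep learning result evaluation report from the inputs.
    model = {"labels": sorted({a["label"] for a in annotations})}
    report = {"num_samples": len(dataset), "num_annotations": len(annotations)}

    # Store the training model and the evaluation report in the same area.
    with open(os.path.join(storage_address, "model.json"), "w") as f:
        json.dump(model, f)
    with open(os.path.join(storage_address, "report.json"), "w") as f:
        json.dump(report, f)

    # "Online prediction service network request address" fed back to the user.
    service_address = "http://localhost:8500/v1/models/guide:predict"
    return storage_address, service_address


storage, address = run_guide_workflow(
    dataset=[{"id": 1}, {"id": 2}],
    annotations=[{"id": 1, "label": "cat"}, {"id": 2, "label": "dog"}],
)
print(address)
```

The sketch compresses several claimed steps into one function purely for readability; in the claims, annotation, training, and deployment are separate user-driven operations on the graphical interface.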
For claim 4, Haemel discloses: a deep learning guide method, comprising following steps: receiving content of a data set uploaded by a user (fig.6A, 0086 shows customer initiation of training a deep learning model based on customer dataset 606, hence, user upload of data for training to the model training platform (see fig.2:104 for schematic of training platform); the data being generated locally by devices at a facility, e.g., a medical facility, hence, the location of the data determined for communication to the training platform), and displaying the content of the data set in a graphical interface (fig.6A-B: training data is displayed for annotation), where the data set is applied for model training (fig.6A, 0086 as above); receiving a data annotation operation to the content of the data set performed by the user on the graphical interface (fig.6A-B, 0087-89: data annotation operations are received by the GUI to the training data and submitted to the backend for processing and establishing a ground truth), obtaining data annotation information according to the data annotation operation (0088-89: data annotation information such as ground truth data is obtained for training), and storing the data annotation information (fig.6C, 0091: storing in data store); performing the model training based on the data set and the data annotation information, generating a training model (0089: training and deploying model); and performing an online prediction service based on a deployment operation input by the user on the graphic operation interface (Haemel 0098, fig.7:B712, fig.5:B512: a trained model is deployed to generate processed inference data, the deployment occurring in a containerized cloud environment, see 0031; the deployment including a GUI, see 0045-46, 0031; see 0053: receiving inference requests via a GUI); performing a prediction based on target online prediction service network request address information selected by the user on the graphical operation 
interface (Haemel 0031, 0045-46: user selection of pipeline elements on the GUI, hence, selecting information reflective of machine learning models and pipeline elements, such as stored in model registry (fig.1:124, fig.2:206, 0084), hence, pipeline composition selection elements being address information), and displaying a prediction result (Haemel 0053, 0055). Haemel does not disclose determining a storage address of a data set in a preset storage area when receiving the data set from the user; storing annotation information to the preset storage area corresponding to the storage address; and generating a deep learning result evaluation report, storing the training model and the deep learning result evaluation report in the preset storage area; displaying an online predictive service network request address. Srinivasan discloses: determining a storage address of a data set in a preset storage area when receiving the data set from the user (fig.2C, 0041-43 discloses a container where training data is stored and mounted, the model is stored and the output trained model is stored, see also 0059-63 describing instantiation of a training container containing training, model, and parameter data, hence, the address of the container is determined for storing data set for training); storing annotation information to the preset storage area corresponding to the storage address (ibid: additional parameters needed for training are stored in the container, hence, combination with Haemel yielding application to annotation information); and generating a deep learning result evaluation report, storing the training model and the deep learning result evaluation report in the preset storage area (0063: the output model and training log evaluation report including hyperparameters such as number of iterations, training log, version number, classification labels, etc. 
is stored in the output directory in the same container preset storage area); displaying an online prediction service network request address (Srinivasan 0085 contemplates deploying a serving container to issue predictions via user request (see 0065-67 disclosing deployment of a serving container), the serving container communicating with remote computing devices to receive input signals for issuing predictions, with 0093-94 contemplating network communication via a web browser and over the internet as deployed via a GUI device, a web browser being understood by one of ordinary skill in the art to display the address of the communication source being accessed, hence, displaying a network address of an online prediction service that hosts the serving container). It would have been obvious before the effective filing date to a person of ordinary skill in the art to modify the device of Haemel by incorporating the container-based training technique of Srinivasan. Both concern the art of machine learning training and serving platforms, and the incorporation would, according to Srinivasan, have allowed for a platform that automates routine machine learning tasks for those without specific expertise (0003).

For claim 5, Haemel modified by Srinivasan discloses the device of claim 4, as described above.
Haemel further discloses: wherein the step of performing the model training based on the data set and the data annotation information, generating the training model and the deep learning result evaluation report, specifically comprises: obtaining deep learning scene information and training mode information selected by the user based on a graphical operation interface (0045-46: GUI for selecting a deployment, training, retraining pipeline scene or environment information, hence, application and application arrangement information constitute deep learning scene information and training mode information); obtaining training operation basic information input by the user based on the graphical operation interface (ibid: various applications and parameters are set by the user via the GUI, hence, parameters and constructs constitute training operation basic information); assembling training operation creation information according to the deep learning scene information, the training mode information and the training operation basic information, and submitting the training operation creation information (0046-47: pipeline manager and application orchestration system use the various user-submitted parameters to generate containers, hence, container information for coordinating application components is created, the container information submitted to the container creation process for creating containers); completing the model training according to the training operation creation information (0024-26, 0052, 0084: completing training and storing trained models), and feeding back a training result (ibid: new models are generated); and creating a model training operation according to the training operation creation information (0047-48: training operations are created during deployment of the containers based on the container deployment information), and performing the model training operation to generate the training model (0089) and the deep learning result
evaluation report (0053, 0055). For claim 6, Haemel modified by Srinivasan discloses the device of claim 4, as described above. Haemel modified by Srinivasan further discloses: wherein the step of determining the storage address in the preset storage area when receiving the content of the data set uploaded by the user, specifically comprises: receiving the content of the data set uploaded by the user (Haemel fig.6A, 0086 shows customer initiation of training a deep learning model based on customer dataset 606, hence, user upload of data for training to the model training platform and receiving by the platform of the data set (see fig.2:104 for schematic of training platform); the data being generated locally by devices at a facility, e.g., a medical facility, hence, the location of the data determined for communication to the training platform), and obtaining the storage address of the data set in the preset storage area (Srinivasan fig.2C, 0041-43 discloses a container and file system where training data is stored and mounted, hence, obtaining address in the storage area). For claim 7, Haemel modified by Srinivasan discloses the device of claim 4, as described above. 
Haemel modified by Srinivasan further discloses: wherein the step of receiving the data annotation operation to the content of the data set performed by the user on the graphical interface, obtaining the data annotation information according to the data annotation operation, specifically comprises: generating a data annotation operation request when receiving the data annotation operation to the content of the data set performed by the user on the graphical interface (Haemel fig.6A-B, 0087-89: data annotation operations are received by the GUI to the training data and submitted to the backend for processing and establishing a ground truth); obtaining the data annotation information according to the data annotation operation request (Haemel 0088-89: data annotation information such as ground truth data is obtained for training); and storing the data annotation information (Haemel fig.6C, 0091: storing in data store) to the preset storage area corresponding to the storage address (Srinivasan fig.2C, 0041-43, 0059-63: additional parameters needed for training are stored in the container, hence, combination with Haemel yielding application to annotation information). Claim 17 recites a device analogous to the method of claim 4 and hence is rejected under the same rationale. Furthermore, Haemel discloses: an electronic device, comprising a storage memory, a processor, and a computer program stored in the memory, wherein the processor performs the computer program to implement the deep learning guide method (fig.8: 804, 806; 0103 gives hardware overview). Claim(s) 8-11 are rejected under 35 U.S.C. 103 as being unpatentable over Haemel (US 20200027210 A1) in view of Srinivasan (US 20180300653 A1) in view of An (US 20200327319 A1). For claim 8, Haemel modified by Srinivasan discloses the device of claim 7, as described above. 
Haemel modified by Srinivasan further discloses: the step of obtaining the data annotation information according to the data annotation operation request specifically comprises: obtaining the content of the data set according to the storage address, and automatically detecting the content of the data set (Haemel fig.6A, 0087: data is obtained for AI-assisted annotation to generate suggested annotations, hence, detecting data set images and structures for annotating; Srinivasan fig.2C, 0041-43, 0059-63 discloses a container-based mounting for training data at storage address); when a detection result is that there is annotated data information in the data set, checking the annotated data information (Haemel 0024: checking the data annotations by a human after detecting the data for annotation); performing data annotation on the content of the data set according to the data annotation operation request, obtaining the data annotation information, and storing the data annotation information to the data set (fig.6B, 0088: annotating operations are performed on the data according to requests received via the GUI from the user); displaying the data annotation information and the data set (fig.6B). Haemel modified by Srinivasan does not disclose that data annotations are performed when the detection result is that there is no annotated data information in the data set. An discloses: when the detection result is that there is no annotated data information in the data set (0081-82: annotator determining missing annotation error and creating annotations). It would have been obvious before the effective filing date to a person of ordinary skill in the art to modify the device of Haemel modified by Srinivasan by incorporating the annotation detection technique of An. Both concern the art of machine learning training and annotation platforms, and the incorporation would, according to An, have allowed performance of quality checks to ensure annotations are of sufficient quality.
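The detection branch mapped above (check existing annotation information when present, otherwise perform annotation, the step supplied by the An combination) can be sketched as follows. The helper names and the in-memory "storage" mapping are illustrative assumptions only, not the actual implementation of any reference.

```python
# Hypothetical sketch of the claim 8 annotation-detection branch.
def process_annotations(storage, address, annotate):
    dataset = storage[address]  # obtain the content by its storage address
    detected = [d for d in dataset if "label" in d]  # automatic detection
    if detected:
        # Annotated data found: check the existing annotation information.
        checked = all(isinstance(d["label"], str) for d in detected)
    else:
        # No annotated data found: perform data annotation per the request.
        for d in dataset:
            d["label"] = annotate(d)
        checked = True
    return dataset, checked


storage = {"/preset/ds1": [{"id": 1}, {"id": 2}]}
annotated, ok = process_annotations(
    storage, "/preset/ds1", lambda d: f"class_{d['id']}"
)
```

Here the unannotated branch fires, so every record is labeled by the supplied annotation callback before the checked flag is returned.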
For claim 9, Haemel modified by Srinivasan modified by An discloses the device of claim 8, as described above. Haemel modified by Srinivasan further discloses: wherein after the step of displaying the data annotation information and the data set, further comprises: determining whether a result of the data annotation performed on the data set meets all expectations of the user (Haemel fig.6B, 0088: fine-tuning operations by the user constitutes determining whether the result meets expectations and performing appropriate adjustments), and determining whether the data set uploaded by the user all meets a data set agreed requirement (An fig.2, 70-71: a determination is made as to whether a data set meets all the requirements necessary for further processing and diverted to an expert if not, hence, whether a data set meets or agrees with preset existence requirements or meets requirements agreed upon for the data set for further processing, hence, data set agreed requirements); when one of determined results is no, receiving manual annotation on the data set performed by the user (Haemel fig.6B, 0088). For claim 10, Haemel modified by Srinivasan modified by An discloses the device of claim 9, as described above. 
Haemel modified by Srinivasan further discloses: wherein the step of receiving the manual annotation on the data set performed by the user, comprises obtaining secondary manual data annotation information inputted by the user based on a graphical operation interface (Haemel fig.6B, 0088: secondary annotation requests are received from the user during the iterative fine-tuning process); storing the secondary manual data annotation information to the data set (Haemel 0089: associating dataset with the ground truth data for use during training), and feeding back the secondary manual data annotation information to the graphical operation interface (Haemel fig.6B, 0088: secondary annotation data is fed back to the display during user fine-tuning operations).

Claim(s) 12-16 are rejected under 35 U.S.C. 103 as being unpatentable over Haemel (US 20200027210 A1) in view of Srinivasan (US 20180300653 A1) in view of Chen (US 20220394084 A1). For claim 12, Haemel modified by Srinivasan discloses the device of claim 11, as described above. Haemel modified by Srinivasan further discloses: wherein the step of performing the online prediction service based on the deployment operation input by the user on the graphic operation interface, and displaying the online prediction service network request address, comprises: obtaining deployment operation basic information inputted by the user based on the graphical interface (Haemel fig.
5, 0076, with 0046 disclosing GUI, fig.7:B712, 0098 disclosing deployment of a trained model, fig.7:B702-704, 0092-93 disclosing selection of a trained model); obtaining training model information for deploying the online prediction service selected by the user based on the graphical interface (Haemel 0078: obtain container comprising trained model for deploying the prediction service based on GUI parameters, fig.7:0092-93); creating deployment operation creation information according to the deployment operation basic information and the training model information (Haemel fig.5, 0077: instantiating containers for deployment based on user specifications, hence, creation instantiation information); completing an online prediction service deployment according to the deployment operation creation information (Haemel fig.5:B504, 0081: completing instantiation), creating an online prediction service deployment operation according to deployment operation creation information and performing it (Haemel fig.5, B512, 0081: creating a cloud or online deployment operation for processing data); and displaying the online prediction service network request address (Srinivasan 0085, 0093-94: the address is returned for display in the web browser). Haemel modified by Srinivasan does not disclose: returning a successfully deployed online prediction service network address, feeding back the online prediction service network address. Chen discloses: returning a successfully deployed online prediction service network address (0097: returning the service address to the service initiator, either directly or via third party), feeding back the online prediction service network address (0097: returning the service address to the service initiator, hence, feeding back as a response to the initiator request, the address being fed back to the browser application of Haemel, hence, providing feedback of the address).
It would have been obvious before the effective filing date to a person of ordinary skill in the art to modify the device of Haemel modified by Srinivasan by incorporating the server deployment feedback technique of Chen. Both concern the art of network deployment platforms, and the incorporation would, according to Chen, have given a user or requestor an indication of how to use or access a deployed service.

For claim 13, Haemel modified by Srinivasan discloses the device of claim 12, as described above. Haemel modified by Srinivasan further discloses: wherein the step of performing the prediction based on the target online prediction service network request address information selected by the user on the graphical operation interface, and displaying the prediction result, comprises: obtaining the target online prediction service network request address information selected by the user based on the graphical operation interface (Srinivasan 0093-94: the address selected by the user in the web address is returned for user communication for the prediction server, see 0085; see also Haemel 0043-55: GUI for displaying results); obtaining prediction data information input by the user based on the graphical operation interface (ibid: the prediction input is received from the user via the GUI); creating prediction request information based on the target online prediction service network request address information and the prediction data information (ibid: the prediction request is created for transmission to the serving container); calling a prediction server to complete the prediction according to the prediction request information, and feeding back the prediction result (ibid: the prediction result is calculated via the serving container and returned); and displaying the prediction result (ibid: the result is displayed via the web browser or GUI, such as in a JSON format (0031, 0066)).
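The claim 13-16 request/response shape (select a target service address, assemble a prediction request from it plus the user's input, call the server, display a JSON-format result) can be sketched as below. The stub stands in for a deployed serving container; the address and payload layout are assumptions for illustration, not the applicant's or the references' actual interfaces.

```python
# Hypothetical sketch of the claim 13-16 prediction request/response flow.
import json


def predict_stub(payload):
    # Stand-in for the deployed prediction server (serving container).
    return {"prediction": sum(payload["instances"]), "status": "ok"}


def call_prediction_service(address, data):
    # Create prediction request information from the target service network
    # request address and the user-supplied prediction data information.
    request = {"address": address, "payload": {"instances": data}}
    # Call the (stubbed) prediction server and feed back the result as JSON.
    result = predict_stub(request["payload"])
    return json.dumps(result)


displayed = call_prediction_service(
    "http://localhost:8500/v1/models/guide:predict", [1, 2, 3]
)
print(displayed)  # JSON-format display of the prediction result
```

In a real deployment the stub would be an HTTP call to the displayed network request address; the JSON string returned here corresponds to the claimed JSON-format display option.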
For claim 14, Haemel modified by Srinivasan discloses the device of claim 13, as described above. Haemel modified by Srinivasan further discloses: wherein completing the prediction according to the prediction request information, and feeding back the prediction result, comprises: performing the prediction after receiving the prediction request data information (Srinivasan 0085, Haemel 0052-55); transferring requested data information to complete the prediction (ibid: the data information is transmitted to the prediction server); and returning the prediction result for displaying when the prediction is completed (ibid). For claim 15, Haemel modified by Srinivasan discloses the device of claim 13, as described above. Haemel modified by Srinivasan further discloses: wherein the step of calling the prediction server to complete the prediction according to the prediction request information and feeding back the prediction result, comprises: finding a corresponding prediction service according to the prediction service network request address in requested data information (Srinivasan 0093-94 discloses various network protocols including internet, web browser for accessing prediction server, hence, the serving container is found via its network address for prediction, the internet address being a protocol that encodes requested data information); calling the prediction server to perform the prediction on the requested data (Srinivasan 0085, 0093-94, Haemel 0052-55: calling prediction server on the data); and returning the prediction result after the prediction is successful (ibid). For claim 16, Haemel modified by Srinivasan discloses the device of claim 13, as described above. Haemel modified by Srinivasan further discloses: wherein displaying the prediction result comprises: displaying the prediction result in a chart format, or displaying the prediction result in a JSON format (Srinivasan 0031, 0066).

Response to Arguments

In the remarks, Applicant argues:

1.
The art of record does not disclose limitations directed to “performing an online prediction service …” because Haemel’s disclosure in 0054 of GPU-accelerated instances in the cloud does not disclose the system dynamically configuring and starting an online prediction service interface accessible from the outside according to the deployment parameters set by the user through the graphical interface, and feeding back the network address (such as a URL) of this interface to the user for invocation and use. Even if Haemel relates to a cloud 226, there is no description of a process of the “prediction based on target online prediction service network request address information”.

--- Examiner appreciates Applicant’s explanation of the invention as involving message passing, including passing a network address to the user. However, as described in the rejection above, Haemel 0045-46 describes creation of a pipeline via a GUI that would include selectable pipeline elements, including models from the model registry, see fig.2:206, fig.1:124, which may be hosted on a cloud platform, see 0023. Hence, selection of models for a pipeline for deployment constitutes selection of elements referencing or pertaining to the addresses of elements stored in the cloud, hence, address information, as claimed. Hence, further clarification of “address information” would be needed to distinguish from the art of record.

2. Haemel is silent on displaying an online prediction service network request address and displaying a prediction result.

--- Examiner respectfully disagrees. The cited portions of Srinivasan are relied upon for displaying of an online prediction service network request address, hence, Haemel’s silence is moot. Furthermore, Haemel 0053, 0055 as cited above disclose display of prediction results, including various visualizations and segmentations.

3. Srinivasan does not disclose limitations directed to displaying an online prediction service network address and a prediction result.
--- Examiner respectfully disagrees. Haemel is relied upon for disclosure of the display of a prediction result; hence, Srinivasan's silence is moot. Furthermore, Examiner submits that the relied-upon disclosure in Srinivasan of a web-browser implementation of access to an inference service constitutes display of an online prediction service address. One of ordinary skill in the art at the time of filing would understand that access via a web browser, as plainly understood, necessarily entails access to GUI elements such as web address displays, for example, via the address bar, for accessing, browsing, and navigating the web. Hence, Srinivasan discloses the display of the claimed network address.

Conclusion

The prior art made of record and not relied upon is considered pertinent to Applicant's disclosure. Bezgachev ("How to deploy Machine Learning models with TensorFlow," published 6/24/2017), pp. 36-37, disclosing deployment and access of containers for inference via returned address information, is cited as relevant.

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to LIANG LI, whose telephone number is (303) 297-4263.
The examiner can normally be reached Mon-Fri 9-12p, 3-11p MT (11-2p, 5-1a ET). If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jennifer Welch, can be reached at (571) 272-7212. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from Patent Center and the Private Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from Patent Center or Private PAIR. Status information for unpublished applications is available through Patent Center or Private PAIR to authorized users only. Should you have questions about access to Patent Center or the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, Applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. The examiner is available for interviews Mon-Fri 6-11a, 2-7p MT (8-1p, 4-9p ET).

/LIANG LI/
Primary Examiner, Art Unit 2143
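The prediction-service flow disputed above (deploy a service, feed its network request address back to the user, find the corresponding service by that address, run the prediction, and return the result in JSON format for display) can be sketched in-process. This is a minimal illustration of the claimed steps only; every function name and the URL scheme below are assumptions for the sketch, not taken from the application or the cited art, and an in-memory dictionary stands in for the cloud platform.

```python
import json

# Illustrative in-process registry standing in for the cloud platform.
_services = {}

def deploy_prediction_service(name, model_fn):
    """Start a prediction service and feed back its network request address."""
    address = f"https://example.invalid/predict/{name}"  # illustrative URL only
    _services[address] = model_fn
    return address

def predict(address, request_data):
    """Find the corresponding service by its request address, call the
    prediction server on the data, and JSON-format the result for display."""
    service = _services[address]   # locate the service via its network address
    result = service(request_data)  # perform the prediction
    return json.dumps({"prediction": result})

# Usage: deploy a toy model, then invoke it via the returned address.
url = deploy_prediction_service("doubler", lambda x: 2 * x)
print(url)               # the address fed back to the user
print(predict(url, 21))  # prints {"prediction": 42}
```

The point of the sketch is the round trip the claims recite: the deployment step returns the address, and the prediction step takes that same address as its lookup key.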

Prosecution Timeline

Jan 17, 2022
Application Filed
Jun 14, 2025
Non-Final Rejection — §103
Sep 17, 2025
Response Filed
Oct 20, 2025
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596463
METHOD AND APPARATUS FOR IMAGE-BASED NAVIGATION
2y 5m to grant · Granted Apr 07, 2026
Patent 12585716
INTELLIGENT RECOMMENDATION METHOD AND APPARATUS, MODEL TRAINING METHOD AND APPARATUS, ELECTRONIC DEVICE, AND STORAGE MEDIUM
2y 5m to grant · Granted Mar 24, 2026
Patent 12585375
GENERATING SNAPPING GUIDE LINES FROM OBJECTS IN A DESIGNATED REGION
2y 5m to grant · Granted Mar 24, 2026
Patent 12580000
MULTITRACK EFFECT VISUALIZATION AND INTERACTION FOR TEXT-BASED VIDEO EDITING
2y 5m to grant · Granted Mar 17, 2026
Patent 12561566
NEURAL NETWORK LAYER FOLDING
2y 5m to grant · Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

3-4
Expected OA Rounds
61%
Grant Probability
99%
With Interview (+69.1%)
3y 5m
Median Time to Grant
Moderate
PTA Risk
Based on 273 resolved cases by this examiner. Grant probability derived from career allow rate.
