Prosecution Insights
Last updated: April 19, 2026
Application No. 18/346,695

CHAT APPLICATION FOR VIDEO CONTENT CREATION

Status: Non-Final OA (§103)
Filed: Jul 03, 2023
Examiner: ANDERSON, BRODERICK C
Art Unit: 2178
Tech Center: 2100 — Computer Architecture & Software
Assignee: Lemon Inc.
OA Round: 3 (Non-Final)
Grant Probability: 74% (Favorable)
OA Rounds: 3-4
To Grant: 3y 1m
With Interview: 93%

Examiner Intelligence

Career Allow Rate: 74%, above average (190 granted / 258 resolved; +18.6% vs TC avg)
Interview Lift: +19.1% (strong); allowance rate for resolved cases with an interview vs. without
Typical Timeline: 3y 1m average prosecution; 20 applications currently pending
Career History: 278 total applications across all art units
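The dashboard does not publish exact formulas for these figures, but the two headline metrics are simple ratios over resolved cases. A minimal sketch, assuming each resolved case is tagged with its outcome and whether an interview was held (the field names are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class ResolvedCase:
    granted: bool        # issued as a patent (vs. abandoned)
    had_interview: bool  # an examiner interview was held during prosecution

def allow_rate(cases: list[ResolvedCase]) -> float:
    """Share of resolved cases that issued."""
    return sum(c.granted for c in cases) / len(cases) if cases else 0.0

def interview_lift(cases: list[ResolvedCase]) -> float:
    """Allow-rate difference between cases with and without an interview."""
    with_iv = [c for c in cases if c.had_interview]
    without_iv = [c for c in cases if not c.had_interview]
    return allow_rate(with_iv) - allow_rate(without_iv)

# The career figure shown above: 190 granted out of 258 resolved.
print(f"{190 / 258:.1%}")  # 73.6%, displayed as 74%
```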

Statute-Specific Performance

§101: 9.8% (-30.2% vs TC avg)
§103: 60.1% (+20.1% vs TC avg)
§102: 18.4% (-21.6% vs TC avg)
§112: 7.1% (-32.9% vs TC avg)
Tech Center averages shown for comparison are estimates • Based on career data from 258 resolved cases
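The chart behind these numbers appears to plot, for each statute, how often this examiner's rejections rely on it, against a Tech Center baseline. One plausible way to tally such shares from raw office-action data is sketched below; the input records and the baseline values are hypothetical, not taken from the report:

```python
from collections import Counter

# Hypothetical records: one statute tag per rejection ground across the examiner's office actions.
rejection_grounds = ["§103", "§103", "§102", "§103", "§101", "§112", "§103", "§102"]

counts = Counter(rejection_grounds)
total = sum(counts.values())

# Hypothetical Tech Center baseline shares for comparison.
tc_baseline = {"§101": 0.15, "§102": 0.25, "§103": 0.45, "§112": 0.15}

for statute in sorted(counts):
    share = counts[statute] / total
    delta = share - tc_baseline.get(statute, 0.0)
    print(f"{statute}: {share:.1%} ({delta:+.1%} vs TC avg)")
```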

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Request for Continued Examination

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 3/12/2026 has been entered and made of record. Claims 1, 11, and 19 are amended. Claims 1-20 are pending. The previous rejections of claims 1-20 under 35 USC 103 are maintained, but have been updated as necessitated by amendment.

Specification

The lengthy specification has not been checked to the extent necessary to determine the presence of all possible minor errors. Applicant’s cooperation is requested in correcting any errors of which applicant may become aware in the specification.

Drawings

The drawings filed 7/3/2023 were accepted.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-9 and 11-20 are rejected under 35 U.S.C. 103 as being unpatentable over Ganz et al (US 20200066261 A1; filed 8/22/2018) in view of Patterson et al (US 20210272599 A1; filed 3/2/2021) and Baeuml et al (US 20230074406 A1; filed 11/22/2021).
With regard to claim 1, Ganz et al discloses A computing system for… content creation, comprising: a processor (Ganz et al, paragraph 89: “The example computing device 1002 as illustrated includes a processing system 1004, one or more computer-readable media 1006, and one or more I/O interface 1008 that are communicatively coupled, one to another.”); and a memory storing… and a chat application (Ganz et al, paragraph 4: “a messaging interface or chatbot”) that, in response to execution by the processor, cause the processor to (Ganz et al, paragraph 89: “The example computing device 1002 as illustrated includes a processing system 1004, one or more computer-readable media 1006, and one or more I/O interface 1008 that are communicatively coupled, one to another.”): in a chat conversation with a user in real-time (Ganz et al, paragraph 3: “Techniques for conversational image editing and enhancement in a digital media environment;” paragraph 27: “The image manipulation system generates a natural language suggestion such as, “I see your photo looks a bit dark. Can I brighten it up for you?” The user responds, “Sure, go ahead.” The image manipulation system then changes the lighting of the digital image to increase the aesthetic attribute score associated with lighting, such as to bring the aesthetic attribute score above the threshold value. Then, the image manipulation generates another suggestion including an option to edit the digital image, such as, “Now that your photo is brighter, how about we remove some of the blurriness?” In this way, the image manipulation system builds upon previous inputs from the user and previous edits to the digital image, thus providing a more aesthetically pleasing image in an easy-to-use, conversational manner;” the responding to user input in a conversational manner is interpreted as being in “real-time”), receive communication including a command from the user for interacting with… content (Ganz et al, paragraph 6: “The natural language conversation techniques described herein provide users with efficient digital image editing and enhancement options while using natural language commands in applications that users are already familiar with, and without having to learn a complex digital imaging user interface.”); provide the command directly to… in real-time as the command is received (Ganz et al, Fig. 2: The flow chart shows the natural language inputs 210 being received by the conversation module 208 and responding with the edited image(s) or natural language outputs), analyze the command and generate a natural language response and a recommended action as an immediate response to the received command, the recommended action being an action to implement on the video content based at least on the analyzed command (Ganz et al, Fig. 6A-6B: multiple natural language responses are shown, such as “Creating some combinations for you;” multiple actions are implemented based on the user inputs, such as the sharpened images in Fig. 6A; paragraph 4: “The computing device, as part of the conversation, provides feedback to the user that includes edits to the digital image based on the series of inputs, which may also include a set of edited variations to the digital image from which the user may select a preferred edit;” While Ganz doesn’t disclose how much time passes before responses, examiner is interpreting the conversational responses such as through the chatbot (see Ganz, paragraph 27, 54-55) as being in real-time and immediate.
Paragraph 55 also describes the response as being made “once a user’s intention is determined based on the natural language input,” which seems to mean the response is made in response to the input with no added delay); and implement the recommended action on the… content based at least on the analyzed command (Ganz et al, paragraph 4: “In one example, the computing device receives a series of inputs from the user. The system may then perform image editing operations as well as suggest other image editing operations as part of the natural language conversation.”).

However, Ganz et al does not disclose video content creation… a large language model… video content… use the large language model… video content.

Patterson et al teaches video content creation… video content… video content (Patterson et al, abstract: “automatic video processing that employ machine learning models to process input video and understand user video content… music selection and dialog based editing can likewise be automated via machine learning models”). It would have been obvious to a person of ordinary skill in the art before the effective filing date to have combined Ganz et al and Patterson et al such that the content being edited is video content. This would have enabled a user to edit a video without having to learn how to manually edit a video (Patterson et al, paragraph 3: “the art of video post-production (a.k.a. video “editing”) is viewed as extremely difficult to learn, and most conventional tools for this task are designed for, and exist only on, desktop computers based on the perception that desktop processing power is required for such editing”).

Baeuml et al teaches a large language model… the large language model (Baeuml et al, paragraph 5: “process the set of assistant outputs and context of the dialog session to generate a set of modified assistant outputs using one or more large language model (LLM) outputs generated using an LLM”). It would have been obvious to a person of ordinary skill in the art before the effective filing date to have combined Ganz et al, Patterson et al, and Baeuml et al such that an LLM is used to analyze and respond to the user input. This would have enabled the responses to be improved (Baeuml et al, paragraph 16: “the modified assistant outputs provided by the automated assistant may better resonate with the user of the client device;” the modified assistant outputs are the outputs of the LLM, as described in paragraphs 5-9 and the abstract of Baeuml et al).

With regards to claim 2, which depends on claim 1, Ganz et al discloses engage in navigational conversations to guide the user to use a tool on a user interface of a… editing application to edit the… content (Ganz et al, paragraph 4: “The system may then perform image editing operations as well as suggest other image editing operations as part of the natural language conversation;” the “tools” are interpreted as the other functions that are being used to edit the video in the interface). However, Ganz et al does not disclose the large language model is trained to… video editing… video content. Patterson et al teaches video editing… video content (Patterson et al, abstract: “automatic video processing that employ machine learning models to process input video and understand user video content… music selection and dialog based editing can likewise be automated via machine learning models”).
It would have been obvious to a person of ordinary skill in the art before the effective filing date to have combined Ganz et al, Patterson et al, and Baeuml et al such that the content being edited is video content. This would have enabled a user to edit a video without having to learn how to manually edit a video (Patterson et al, paragraph 3: “the art of video post-production (a.k.a. video “editing”) is viewed as extremely difficult to learn, and most conventional tools for this task are designed for, and exist only on, desktop computers based on the perception that desktop processing power is required for such editing”). Baeuml et al teaches the large language model is trained to… (Baeuml et al, paragraph 66: “outputs can be utilized to modify or re-train the LLM”). It would have been obvious to a person of ordinary skill in the art before the effective filing date to have combined Ganz et al, Patterson et al, and Baeuml et al such that the LLM is trained to perform the desired tasks. This would have enabled the model to be of high quality (Baeuml et al, paragraph 66: “Accordingly, in these implementations, the one or more corresponding LLM outputs generated using the one or more LLMs can be curated by the human reviewer to ensure quality of the one or more corresponding LLM outputs. Moreover, any non-discarded, re-indexed and/or curated LLM outputs can be utilized to modify or re-train the LLM”).

With regards to claim 3, which depends on claim 1, Ganz et al discloses engage in editing-focused conversations to suggest to the user one or more proposed edits to the… content (Ganz et al, paragraph 4: “The system may then perform image editing operations as well as suggest other image editing operations as part of the natural language conversation”). However, Ganz et al does not disclose the large language model is trained to… video content. Patterson et al teaches video content (Patterson et al, abstract: “automatic video processing that employ machine learning models to process input video and understand user video content… music selection and dialog based editing can likewise be automated via machine learning models”). It would have been obvious to a person of ordinary skill in the art before the effective filing date to have combined Ganz et al, Patterson et al, and Baeuml et al such that the content being edited is video content. This would have enabled a user to edit a video without having to learn how to manually edit a video (Patterson et al, paragraph 3: “the art of video post-production (a.k.a. video “editing”) is viewed as extremely difficult to learn, and most conventional tools for this task are designed for, and exist only on, desktop computers based on the perception that desktop processing power is required for such editing”). Baeuml et al teaches the large language model is trained to… (Baeuml et al, paragraph 66: “outputs can be utilized to modify or re-train the LLM”). It would have been obvious to a person of ordinary skill in the art before the effective filing date to have combined Ganz et al, Patterson et al, and Baeuml et al such that the LLM is trained to perform the desired tasks. This would have enabled the model to be of high quality (Baeuml et al, paragraph 66: “Accordingly, in these implementations, the one or more corresponding LLM outputs generated using the one or more LLMs can be curated by the human reviewer to ensure quality of the one or more corresponding LLM outputs.
Moreover, any non-discarded, re-indexed and/or curated LLM outputs can be utilized to modify or re-train the LLM”).

With regards to claim 4, which depends on claim 1, Ganz et al discloses engage in explorational conversations to suggest ideas for future… content based on the… content (Ganz et al, paragraph 4: “The system may then perform image editing operations as well as suggest other image editing operations as part of the natural language conversation;” paragraph 25: “The image manipulation system may also output an additional natural language suggestion to further edit the image utilizing the object recognition techniques, such as “Would you like to remove the red eye as well?” The additional natural language suggestion provides additional options to edit the digital image and builds upon the previous edits to the digital image and the progressing conversation with the user;” the future content can be interpreted as just the future edited content objects). However, Ganz et al does not disclose the large language model is trained to… video content. Patterson et al teaches video content (Patterson et al, abstract: “automatic video processing that employ machine learning models to process input video and understand user video content… music selection and dialog based editing can likewise be automated via machine learning models”). It would have been obvious to a person of ordinary skill in the art before the effective filing date to have combined Ganz et al, Patterson et al, and Baeuml et al such that the content being edited is video content. This would have enabled a user to edit a video without having to learn how to manually edit a video (Patterson et al, paragraph 3: “the art of video post-production (a.k.a. video “editing”) is viewed as extremely difficult to learn, and most conventional tools for this task are designed for, and exist only on, desktop computers based on the perception that desktop processing power is required for such editing”). Baeuml et al teaches the large language model is trained to… (Baeuml et al, paragraph 66: “outputs can be utilized to modify or re-train the LLM”). It would have been obvious to a person of ordinary skill in the art before the effective filing date to have combined Ganz et al, Patterson et al, and Baeuml et al such that the LLM is trained to perform the desired tasks. This would have enabled the model to be of high quality (Baeuml et al, paragraph 66: “Accordingly, in these implementations, the one or more corresponding LLM outputs generated using the one or more LLMs can be curated by the human reviewer to ensure quality of the one or more corresponding LLM outputs. Moreover, any non-discarded, re-indexed and/or curated LLM outputs can be utilized to modify or re-train the LLM”).
With regards to claim 5, which depends on claim 1, Ganz et al discloses a prompt manager configured to: process the communication from the user (Ganz et al, paragraph 39: “the natural language I/O module 116 interprets natural language commands and questions from the user”); identify the command from the user (Ganz et al, paragraph 39: “the natural language I/O module 116 interprets natural language commands and questions from the user (e.g., the user interaction 110)”); and identify an intent of the user (Ganz et al, paragraph 39: “the natural language I/O module 116 interprets natural language commands and questions from the user (e.g., the user interaction 110), and maps the natural language commands and questions to canonical intentions related to the digital image”), wherein the identified command and identified intent are received as input by the… (Ganz et al, paragraph 84: “Then, a natural language suggestion to edit the digital image is generated based on the mapped canonical intention and the generated aesthetic attribute score (block 908).”). Baeuml et al teaches the large language model (Baeuml et al, paragraph 66: “outputs can be utilized to modify or re-train the LLM”). It would have been obvious to a person of ordinary skill in the art before the effective filing date to have combined Ganz et al, Patterson et al, and Baeuml et al such that the LLM is trained to perform the desired tasks. This would have enabled the model to be of high quality (Baeuml et al, paragraph 66: “Accordingly, in these implementations, the one or more corresponding LLM outputs generated using the one or more LLMs can be curated by the human reviewer to ensure quality of the one or more corresponding LLM outputs. Moreover, any non-discarded, re-indexed and/or curated LLM outputs can be utilized to modify or re-train the LLM”).

With regards to claim 6, which depends on claim 1, Ganz et al discloses generates the natural language response and the recommended action for the… content based on at least one selected from the group of: the… content being created, a geo-location of the user, and content creation goals of the user (Ganz et al, paragraph 84: “a natural language suggestion to edit the digital image is generated based on the mapped canonical intention and the generated aesthetic attribute score (block 908);” also see Fig. 9 for the flow chart including block 908; the mapped canonical intention is the interpreted goal of the user). However, Ganz et al does not disclose the large language model… video content… video content. Patterson et al teaches video content… video content (Patterson et al, abstract: “automatic video processing that employ machine learning models to process input video and understand user video content… music selection and dialog based editing can likewise be automated via machine learning models”). It would have been obvious to a person of ordinary skill in the art before the effective filing date to have combined Ganz et al, Patterson et al, and Baeuml et al such that the content being edited is video content. This would have enabled a user to edit a video without having to learn how to manually edit a video (Patterson et al, paragraph 3: “the art of video post-production (a.k.a. video “editing”) is viewed as extremely difficult to learn, and most conventional tools for this task are designed for, and exist only on, desktop computers based on the perception that desktop processing power is required for such editing”).
Baeuml et al teaches a large language model (Baeuml et al, paragraph 5: “process the set of assistant outputs and context of the dialog session to generate a set of modified assistant outputs using one or more large language model (LLM) outputs generated using an LLM”). It would have been obvious to a person of ordinary skill in the art before the effective filing date to have combined Ganz et al, Patterson et al, and Baeuml et al such that an LLM is used to analyze and respond to the user input. This would have enabled the responses to be improved (Baeuml et al, paragraph 16: “the modified assistant outputs provided by the automated assistant may better resonate with the user of the client device;” the modified assistant outputs are the outputs of the LLM, as described in paragraphs 5-9 and the abstract of Baeuml et al).

With regards to claim 7, which depends on claim 6, Ganz et al discloses metadata of the… content is generated (Ganz et al, paragraph 23: “The image manipulation system generates aesthetic attribute scores for aesthetic attributes of the digital image.” The aesthetic attribute scores are data about the content data, so it is interpreted as metadata); and the… metadata is received as input by the… [conversation model] (Ganz et al, paragraph 5: “a natural language suggestion to edit the digital image based on the aesthetic attribute scores as part of the natural language conversation. The aesthetic attribute scores correspond to aesthetic attributes of the digital image, such as color harmony, content, depth of field, and so forth”). However, Ganz et al does not disclose video… video content… video… the large language model. Patterson et al teaches video… video content… video… (Patterson et al, abstract: “automatic video processing that employ machine learning models to process input video and understand user video content… music selection and dialog based editing can likewise be automated via machine learning models”). It would have been obvious to a person of ordinary skill in the art before the effective filing date to have combined Ganz et al, Patterson et al, and Baeuml et al such that the content being edited is video content. This would have enabled a user to edit a video without having to learn how to manually edit a video (Patterson et al, paragraph 3: “the art of video post-production (a.k.a. video “editing”) is viewed as extremely difficult to learn, and most conventional tools for this task are designed for, and exist only on, desktop computers based on the perception that desktop processing power is required for such editing”). Baeuml et al teaches the large language model (Baeuml et al, paragraph 5: “process the set of assistant outputs and context of the dialog session to generate a set of modified assistant outputs using one or more large language model (LLM) outputs generated using an LLM”). It would have been obvious to a person of ordinary skill in the art before the effective filing date to have combined Ganz et al, Patterson et al, and Baeuml et al such that an LLM is used to analyze and respond to the user input. This would have enabled the responses to be improved (Baeuml et al, paragraph 16: “the modified assistant outputs provided by the automated assistant may better resonate with the user of the client device;” the modified assistant outputs are the outputs of the LLM, as described in paragraphs 5-9 and the abstract of Baeuml et al).
With regards to claim 8, which depends on claim 7, Ganz et al does not disclose wherein the video metadata comprises textual descriptions of visual and/or audio content of the video content. However, Patterson et al teaches wherein the video metadata comprises textual descriptions of visual and/or audio content of the video content (Patterson et al, paragraph 154: “The process can continue with extraction of features from the input clips using, for example, a convolutional neural network. In one example, the features extracted by the CNN from FIG. 30 are: “clouds, direct sun/sunny, natural, man-made, open area, far-away horizon, Hazy, Serene, Wide, Full Shot, Geometric, Shallow DOF, Deep DOF, Warm (tones), Face.” In a further step, the input clip features are mapped to soundtrack tags using, in one example, a manually created dictionary generated by a film expert.”). It would have been obvious to a person of ordinary skill in the art before the effective filing date to have combined Ganz et al, Patterson et al, and Baeuml et al such that the metadata includes textual descriptions of the media content. This would have enabled the invention to match the video content with other content (Patterson et al, paragraph 513: “Some embodiments extend these operations to project music into a semantic embedding space, which yields matching between the video and music in the semantic space. The matches can become the source of recommended soundtracks for user source video based on the video's visual content”).

With regards to claim 9, which depends on claim 1, Ganz et al discloses the chat application (Ganz et al, paragraph 4: “a messaging interface or chatbot”). However, Ganz et al does not disclose the… application evaluates whether the video content is ready to be published; and responsive to determining that the video content is ready to be published, the… application guides the user to complete a content publishing step. Patterson et al teaches the… application evaluates whether the video content is ready to be published (Patterson et al, paragraph 4: “In further examples, the editing application is configured to build a “rough-cut” video from a given user input source. The system can then accept and/or employ user input on the rough-cut video to further refine and create a final output video… the rough-cut implementations can use additional machine learning models to automatically generate “fine tune” edits that may be used as selections to present to users in the application, and/or to automatically apply to a rough-cut version to yield a final version of the video.”); and responsive to determining that the video content is ready to be published, the… application guides the user to complete a content publishing step (Patterson et al, paragraph 57: “Once output video is generated by the processing system users can consume the generated video clip. In some embodiments, the users can access functionality provided by the interface component 204 to share generated video on a publication platform”). It would have been obvious to a person of ordinary skill in the art before the effective filing date to have combined Ganz et al, Patterson et al, and Baeuml et al such that the content can be shared using the application after editing. This would have enabled a user to share the edited video with others (Patterson et al, paragraph 57: “the user can access the processing system 200 and designate a generated video as publicly available and/or publish the generated video to one or more users”).
Claims 11-17 recite substantially similar limitations to claims 1-7 respectively and are thus rejected along the same rationales. Claim 18 recites substantially similar limitations to claim 9 and is thus rejected along the same rationale. Claim 19 recites substantially similar limitations to claim 1 and is thus rejected along the same rationale. Claim 20 recites substantially similar limitations to claim 1 and is thus rejected along the same rationale.

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Ganz et al in view of Patterson et al and Baeuml et al, and further in view of Chung et al (US 20210027065 A1; filed 7/26/2019).

With regards to claim 10, which depends on claim 1, Ganz et al does not disclose wherein performance analytics data from the video content is used to train the large language model. Baeuml et al teaches the large language model (Baeuml et al, paragraph 5: “process the set of assistant outputs and context of the dialog session to generate a set of modified assistant outputs using one or more large language model (LLM) outputs generated using an LLM”). It would have been obvious to a person of ordinary skill in the art before the effective filing date to have combined Ganz et al, Patterson et al, and Baeuml et al such that an LLM is used to analyze and respond to the user input. This would have enabled the responses to be improved (Baeuml et al, paragraph 16: “the modified assistant outputs provided by the automated assistant may better resonate with the user of the client device;” the modified assistant outputs are the outputs of the LLM, as described in paragraphs 5-9 and the abstract of Baeuml et al). Chung et al teaches wherein performance analytics data from the video content is used to train the… model (Chung et al, paragraph 4: “collect a set of training videos as training data, wherein the set of training videos are labeled with one or more labels based on one or more video quality metrics associated with an evaluation objective. A machine learning model is trained based on the training data. A video to be evaluated is received. The video is assigned to a first video quality category of a plurality of video quality categories based on the machine learning model;” paragraph 5: “the video quality metric pertains to viewer retention time”). It would have been obvious to a person of ordinary skill in the art before the effective filing date to have combined Ganz et al, Patterson et al, Baeuml et al, and Chung et al such that the machine learning model is trained using analytics data from video content. This would have enabled the invention to predict the quality of the created content based on quality categories such as performance metrics (Chung et al, paragraph 12: “the video is assigned to the first video quality category of the plurality of video quality categories based on the first video quality category having a highest likelihood score of the plurality of video quality categories”).

Response to Arguments

Applicant's arguments filed 3/12/2026, with regards to the previous 35 USC 103 rejections and the amended independent claims, have been fully considered but they are not persuasive.
Applicant argues that Ganz et al does not disclose the amendments to the independent claims, which now describe the chat conversation responses (including the recommended action) as being in “real-time” and “immediate.” While examiner agrees with applicant’s comments that Ganz et al does not disclose the LLM and video editing, examiner disagrees with applicant that Ganz et al does not teach real-time responses. Ganz et al discloses responding to the user inputs when they are received (see Ganz et al, paragraphs 54-55: “Once a conversation is initiated…” “Once a user’s intention is determined…”). The flowchart of a conversation is also displayed in Ganz et al, fig. 2. The responses from the image manipulation system occur directly in response to the natural language inputs, and there is no indication of a delay of any kind. In addition, the use of the term “immediate” is not defined as having a specified time limit, so even if Ganz’s system were to respond slowly, it could still be interpreted as being in real-time or immediate. Thus, the argument is not persuasive.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BRODERICK C ANDERSON whose telephone number is (313) 446-6566. The examiner can normally be reached Monday-Tuesday, Thursday-Saturday 9-5 PST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Stephen Hong, can be reached at 571-272-4124. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/B.C.A/
Examiner, Art Unit 2178

/STEPHEN S HONG/
Supervisory Patent Examiner, Art Unit 2178
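For readers trying to visualize what the rejection is dissecting: as characterized in the claim-1 mapping above, the application describes a chat application that takes a user command in real time, hands it to a large language model, gets back a natural language reply plus a recommended edit, and applies that edit to the video content. A rough sketch of that pipeline follows; every identifier here, including the stubbed model call, is a hypothetical illustration and is not drawn from the application or the cited references:

```python
from dataclasses import dataclass

@dataclass
class ChatTurn:
    reply: str               # natural language response shown to the user
    recommended_action: str  # edit suggested for the video, e.g. "add_captions"

def call_llm(command: str, video_metadata: dict) -> ChatTurn:
    """Stand-in for the large language model; a real system would call a hosted model here."""
    return ChatTurn(reply=f"Sure, working on it: {command}",
                    recommended_action="add_captions")

def apply_action(action: str, video_path: str) -> None:
    """Stand-in for the video editor that implements the recommended action."""
    print(f"Applying {action!r} to {video_path}")

def handle_chat_message(command: str, video_path: str) -> str:
    # Analyze the command and generate the response and the recommended action...
    turn = call_llm(command, video_metadata={"path": video_path})
    # ...then implement the recommended action on the video content.
    apply_action(turn.recommended_action, video_path)
    return turn.reply

print(handle_chat_message("make the intro punchier", "draft_01.mp4"))
```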

Prosecution Timeline

Jul 03, 2023: Application Filed
Jun 09, 2025: Non-Final Rejection — §103
Sep 10, 2025: Response Filed
Jan 05, 2026: Final Rejection — §103
Mar 12, 2026: Request for Continued Examination
Mar 18, 2026: Response after Non-Final Action
Mar 21, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12572199
METHOD AND APPARATUS FOR GENERATING GROUP EYE MOVEMENT TRAJECTORY, COMPUTING DEVICE, AND STORAGE MEDIUM
2y 5m to grant; granted Mar 10, 2026
Patent 12564337
RECURRENT NEURAL NETWORK FOR TUMOR MOVEMENT PREDICTION
2y 5m to grant; granted Mar 03, 2026
Patent 12566821
GENERATIVE SYSTEM FOR WRITING ENTITY RECOMMENDATIONS
2y 5m to grant; granted Mar 03, 2026
Patent 12561863
CREATING AND MODIFYING CIRCULAR ARCS WHILE MAINTAINING ARC QUALITIES WITHIN A DIGITAL DESIGN DOCUMENT
2y 5m to grant; granted Feb 24, 2026
Patent 12547888
METHOD, APPARATUS, DEVICE, AND STORAGE MEDIUM FOR TRAINING IMAGE SEMANTIC SEGMENTATION NETWORK
2y 5m to grant; granted Feb 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 74%
With Interview: 93% (+19.1%)
Median Time to Grant: 3y 1m
PTA Risk: High
Based on 258 resolved cases by this examiner. Grant probability derived from career allow rate.
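The footnote implies the headline probability is just the career allow rate, and the with-interview figure looks like that base rate plus the interview lift. The rounding below is my assumption, but the arithmetic matches the displayed values:

```python
granted, resolved = 190, 258
base = granted / resolved        # 0.736, shown as 74%
with_interview = base + 0.191    # base plus the +19.1% interview lift -> 0.927, shown as 93%
print(f"base {base:.0%}, with interview {with_interview:.0%}")
```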
