Prosecution Insights
Last updated: April 19, 2026
Application No. 17/061,041

IMAGE GENERATION USING ONE OR MORE NEURAL NETWORKS

Non-Final OA — §101, §103
Filed: Oct 01, 2020
Examiner: KY, KEVIN (supervisor: RUDOLPH, VINCENT M)
Art Unit: 2671
Tech Center: 2600 — Communications
Assignee: Nvidia Corporation
OA Round: 7 (Non-Final)
Grant Probability: 44% (Moderate)
Expected OA Rounds: 7-8
Time to Grant: 5y 1m
With Interview: 86%

Examiner Intelligence

Career Allow Rate: 44% (114 granted / 260 resolved; -18.2% vs TC avg)
Interview Lift: +42.0% (strong; allow rate among resolved cases with vs. without an interview)
Typical Timeline: 5y 1m average prosecution; 37 applications currently pending
Career History: 297 total applications across all art units
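The headline numbers are internally consistent, and the arithmetic is worth making explicit: 114 of 260 resolved cases is about 43.8%, displayed as 44%, and the 86% "with interview" figure is that base rate plus the +42-point lift. A minimal sketch of the reconstruction in Python follows; since the page does not show the with/without-interview split of the 260 cases, the lift is applied as a simple additive adjustment, which is an assumption rather than the tool's documented model.

# Reconstructing the headline figures from the counts shown above.
# The with/without-interview split of the 260 resolved cases is not
# given, so the +42-point lift is treated as a plain additive
# adjustment -- an assumption, not the tool's documented model.

granted, resolved = 114, 260
allow_rate = granted / resolved
print(f"career allow rate:  {allow_rate:.1%}")        # ~43.8%, shown as 44%

interview_lift = 0.42                                 # +42.0 percentage points
with_interview = min(allow_rate + interview_lift, 1.0)
print(f"with interview:     {with_interview:.1%}")    # ~85.8%, shown as 86%

tc_delta = -0.182                                     # -18.2% vs TC avg
tc_average = allow_rate - tc_delta
print(f"implied TC average: {tc_average:.1%}")        # ~62.0%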

Statute-Specific Performance

§101: 12.4% (-27.6% vs TC avg)
§103: 56.5% (+16.5% vs TC avg)
§102: 17.5% (-22.5% vs TC avg)
§112: 10.9% (-29.1% vs TC avg)
Tech Center averages are estimates; based on career data from 260 resolved cases.
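One detail the per-statute figures reveal: subtracting each signed delta from its rate recovers the same Tech Center average, 40.0%, presumably the reference line the original chart drew. A quick check, assuming the deltas are plain percentage-point differences as the footnote suggests:

# Each statute's rate minus its signed "vs TC avg" delta recovers the
# TC average estimate (the chart's former reference line). Assumes the
# deltas are simple percentage-point differences.

rates = {
    "§101": (12.4, -27.6),
    "§103": (56.5, +16.5),
    "§102": (17.5, -22.5),
    "§112": (10.9, -29.1),
}

for statute, (rate, delta) in rates.items():
    print(f"{statute}: implied TC avg = {rate - delta:.1f}%")
# every statute works out to 40.0%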

Office Action

Rejections: §101, §103
DETAILED ACTION

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 1/20/2026 has been entered.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1, 7, 13, 19 and 25 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception without significantly more. The claims recite limitations that fall under the abstract idea groupings of "Mental Processes" and "Certain Methods of Organizing Human Activity" (Step 2A, Prong One): specifically, the manipulation and generation of data representations (e.g., segmentation masks and images) based on inputs (speech information) using mathematical models such as neural networks. These are conceptual operations that could be performed in the human mind, by mathematical algorithms, or with pen and paper, and are not inherently tied to any specific improvement in computer functionality. The claims do not integrate the abstract idea into a practical application because they recite generic computer components (e.g., "one or more processors," "circuitry," and "one or more neural networks") without a specific technological improvement or meaningful limitation beyond applying the abstract idea on a computer (Step 2A, Prong Two). The generation of segmentation masks and images from speech input through segmentation and image generation neural networks is a conventional application of machine learning models and does not improve the functioning of a computer or any other technology. Under Step 2B, the claims do not include additional elements that amount to significantly more than the judicial exception. The use of processors and neural networks to process data is a well-understood, routine, and conventional activity in the fields of computer science and artificial intelligence. The court decisions referenced in MPEP § 2106.05(d) establish that implementing abstract ideas using conventional computer functions does not transform the claims into a patent-eligible application. Therefore, the claims are ineligible under § 101.

Claims 2, 8, 14, 20 and 26 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite wherein the one or more circuits are further to receive the speech input detected as uttered by the one or more users, and determine one or more descriptive features corresponding to the speech input. Again, a person can listen to a speech/utterance/voice and generate an image of what is being said either in their mind or using pen and paper, etc.

Claims 6, 12, 18, 24 and 30 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite wherein the one or more circuits are further to generate one or more updated images based, at least in part, upon additional speech input received from the one or more users. Again, a person can listen to additional speech/utterance/voice and generate an updated image of what is being said either in their mind or using pen and paper, etc.

Claims 3-5, 9-11, 15-17, 21-23 and 27-29 are NOT rejected under 35 USC 101, as these claims appear to recite significantly more and amount to more than can reasonably be explained as mental processes.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 7, 13, 19 and 25 are rejected under 35 U.S.C. 103 as being unpatentable over Price et al (US 20210312635) in view of Ayush et al (US 20210142539).

Regarding claim 1, Price discloses one or more processors (Fig. 10 processors 1014), comprising: circuitry (Fig. 10 processors 1014) to cause one or more neural networks (¶21 the integrated segmentation system can be implemented using one or more neural networks) to execute a plurality of operations including: at least one operation to generate a first segmentation mask (¶74 At block 304, an interaction is received. The interaction can be an interactive object selection of an image by a user. Such interactive object selection can be based on an interactive action (e.g., click, scribble, bounding box, and/or language); ¶75 At block 306, segmentation of the image is performed. Image segmentation is the process of partitioning an image into at least one segment; ¶79 At block 310, a segmentation mask is generated) from first speech information corresponding to one or more users (¶27 For instance, when a spoken language command is received, a language-based segmentation method (e.g., PhraseCut) can be used); at least one operation to generate a second segmentation mask by modifying the first segmentation mask based, at least in part, on second speech information corresponding to the one or more users (¶27 For instance, when a spoken language command is received, a language-based segmentation method (e.g., PhraseCut) can be used; ¶80 At block 312, the segmentation mask can be presented. Presentation of the segmentation mask allows a user to see and visualize the segmented area(s) of an image. The user can further interact with the image and displayed segmentation mask using additional interactive object selection(s). Such an interactive object selection can indicate further refinements that the user desires to have made to the displayed segmentation mask. From these additional interactive object selection(s), an updated segmentation mask (e.g., optimized segmentation mask) can be displayed to the user; e.g. steps 304-312 are repeated, which includes generating another segmentation mask); and at least one operation to generate one or more images (¶80 At block 312, the segmentation mask can be presented).

Price fails to specifically teach, but Ayush teaches, at least one operation to generate one or more images at least in part by inputting the second segmentation mask to one or more layers of one or more image generation neural networks (¶97 As shown in FIG. 8, the virtual try-on digital image generation system 102 can utilize a neural network 804 to generate the virtual try-on digital image 814. More specifically, like the neural network 602 described above, the neural network 804 can include a 12-layer U-Net; the virtual try-on digital image generation system 102 inputs the fine warped product digital image 322, the corrected segmentation mask 608, and texture translation priors 802 of the model digital image 302 into the neural network 804; ¶98 generate two outputs—an RGB rendered person image 812 and a composition mask 810; ¶99 Using these two outputs, the virtual try-on digital image generation system 102 further generates the virtual try-on digital image 814 by combining the composition mask 810, the rendered person image 812, and the fine warped product digital image 322). Therefore, it would have been obvious to one with ordinary skill in the art before the effective filing date of the invention to have implemented this teaching from Ayush into the processor including neural networks for manipulating digital images as disclosed by Price. The motivation for doing this is to provide accurate and efficient digital imaging systems.

Regarding claim 7 (drawn to a system): the proposed combination of Price and Ayush, explained in the rejection of processor claim 1, renders obvious the steps of the system of claim 7 because these steps occur in the operation of the proposed combination as discussed above. Thus, arguments similar to those presented above for claim 1 are equally applicable to claim 7. See further Price ¶125-130.

Regarding claim 13 (drawn to a method): the proposed combination of Price and Ayush, explained in the rejection of processor claim 1, renders obvious the steps of the method of claim 13 because these steps occur in the operation of the proposed combination as discussed above. Thus, arguments similar to those presented above for claim 1 are equally applicable to claim 13. See further Price ¶125-130.

Regarding claim 19 (drawn to a CRM): the proposed combination of Price and Ayush, explained in the rejection of processor claim 1, renders obvious the steps of the computer readable medium of claim 19 because these steps occur in the operation of the proposed combination as discussed above. Thus, arguments similar to those presented above for claim 1 are equally applicable to claim 19. See further Price ¶125-130.

Regarding claim 25 (drawn to a system): the proposed combination of Price and Ayush, explained in the rejection of processor claim 1, renders obvious the steps of the system of claim 25 because these steps occur in the operation of the proposed combination as discussed above. Thus, arguments similar to those presented above for claim 1 are equally applicable to claim 25. See further Price ¶46, e.g. "data store 202 can be used to store a neural network system capable of being used to perform optimal segmentation of an image by integrating multiple segmentation methods. In particular, such optimal segmentation can be based on deep learning techniques, further discussed below with reference to integration engine 210. Such a neural network system can be comprised of one or more neural networks."

Claims 2, 6, 8, 12, 14, 20, 24, 26 and 30 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Price and Ayush as applied to claims 1, 7, 13, 19 and 25 above, and further in view of Wilson et al (US 20220068296).

Regarding claim 2, the combination of Price and Ayush discloses the one or more processors of claim 1, but fails to teach, where Wilson teaches, wherein the circuitry is further to receive the second speech information as uttered by the one or more users (¶26 Program 150 detects conversational utterance (step 202); ¶41 program 150 dynamically updates a generated and displayed image representation as new utterances are detected or if user feedback allows a more accurate (e.g., retrained model) generated image); and determine one or more descriptive features corresponding to the second speech information (¶27 Responsive to program 150 detecting a conversational utterance, program 150 utilizes natural language processing (NLP) techniques and corpus linguistic analysis techniques (e.g., syntactic analysis, etc.) to identify parts of speech and syntactic relations between various portions of the utterance. Program 150 utilizes corpus linguistic analysis techniques, such as part-of-speech tagging, statistical evaluations, optimization of rule-bases, and knowledge discovery methods, to parse, identify, and evaluate portions of the utterance. In an embodiment, program 150 utilizes part-of-speech tagging to identify the particular part of speech of one or more words in an utterance based on its relationship with adjacent and related words. For example, program 150 utilizes the aforementioned techniques to identify the nouns, adjectives, adverbs, and verbs). Therefore, it would have been obvious to one with ordinary skill in the art before the effective filing date of the invention to have implemented these teachings from Wilson into the processing method as disclosed by the combination of Price and Ayush. The motivation for doing this is to improve techniques for conversational image generation.

Regarding claim 6, the combination of Price and Ayush discloses the one or more processors of claim 1, but fails to teach, where Wilson teaches, wherein the circuitry is further to generate one or more updated images based, at least in part, upon an additional speech input received from the one or more users (¶41 program 150 dynamically updates a generated and displayed image representation as new utterances are detected or if user feedback allows a more accurate (e.g., retrained model) generated image). Therefore, it would have been obvious to one with ordinary skill in the art before the effective filing date of the invention to have implemented this teaching from Wilson into the processing method as disclosed by the combination of Price and Ayush. The motivation for doing this is to improve techniques for conversational image generation.

Regarding claims 8 & 12 (drawn to a system): the proposed combination of Price, Ayush and Wilson, explained in the rejection of processor claims 2 & 6, renders obvious the steps of the system of claims 8 & 12 because these steps occur in the operation of the proposed combination as discussed above. Thus, arguments similar to those presented above for claims 2 & 6 are equally applicable to claims 8 & 12. See further Price ¶125-130.

Regarding claim 14 (drawn to a method): the proposed combination of Price, Ayush and Wilson, explained in the rejection of processor claim 2, renders obvious the steps of the method of claim 14 because these steps occur in the operation of the proposed combination as discussed above. Thus, arguments similar to those presented above for claim 2 are equally applicable to claim 14. See further Price ¶125-130.

Regarding claims 20 & 24 (drawn to a CRM): the proposed combination of Price, Ayush and Wilson, explained in the rejection of processor claims 2 & 6, renders obvious the steps of the computer readable medium of claims 20 & 24 because these steps occur in the operation of the proposed combination as discussed above. Thus, arguments similar to those presented above for claims 2 & 6 are equally applicable to claims 20 & 24. See further Price ¶125-130.

Regarding claim 26 (drawn to a system): the proposed combination of Price, Ayush and Wilson, explained in the rejection of processor claim 2, renders obvious the steps of the system of claim 26 because these steps occur in the operation of the proposed combination as discussed above. Thus, arguments similar to those presented above for claim 2 are equally applicable to claim 26. See further Price ¶125-130.

Regarding claim 30, the combination of Price and Ayush discloses the image generation system of claim 25, wherein the one or more image generation neural networks are to render objects within one or more regions indicated by the second segmentation mask (Ayush ¶97 As shown in FIG. 8, the virtual try-on digital image generation system 102 can utilize a neural network 804 to generate the virtual try-on digital image 814 (e.g. within the region of segmentation mask 608)), but fails to teach, where Wilson teaches, the one or more processors are further to generate one or more updated images based, at least in part, upon additional speech input received from the one or more users (¶41 program 150 dynamically updates a generated and displayed image representation as new utterances are detected or if user feedback allows a more accurate (e.g., retrained model) generated image). Therefore, it would have been obvious to one with ordinary skill in the art before the effective filing date of the invention to have implemented this teaching from Wilson into the processing method as disclosed by the combination of Price and Ayush. The motivation for doing this is to improve techniques for conversational image generation.

Claims 3-5, 9-11, 15-17, 21-23 and 27-29 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Price, Ayush and Wilson as applied to claims 2, 8, 14, 20 and 26 above, and further in view of Zheng et al (US 20180137551).

Regarding claim 3, the combination of Price, Ayush and Wilson discloses the one or more processors of claim 2, wherein the second segmentation mask is a semantic segmentation mask that includes a plurality of regions associated with a respective object type to be rendered in the one or more images (Price ¶52, ¶76, ¶89 the deep learning techniques can include instance-level semantic segmentation), but fails to teach wherein the one or more neural networks include a first one or more neural networks to infer a semantic segmentation mask based at least in part upon the one or more descriptive features corresponding to the second speech information. Zheng teaches this limitation (¶103-105 The input query image may be masked by the deep neural network 804 to exclude regions that are not sufficiently related to the visual text content. An example of the mask is binary mask 1002, which allows only the portion of the photograph 902 (near the center of the photograph 902 in this case) that contains the visual text content to be passed through). Therefore, it would have been obvious to one with ordinary skill in the art before the effective filing date of the invention to have implemented this teaching from Zheng into the processor as disclosed by the combination of Price, Ayush and Wilson. The motivation for doing this is to improve search results by refining a user input.

Regarding claim 4, the combination of Price, Ayush, Wilson and Zheng discloses the one or more processors of claim 3, wherein the one or more neural networks include a second one or more neural networks that comprise the one or more image generation neural networks and are to generate the one or more images at least in part by inputting the semantic segmentation mask to one or more intermediate layers of the one or more image generation neural networks, wherein the first one or more neural networks are distinct with respect to the second one or more neural networks (Ayush ¶49 the system environment can include one or more neural networks as part of the virtual try-on digital image generation system 102, stored within a database, included as part of the client application 110, or hosted on the server(s) 104; ¶97 As shown in FIG. 8, the virtual try-on digital image generation system 102 can utilize a neural network 804 to generate the virtual try-on digital image 814. More specifically, like the neural network 602 described above, the neural network 804 can include a 12-layer U-Net; the virtual try-on digital image generation system 102 inputs the fine warped product digital image 322, the corrected segmentation mask 608, and texture translation priors 802 of the model digital image 302 into the neural network 804; ¶98 generate two outputs—an RGB rendered person image 812 and a composition mask 810; ¶99 Using these two outputs, the virtual try-on digital image generation system 102 further generates the virtual try-on digital image 814 by combining the composition mask 810, the rendered person image 812, and the fine warped product digital image 322). The motivation to combine the references is discussed above in the rejection of claim 1.

Regarding claim 5, the combination of Price, Ayush, Wilson and Zheng discloses the one or more processors of claim 4, wherein the second one or more neural networks are further to generate the one or more images based further upon one or more style vectors determined from the second speech information, the one or more style vectors corresponding to a visual aspect of one or more objects represented in the semantic segmentation mask (Zheng ¶42 The speech recognition component 210 may convert audio signals (e.g., spoken utterances) into text; ¶59 A feature extraction component operates to convert raw audio waveform to some-dimensional vector of numbers that represents the sound; ¶92-94 An image signature block 808 may produce a binary hash or "image signature" that concisely describes an image or image portion, such as the localized and isolated visual text content; Each image signature may comprise a vector of binary numbers for example, also referred to as a binary hash; The visual similarity measure may be based on the image signature or hash value that semantically represents a localized and isolated visual text portion, for example). Therefore, it would have been obvious to one with ordinary skill in the art before the effective filing date of the invention to have implemented this teaching from Zheng into the processor as disclosed by the combination of Price, Ayush and Wilson. The motivation for doing this is to improve search results by refining a user input.

Regarding claims 9-11 (drawn to a system): the proposed combination of Price, Ayush, Wilson and Zheng, explained in the rejection of processor claims 3-5, renders obvious the steps of the system of claims 9-11 because these steps occur in the operation of the proposed combination as discussed above. Thus, arguments similar to those presented above for claims 3-5 are equally applicable to claims 9-11. See further Price ¶125-130 & Zheng ¶64 & ¶72.

Regarding claims 15-17 (drawn to a method): the proposed combination of Price, Ayush, Wilson and Zheng, explained in the rejection of processor claims 3-5, renders obvious the steps of the method of claims 15-17 because these steps occur in the operation of the proposed combination as discussed above. Thus, arguments similar to those presented above for claims 3-5 are equally applicable to claims 15-17. See further Price ¶125-130 & Zheng ¶64 & ¶72.

Regarding claims 21-23 (drawn to a CRM): the proposed combination of Price, Ayush, Wilson and Zheng, explained in the rejection of processor claims 3-5, renders obvious the steps of the computer readable medium of claims 21-23 because these steps occur in the operation of the proposed combination as discussed above. Thus, arguments similar to those presented above for claims 3-5 are equally applicable to claims 21-23. See further Price ¶125-130 & Zheng ¶64 & ¶72.

Regarding claims 27-29 (drawn to a system): the proposed combination of Price, Ayush, Wilson and Zheng, explained in the rejection of processor claims 3-5, renders obvious the steps of the system of claims 27-29 because these steps occur in the operation of the proposed combination as discussed above. Thus, arguments similar to those presented above for claims 3-5 are equally applicable to claims 27-29. See further Price ¶125-130 & Zheng ¶64 & ¶72.

Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over the combination of Price and Ayush as applied to claim 13 above, and further in view of Kong et al (US 20190164322).

Regarding claim 18, the combination of Price and Ayush discloses the method of claim 13, but fails to teach, where Kong teaches, further comprising: causing the one or more neural networks (¶36 data store 202 can be used to store a neural network system capable of being used to segment an image using deep learning techniques, further discussed below with reference to segmentation engine 206. Such a neural network system can be comprised of one or more neural networks) to generate one or more updated images based, at least in part, upon an additional speech input obtained from the one or more users (¶43 For example, a user may select or input a menu command, a mouse or touch input using a touch and/or click, lasso and/or marquee tool and/or a voice input to "Show Selections," "Show Segments," or "Show Masks," or some combination thereof; ¶77 An NUI may implement any combination of speech recognition, touch and stylus recognition, facial recognition, biometric recognition, gesture recognition both on screen and adjacent to the screen, air gestures, head and eye tracking, and touch recognition associated with displays on computing device 700; ¶72 the image can be edited using the selected mask), wherein: the second segmentation mask includes a semantic segmentation mask (¶65 The deep learning techniques can include instance-level semantic segmentation; ¶66 At block 506, the segments as generated at block 504, can be presented as selectable masks); and the additional speech input is used to update the semantic segmentation mask to include a plurality of individual regions associated with respective object types (¶69 For instance, in a portrait of an individual, the image can be segmented into a segment of the individual's face and a segment of the background or, if more detail is desired, into segments of the individual's eyes, teeth, hair, etc.; ¶70 vocally indicates a selection; ¶72 At block 612, the image can be edited using the selected mask. Current editing manipulations can be displayed in the editing zone graphical user interface of an image editing system. Such edits can include changes made to levels, curves, hue and saturation, black and white, vibrance, color balance, etc). Therefore, it would have been obvious to one with ordinary skill in the art before the effective filing date of the invention to have implemented these teachings from Kong into the processing method as disclosed by the combination of Price and Ayush. The motivation for doing this is to improve methods and systems for presenting and using multiple masks based on a segmented image when editing the image.

Response to Arguments

Applicant's arguments filed 1/20/2026 have been fully considered but they are not persuasive.

Regarding claims 1, 7, 13, 19 and 25, the applicant argues that the claims are not directed to a mental process because they recite execution of operations by a neural network, which cannot be practically performed in the human mind. The examiner respectfully disagrees. The operations of claims 1, 7, 13, 19 and 25 involve processing, analyzing, and transforming information using mathematical models, including neural networks. Neural networks are fundamentally mathematical constructs that perform mathematical operations such as matrix multiplication, vector transformations, and nonlinear activation functions. The recited operations correspond to concepts that can be performed mentally or conceptually, such as interpreting speech describing objects, determining the spatial arrangement of objects, modifying object arrangements, and producing visual representations. The claims merely recite performing these operations using generic computer components and neural networks as tools. The mere recitation of generic computer components, including processors and neural networks, does not negate the abstract nature of the claimed operations.

Applicant's argument that the claims integrate the abstract idea into a practical application is not persuasive. Claims 1, 7, 13, 19 and 25 merely recite using one or more processors and neural networks to generate and modify segmentation masks and generate images based on those masks. These limitations describe generic computer components performing their ordinary functions of processing input data and producing output data. The claims do not recite any specific improvement to computer functionality, neural network architecture, image generation technology, or processor operation. Instead, neural networks are invoked as tools to perform abstract data processing operations.

Applicant's argument that claims 1, 7, 13, 19 and 25 recite significantly more is also not persuasive. The additional elements recited in the claims, including processors, circuitry, and neural networks, are generic computer components performing their conventional functions of processing and transforming data. The claims do not recite any specialized hardware, unconventional configuration, or technological improvement to computer functionality. Instead, the claims merely apply the abstract idea using generic computing components.

Applicant's additional arguments with respect to claims 1-30 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KEVIN KY, whose telephone number is (571) 272-7648. The examiner can normally be reached Monday-Friday, 9AM-5PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Vincent Rudolph, can be reached at 571-272-8243. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KEVIN KY/
Primary Examiner, Art Unit 2671
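For readers untangling the §103 mapping above, here is a minimal, hypothetical sketch of the claim 1 pipeline as the rejection characterizes it: Price is cited for speech-driven mask generation and refinement, and Ayush for feeding the refined mask into an image generation network. Every identifier below is an illustrative placeholder; none of this is code from Price, Ayush, or the application itself.

# Hypothetical sketch of the claim 1 pipeline as the rejection maps it:
# Price is cited for speech-driven mask generation and refinement,
# Ayush for mask-conditioned image generation (a U-Net-style network).
# Every name here is an illustrative placeholder, not code from either
# reference or from the application.

from dataclasses import dataclass

@dataclass
class SegmentationMask:
    regions: dict[str, tuple[int, int, int, int]]    # label -> (x, y, w, h)

def generate_mask(speech: str) -> SegmentationMask:
    """First operation: derive a mask from first speech information
    (cf. Price ¶27, ¶74-79: language-based segmentation, e.g. PhraseCut)."""
    label = speech.split()[-1]                       # toy "speech parser"
    return SegmentationMask(regions={label: (0, 0, 32, 32)})

def refine_mask(mask: SegmentationMask, speech: str) -> SegmentationMask:
    """Second operation: modify the first mask using second speech
    information (cf. Price ¶80: iterative refinement of a displayed mask)."""
    updated = dict(mask.regions)
    updated[speech.split()[-1]] = (32, 32, 32, 32)   # toy refinement
    return SegmentationMask(regions=updated)

def generate_image(mask: SegmentationMask) -> list[list[str]]:
    """Third operation: render an image conditioned on the refined mask
    (cf. Ayush ¶97-99: segmentation mask input to a generation network)."""
    image = [["." for _ in range(64)] for _ in range(64)]
    for label, (x, y, w, h) in mask.regions.items():
        for row in range(y, y + h):
            for col in range(x, x + w):
                image[row][col] = label[0]           # mark region pixels
    return image

first = generate_mask("draw a tree")
second = refine_mask(first, "now add a house")
img = generate_image(second)
print(sorted(second.regions))                        # ['house', 'tree']

The only point of the sketch is the data flow the rejection relies on: first speech to a first mask, second speech refining it into a second mask, and the second mask conditioning the generated image.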

Prosecution Timeline

Oct 01, 2020 — Application Filed
Jul 28, 2022 — Non-Final Rejection (§101, §103)
Feb 03, 2023 — Response Filed
Feb 24, 2023 — Final Rejection (§101, §103)
Jun 23, 2023 — Applicant Interview (Telephonic)
Jun 23, 2023 — Examiner Interview Summary
Sep 05, 2023 — Notice of Allowance
Dec 05, 2023 — Response after Non-Final Action
Dec 11, 2023 — Response after Non-Final Action
Dec 14, 2023 — Response after Non-Final Action
Jan 02, 2024 — Response after Non-Final Action
Jan 03, 2024 — Response after Non-Final Action
Mar 06, 2024 — Non-Final Rejection (§101, §103)
Sep 13, 2024 — Response Filed
Oct 03, 2024 — Final Rejection (§101, §103)
Apr 08, 2025 — Request for Continued Examination
Apr 09, 2025 — Response after Non-Final Action
Apr 26, 2025 — Non-Final Rejection (§101, §103)
Jun 16, 2025 — Examiner Interview Summary
Jun 16, 2025 — Applicant Interview (Telephonic)
Sep 02, 2025 — Response Filed
Sep 12, 2025 — Final Rejection (§101, §103)
Nov 24, 2025 — Interview Requested
Jan 20, 2026 — Request for Continued Examination
Jan 27, 2026 — Response after Non-Final Action
Feb 27, 2026 — Non-Final Rejection (§101, §103) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12525104 — SURVEILLANCE SYSTEM AND SURVEILLANCE DEVICE
Granted Jan 13, 2026 (2y 5m to grant)
Patent 12492533 — SYSTEM AND METHOD OF CONTROLLING CONSTRUCTION MACHINERY
Granted Dec 09, 2025 (2y 5m to grant)
Patent 12430871 — OBJECT ASSOCIATION METHOD AND APPARATUS AND ELECTRONIC DEVICE
Granted Sep 30, 2025 (2y 5m to grant)
Patent 12333853 — FACE PARSING METHOD AND RELATED DEVICES
Granted Jun 17, 2025 (2y 5m to grant)
Patent 12321856 — METHOD, COMPUTER PROGRAM AND DEVICE FOR EVALUATING THE ROBUSTNESS OF A NEURAL NETWORK AGAINST IMAGE DISTURBANCES
Granted Jun 03, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 7-8
Grant Probability: 44%
With Interview: 86% (+42.0%)
Median Time to Grant: 5y 1m
PTA Risk: High
Based on 260 resolved cases by this examiner. Grant probability derived from career allow rate.
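The "High" PTA flag lines up with the timeline above: the application crossed the three-year B-delay mark in October 2023, and the first Request for Continued Examination was not filed until April 2025. A simplified sketch of that arithmetic under 35 U.S.C. 154(b)(1)(B) follows; it ignores A-delays, applicant-delay offsets, and the post-allowance treatment from Novartis v. Lee, so treat it as a rough bound rather than a PTA calculation.

# Rough B-delay estimate under 35 U.S.C. 154(b)(1)(B): pendency beyond
# three years accrues term adjustment, but time consumed by continued
# examination does not count. Simplified model -- A-delays and
# applicant delay are ignored, so this is a sketch, not a calculation.

from datetime import date

filed = date(2020, 10, 1)
first_rce = date(2025, 4, 8)                         # from the timeline above
b_clock_start = filed.replace(year=filed.year + 3)   # Oct 01, 2023

accrued = (first_rce - b_clock_start).days           # B-delay before the RCE
print(f"B-delay accrued before first RCE: ~{accrued} days")   # ~555 days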

Free tier: 3 strategy analyses per month