Prosecution Insights
Last updated: April 19, 2026
Application No. 17/500,338

IMAGE SEGMENTATION USING A NEURAL NETWORK TRANSLATION MODEL

Status: Non-Final OA (§103, §112)
Filed: Oct 13, 2021
Examiner: MOTSINGER, SEAN T
Art Unit: 2673
Tech Center: 2600 — Communications
Assignee: Nvidia Corporation
OA Round: 6 (Non-Final)

Grant Probability: 78% (Favorable)
Expected OA Rounds: 6-7
Expected Time to Grant: 2y 10m
Grant Probability with Interview: 90%

Examiner Intelligence

Career Allow Rate: 78% — above average (530 granted / 679 resolved; +16.1% vs TC avg)
Interview Lift: +11.4% (moderate) across resolved cases with an interview
Avg Prosecution: 2y 10m (typical timeline); 28 applications currently pending
Career History: 707 total applications across all art units

Statute-Specific Performance

§101: 13.1% (-26.9% vs TC avg)
§103: 41.5% (+1.5% vs TC avg)
§102: 18.8% (-21.2% vs TC avg)
§112: 17.8% (-22.2% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 679 resolved cases
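As a quick sanity check, the headline figures above are internally consistent; the career allow rate and the implied Tech Center average can be recomputed from the reported counts (plain Python; every input value is copied from this report, nothing new is introduced):

```python
# Values reported above: 530 granted out of 679 resolved, and a career
# allow rate listed as +16.1 percentage points over the TC average.
granted, resolved = 530, 679

career_allow_rate = granted / resolved   # 0.7806..., shown above as 78%
delta_vs_tc = 0.161                      # "+16.1% vs TC avg"
implied_tc_avg = career_allow_rate - delta_vs_tc

print(f"career allow rate:  {career_allow_rate:.1%}")   # 78.1%
print(f"implied TC average: {implied_tc_avg:.1%}")      # 62.0%
```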

Office Action

§103 §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/23/2025 has been entered.

Response to Arguments

Applicant's arguments filed 12/23/2025 have been fully considered but they are not persuasive.

Applicant argues: Applicant respectfully submits that Karki and Yasutomi at least fail to teach or suggest claim 21. For example, Karki states in paragraph [0050] that "[t]o generate the synthetic normal image 112 from the abnormal image 106, the generator 102 modifies the abnormal image 106. It does so by removing a distinct region of the abnormal image 106 that corresponds to the abnormality. A residual part of the abnormal image 106 that has thus been removed from the abnormal image 106 forms the basis of the abnormal segmentation map 110." Karki further states that "The abnormal images 106 showing lesions are passed to the encoder 132, which provides them to the first decoder 136. The first decoder 136 creates the synthetic normal image 112 and an abnormal segmentation map 104, or lesion mask. A combiner 122 combines the synthetic normal image 112 and the lesion mask 104 to yield the reconstructed image 124."
The cited portions of Karki, however, at least fail to teach a latent space or encoding samples into a first and second region of a latent space, let alone "one or more outputs from a first decoder that identifies and obtains one or more samples encoded into a first region of a latent space that corresponds to one or more features classified as common relative to other training data and one or more outputs from a second decoder that identifies and obtains one or more samples encoded into a second region of the latent space corresponding to one or more features classified as uncommon relative to other training data."

The examiner disagrees. The examiner notes that the term "latent space" is broad enough to include the output of an intermediate step of a neural network; the output of a neural network encoder will read on a latent space. The examiner notes that Karki features an encoder that encodes the image into a latent space. See page 3, last paragraph, through page 4, first paragraph, and figure 2: "The decoder may take the lower resolution output of the encoder, and generate the synthetic image at the resolution of the input to the encoder. Determining the values of the parameters may be performed by iteratively updating the values in a plurality of iterations. At least a first abnormal image is processed using the first image analysis component to determine (1) a synthetic image from the first abnormal image and (2) lesion data for said abnormal image." The examiner notes that the output of the encoder corresponds to the "latent space." The examiner notes that this latent space is used to generate both the image containing uncommon features (i.e., the lesion mask) and the image containing common features (the synthetic image). Therefore there must be some portion (possibly all) of the latent space that corresponds to each of the common and uncommon features. These regions are decoded by the decoder into the synthetic image and the lesion data.
The decoder could not function if there did not exist a region of the latent space containing the features necessary for recreating each of the synthetic image and the lesion mask.

Applicant argues: Yasutomi fails to remedy the deficiencies of Karki. For example, paragraphs [0029]-[0030] state that "[t]he input unit 131 inputs an output from the encoder to which an input image is input to the shade decoder and the subject decoder. Using the combining function, the combining unit 132 synthesizes a shade image that is an output from the shade decoder and a subject image that is an output from the subject decoder. The shade image is an example of the first image. The subject image is an example of the second image." and "The learning unit 133 executes learning for the encoder, the shade decoder and the subject decoder, based on a reconstruction error, a first likelihood function and a second likelihood function. The reconstruction error is an error between the input image and an output image obtained using the combining function to synthesize the shade image that is an output of the shade decoder and the subject image that is an output of the subject decoder. The first likelihood function is a likelihood function for the shade image relating to shades in ultrasound images. The second likelihood function is a likelihood function for the subject image relating to subjects in ultrasound images."
However, similar to Karki, the cited portions of Yasutomi at least fail to teach a latent space or encoding samples into a first and second region of a latent space, let alone "one or more outputs from a first decoder that identifies and obtains one or more samples encoded into a first region of a latent space that corresponds to one or more features classified as common relative to other training data and one or more outputs from a second decoder that identifies and obtains one or more samples encoded into a second region of the latent space corresponding to one or more features classified as uncommon relative to other training data."

The examiner does not agree with this interpretation of Yasutomi; however, Yasutomi is only relied upon to teach separate decoders. The examiner primarily relies upon Karki to disclose the argued features with respect to latent spaces.

Applicant argues: Furthermore, it would not have been obvious to modify Karki with Yasutomi as proposed in the Office Action at least because it would render Karki inoperable for its intended purpose. MPEP 2143.01(V). For example, incorporating the shade decoder and the subject decoder of Yasutomi into the generator-discriminator architecture of Karki would destroy Karki's "generator 102 [that] modifies the abnormal image 106. ... by removing a distinct region of the abnormal image 106 that corresponds to the abnormality. A residual part of the abnormal image 106 that has thus been removed from the abnormal image 106 forms the basis of the abnormal segmentation map 110." A person of ordinary skill in the art would not have reasonably expected modifying Karki with Yasutomi to yield predictable results, and the changes would instead have disrupted the core operation of Karki's system rather than improved or enhanced it.

The examiner disagrees. Yasutomi is only relied upon to teach separate decoders; the examiner did not suggest incorporating the shade decoder into Karki.
Applicant's arguments pertain to a combination not suggested or used by the examiner in the rejection. One of ordinary skill in the art could have easily split the decoder of Karki into two decoders to generate the two outputs without departing from the intended purpose of Karki. Applicant's arguments with respect to the dependent claims rely on the arguments to the independent claims and are unpersuasive because the arguments with respect to the independent claims were unpersuasive.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 21-50 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Re claim 21: regarding the language "and obtains one or more samples encoded into a first region of a latent space" and "obtains one or more samples encoded into a second region of the latent space," it is unclear whether these samples refer back to the "one or more labeled samples" or not. The samples language is confusing in the context of this claim, as the first image appears to be different than the samples. Art has been applied to the best of the examiner's ability to the unclear claims.
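To make the disputed claim language concrete, the architecture both the §112 and §103 discussions turn on can be sketched in a few lines. The following is a hypothetical NumPy toy, not either reference's actual trained network: under the examiner's reading, the encoder's downsampled output stands in for the claimed "latent space," and two placeholder decoder paths stand in for the first and second decoders, one recovering the "common" content (synthetic normal image) and one the "uncommon" content (lesion mask). The 256x256 to 16x16 sizes follow the example quoted from Karki; everything else is illustrative.

```python
import numpy as np

def encode(image: np.ndarray) -> np.ndarray:
    """Average-pool a 256x256 image down to a 16x16 grid; this low-resolution
    encoder output is what the rejection treats as the 'latent space'."""
    return image.reshape(16, 16, 16, 16).mean(axis=(1, 3))

def decode_common(latent: np.ndarray) -> np.ndarray:
    """First decoder path (placeholder): upsample the latent back to 256x256,
    standing in for the synthetic normal image of 'common' features."""
    return np.repeat(np.repeat(latent, 16, axis=0), 16, axis=1)

def decode_uncommon(latent: np.ndarray) -> np.ndarray:
    """Second decoder path (placeholder): threshold the upsampled latent,
    standing in for the lesion mask of 'uncommon' features."""
    return decode_common(latent) > 0.5

rng = np.random.default_rng(0)
abnormal_image = rng.random((256, 256))   # stand-in for an abnormal input image

latent = encode(abnormal_image)           # the disputed "latent space"
synthetic_normal = decode_common(latent)  # common-feature output
lesion_mask = decode_uncommon(latent)     # uncommon-feature output

print(latent.shape, synthetic_normal.shape, lesion_mask.shape)
# (16, 16) (256, 256) (256, 256)
```

The sketch shows why the examiner's position treats the first/second "regions" as possibly overlapping: both decoder paths read from the same 16x16 encoder output.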
Re claims 22-26: these claims depend from claim 21.

Re claim 27: regarding the language "and obtains one or more samples encoded into a first region of a latent space" and "obtains one or more samples encoded into a second region of the latent space," it is unclear whether these samples refer back to the "one or more labeled samples" or not. The samples language is confusing in the context of this claim, as the first image appears to be different than the samples. Art has been applied to the best of the examiner's ability to the unclear claims.

Re claims 28-32: these claims depend from claim 27.

Re claim 33: regarding the language "and obtains one or more samples encoded into a first region of a latent space" and "obtains one or more samples encoded into a second region of the latent space," it is unclear whether these samples refer back to the "one or more labeled samples" or not. The samples language is confusing in the context of this claim, as the first image appears to be different than the samples. Art has been applied to the best of the examiner's ability to the unclear claims.

Re claims 34-38: these claims depend from claim 33.

Re claim 39: regarding the language "and obtains one or more samples encoded into a first region of a latent space" and "obtains one or more samples encoded into a second region of the latent space," it is unclear whether these samples refer back to the "one or more labeled samples" or not. The samples language is confusing in the context of this claim, as the first image appears to be different than the samples. Art has been applied to the best of the examiner's ability to the unclear claims.

Re claims 40-44: these claims depend from claim 39.

Re claim 45: regarding the language "and obtains one or more samples encoded into a first region of a latent space" and "obtains one or more samples encoded into a second region of the latent space," it is unclear whether these samples refer back to the "one or more labeled samples" or not.
The samples language is confusing in the context of this claim, as the first image appears to be different than the samples. Art has been applied to the best of the examiner's ability to the unclear claims.

Re claims 46-50: these claims depend from claim 45.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 21, 23, 25-27, 30-33, 35-37, 39, 41, 42, 45 and 50 are rejected under 35 U.S.C.
103 as being unpatentable over Karki (US 2022/0254022) in view of Yasutomi (US 2020/0226796).

Re claim 21: Karki discloses (note: the provisional application is incorporated by reference; the examiner is relying on the disclosure of the provisional because only what is supported by the provisional is prior art) one or more processors comprising: circuitry (see the last paragraph of the provisional; note that the system is implemented by a processor and software) to train (see the first paragraph of the detailed description of the provisional; note that the system is trained to find a mask of a lesion) a neural network to generate a segmentation mask (see figure 2; note that a lesion mask is generated; see also the detailed description, first paragraph) corresponding to a feature depicted in one or more images (see figure 2; note that a lesion mask is generated; see also the detailed description, first paragraph) using a first dataset that includes one or more labeled samples corresponding to a presence of the feature and a second dataset that includes one or more labeled samples corresponding to an absence of the feature (see page 3, second paragraph: "The image retriever component receives the Electronic Medical Record (EMR) information based on the disease code, body anatomy and patient id and requests the Picture Archiving and Communication System (PACS) for the patient's image data. The retrieved patient data along with EMR data is used to designate if the patient data is normal or diseased. With this classification, two datasets are created, and the auto-annotation system is trained.
" Note that two datasets are created: one which is normal, corresponding to the second dataset, i.e., labeled as not containing the feature, and one which is diseased, corresponding to the first dataset, i.e., the one labeled as diseased; see also page 6, first paragraph), and to cause the neural network to generate a second image based on a first image, wherein the second image is generated to remove the feature as depicted in the first image or to add the feature that was absent in the first image (see figure 2; note that the fake normal image is a real abnormal image with the lesion removed; see also the provisional, first paragraph), wherein the segmentation mask is to be generated based, at least in part, on the neural network decoding the first image and the second image to generate one or more outputs from a decoder and obtains one or more samples encoded into a first region of a latent space that corresponds to one or more features classified as common relative to other training data and one or more outputs from a decoder that identifies and obtains one or more samples encoded into a second region of the latent space corresponding to one or more features classified as uncommon relative to other training data (see page 2, last paragraph, and page 3, first paragraph; note that the decoder generates a first output with the anomaly removed (i.e., common features) and a second output with only abnormal regions (i.e., uncommon features); see also figure 2; see also page 3, last paragraph, through page 4, first paragraph: "The decoder may take the lower resolution output of the encoder, and generate the synthetic image at the resolution of the input to the encoder. Determining the values of the parameters may be performed by iteratively updating the values in a plurality of iterations.
At least a first abnormal image is processed using the first image analysis component to determine (1) a synthetic image from the first abnormal image and (2) lesion data for said abnormal image."). The examiner notes that the output of the encoder corresponds to the "latent space"; the examiner also notes that the elements of the latent space the decoder uses to create the synthetic image (possibly the whole latent space) correspond to the first region, and the elements of the encoder output used to create the lesion mask (possibly the whole latent space) correspond to the second region. These elements are used by the decoder to create the two images. The decoder could not function if there did not exist portions of the latent space containing the features necessary for recreating each of the synthetic image and the lesion mask.

Karki does not clearly disclose wherein the one or more neural networks include a first decoder and a second decoder (see figure 2; note that a single decoder outputs both a lesion mask (first feature) and a brain image (second feature)). Yasutomi discloses wherein the one or more neural networks include a first decoder and a second decoder (see paragraph 22 and figure 2). Yasutomi utilizes a similar segmentation algorithm as Karki to segment an element from a subject (see figure 2). Yasutomi uses a slightly different structure: two decoders to decode each feature, as opposed to the one decoder of Karki. One of ordinary skill in the art could have easily substituted the two-decoder structure of Yasutomi for the one-decoder structure of Karki. The result of this combination would be one decoder for the lesion mask of Karki and one decoder for the brain image. The results of the combination would be the same, i.e., the decoders of Karki would still output a brain image and a lesion mask, and therefore would be predictable.
Therefore, it would have been obvious before the effective filing date of the claimed invention to combine Karki and Yasutomi to reach the aforementioned advantage.

Re claim 23: Karki discloses wherein the circuitry is to generate the segmentation mask based on one or more parameters associated with a decoder (see page 4: "Determining the values of the parameters is performed to reduce the ability of the second image analysis component to discriminate between the images of the normal set of images and the images formed by the first image analysis component and increase an ability to reconstruct images from the abnormal set from the outputs of the first image analysis component. Each image of the abnormal set is processed using the first image analysis component to generate annotated the images according to the lesion data produced by the first image analysis component with said image as input. The first image analysis component may include an encoder followed by a decoder, in which the encoder has a lower resolution (e.g., spatial resolution or number of signal values) at its output. For example, a 256x256 input image is reduced to 16x16 are the output of the encoder. The decoder may take the lower resolution output of the encoder, and generate the synthetic image at the resolution of the input to the encoder. Determining the values of the parameters may be performed by iteratively updating the values in a plurality of iterations." Note that the parameters of the decoder are determined during training). Karki does not clearly disclose wherein the one or more neural networks include a first decoder and a second decoder (see figure 2; note that a single decoder outputs both a lesion mask (first feature) and a brain image (second feature)). Yasutomi discloses wherein the one or more neural networks include a first decoder and a second decoder (see paragraph 22 and figure 2). Yasutomi utilizes a similar segmentation algorithm as Karki to segment an element from a subject (see figure 2).
Yasutomi uses a slightly different structure: two decoders to decode each feature, as opposed to the one decoder of Karki. One of ordinary skill in the art could have easily substituted the two-decoder structure of Yasutomi for the one-decoder structure of Karki. The result of this combination would be one decoder for the lesion mask of Karki and one decoder for the brain image. The results of the combination would be the same, i.e., the decoders of Karki would still output a brain image and a lesion mask, and therefore would be predictable. Therefore, it would have been obvious before the effective filing date of the claimed invention to combine Karki and Yasutomi to reach the aforementioned advantage.

Re claim 25: Karki discloses to cause the neural network to be trained by at least updating the neural network based, at least in part, on differences between the first image and the second image (see figure 2 and page 6, first and second paragraphs; note that the difference [mean square error] between the first image and a second image with the lesion added back in is used to determine a loss to train the neural network).

Re claim 26: Karki further discloses wherein the one or more neural networks include a first decoder associated with both a first feature type and a second feature type (see figure 2; note that the decoder outputs both a lesion mask (first feature) and a brain image (second feature)). Karki does not disclose wherein the one or more neural networks include a first decoder associated with a first feature type and a second decoder associated with a second feature type. Yasutomi discloses a first decoder associated with a first feature type and a second decoder associated with a second feature type (see paragraph 22 and figure 2; note that Yasutomi discloses separate decoders for decoding different features, i.e., the object and the shade). Yasutomi utilizes a similar segmentation algorithm as Karki to segment an element from a subject (see figure 2).
Yasutomi uses a slightly different structure: two decoders to decode each feature, as opposed to the one decoder of Karki. One of ordinary skill in the art could have easily substituted the two-decoder structure of Yasutomi for the one-decoder structure of Karki. The result of this combination would be one decoder for the lesion mask of Karki and one decoder for the brain image. The results of the combination would be the same, i.e., the decoders of Karki would still output a brain image and a lesion mask, and therefore would be predictable. Therefore, it would have been obvious before the effective filing date of the claimed invention to combine Karki and Yasutomi to reach the aforementioned advantage.

Re claim 27: Karki discloses (note: the provisional application is incorporated by reference; the examiner is relying on the disclosure of the provisional because only what is supported by the provisional is prior art) a system, comprising: one or more processors (see the last paragraph of the provisional; note that the system is implemented by a processor and software) to train (see the first paragraph of the detailed description of the provisional; note that the system is trained to find a mask of a lesion) a neural network to generate a segmentation mask (see figure 2; note that a lesion mask is generated; see also the detailed description, first paragraph) corresponding to a feature depicted in one or more images (see figure 2; note that a lesion mask is generated; see also the detailed description, first paragraph) using a first dataset that includes one or more labeled samples corresponding to a presence of the feature and a second dataset that includes one or more labeled samples corresponding to an absence of the feature (see page 3, second paragraph: "The image retriever component receives the Electronic Medical Record (EMR) information based on the disease code, body anatomy and patient id and requests the Picture Archiving and Communication System (PACS) for the patient's image data.
The retrieved patient data along with EMR data is used to designate if the patient data is normal or diseased. With this classification, two datasets are created, and the auto-annotation system is trained." Note that two datasets are created: one which is normal, corresponding to the second dataset, i.e., labeled as not containing the feature, and one which is diseased, corresponding to the first dataset, i.e., the one labeled as diseased; see also page 6, first paragraph), and to cause the neural network to generate a second image based on a first image, wherein the second image is generated to remove the feature as depicted in the first image or to add the feature that was absent in the first image (see figure 2; note that the fake normal image is a real abnormal image with the lesion removed; see also the provisional, first paragraph), wherein the segmentation mask is to be generated based, at least in part, on the neural network decoding the first image and the second image to generate one or more outputs from a decoder and obtains one or more samples encoded into a first region of a latent space that corresponds to one or more features classified as common relative to other training data and one or more outputs from a decoder that identifies and obtains one or more samples encoded into a second region of the latent space corresponding to one or more features classified as uncommon relative to other training data (see page 2, last paragraph, and page 3, first paragraph; note that the decoder generates a first output with the anomaly removed (i.e., common features) and a second output with only abnormal regions (i.e., uncommon features); see also figure 2; see also page 3, last paragraph, through page 4, first paragraph: "The decoder may take the lower resolution output of the encoder, and generate the synthetic image at the resolution of the input to the encoder.
Determining the values of the parameters may be performed by iteratively updating the values in a plurality of iterations. At least a first abnormal image is processed using the first image analysis component to determine (1) a synthetic image from the first abnormal image and (2) lesion data for said abnormal image."). The examiner notes that the output of the encoder corresponds to the "latent space"; the examiner also notes that the elements of the latent space the decoder uses to create the synthetic image (possibly the whole latent space) correspond to the first region, and the elements of the encoder output used to create the lesion mask (possibly the whole latent space) correspond to the second region. These elements are used by the decoder to create the two images. The decoder could not function if there did not exist a region of the latent space containing the features necessary for recreating each of the synthetic image and the lesion mask.

Karki does not clearly disclose wherein the one or more neural networks include a first decoder and a second decoder (see figure 2; note that a single decoder outputs both a lesion mask (first feature) and a brain image (second feature)). Yasutomi discloses wherein the one or more neural networks include a first decoder and a second decoder (see paragraph 22 and figure 2). Yasutomi utilizes a similar segmentation algorithm as Karki to segment an element from a subject (see figure 2). Yasutomi uses a slightly different structure: two decoders to decode each feature, as opposed to the one decoder of Karki. One of ordinary skill in the art could have easily substituted the two-decoder structure of Yasutomi for the one-decoder structure of Karki. The result of this combination would be one decoder for the lesion mask of Karki and one decoder for the brain image. The results of the combination would be the same, i.e., the decoders of Karki would still output a brain image and a lesion mask, and therefore would be predictable.
Therefore, it would have been obvious before the effective filing date of the claimed invention to combine Karki and Yasutomi to reach the aforementioned advantage.

Re claim 30: Karki discloses wherein the one or more processors are further to use one or more encoders to encode the first dataset and the second dataset into a latent space (see figure 2; note that the datasets are input into the encoders, and the output of the encoders may be considered the latent space; see page 4: "The first image analysis component may include an encoder followed by a decoder, in which the encoder has a lower resolution (e.g., spatial resolution or number of signal values) at its output. For example, a 256x256 input image is reduced to 16x16 are the output of the encoder. The decoder may take the lower resolution output of the encoder, and generate the synthetic image at the resolution of the input to the encoder.").

Re claim 31: Karki discloses wherein the processors are to generate the segmentation mask based at least in part on one or more scale parameters (see page 4: "The first image analysis component may include an encoder followed by a decoder, in which the encoder has a lower resolution (e.g., spatial resolution or number of signal values) at its output. For example, a 256x256 input image is reduced to 16x16 are the output of the encoder. The decoder may take the lower resolution output of the encoder, and generate the synthetic image at the resolution of the input to the encoder." Note that the encoder scales a 256x256 input into a 16x16 output, which could be considered a scale parameter).

Re claim 32: Karki discloses wherein the feature depicted in one or more images corresponds to a first region of a latent space (see figure 2 and page 4: "The first image analysis component may include an encoder followed by a decoder, in which the encoder has a lower resolution (e.g., spatial resolution or number of signal values) at its output.
For example, a 256x256 input image is reduced to 16x16 are the output of the encoder. The decoder may take the lower resolution output of the encoder, and generate the synthetic image at the resolution of the input to the encoder." The examiner notes that the output of the encoder corresponds to the latent space; at least some region of the latent space must correspond to each of the brain portion and the lesion, as it is used by the decoder to output the fake normal image and the lesion mask).

Re claim 33: Karki discloses (note: the provisional application is incorporated by reference; the examiner is relying on the disclosure of the provisional because only what is supported by the provisional is prior art) a non-transitory machine-readable medium having stored thereon a set of instructions, which if performed by one or more processors, cause the one or more processors to cause a neural network to (see the last paragraph of the provisional; note that the system is implemented by a processor and software stored on a non-transitory medium) train (see the first paragraph of the detailed description of the provisional; note that the system is trained to find a mask of a lesion) a neural network to generate a segmentation mask (see figure 2; note that a lesion mask is generated; see also the detailed description, first paragraph) corresponding to a feature depicted in one or more images (see figure 2; note that a lesion mask is generated; see also the detailed description, first paragraph) using a first dataset that includes one or more labeled samples corresponding to a presence of the feature and a second dataset that includes one or more labeled samples corresponding to an absence of the feature (see page 3, second paragraph: "The image retriever component receives the Electronic Medical Record (EMR) information based on the disease code, body anatomy and patient id and requests the Picture Archiving and Communication System (PACS) for the patient's image data.
The retrieved patient data along with EMR data is used to designate if the patient data is normal or diseased. With this classification, two datasets are created, and the auto-annotation system is trained." Note that two datasets are created: one which is normal, corresponding to the second dataset, i.e., labeled as not containing the feature, and one which is diseased, corresponding to the first dataset, i.e., the one labeled as diseased; see also page 6, first paragraph), and to cause the neural network to generate a second image based on a first image, wherein the second image is generated to remove the feature as depicted in the first image or to add the feature that was absent in the first image (see figure 2; note that the fake normal image is a real abnormal image with the lesion removed; see also the provisional, first paragraph), wherein the segmentation mask is to be generated based, at least in part, on the neural network decoding the first image and the second image to generate one or more outputs from a decoder and obtains one or more samples encoded into a first region of a latent space that corresponds to one or more features classified as common relative to other training data and one or more outputs from a decoder that identifies and obtains one or more samples encoded into a second region of the latent space corresponding to one or more features classified as uncommon relative to other training data (see page 2, last paragraph, and page 3, first paragraph; note that the decoder generates a first output with the anomaly removed (i.e., common features) and a second output with only abnormal regions (i.e., uncommon features); see also figure 2; see also page 3, last paragraph, through page 4, first paragraph: "The decoder may take the lower resolution output of the encoder, and generate the synthetic image at the resolution of the input to the encoder.
Determining the values of the parameters may be performed by iteratively updating the values in a plurality of iterations. At least a first abnormal image is processed using the first image analysis component to determine (1) a synthetic image from the first abnormal image and (2) lesion data for said abnormal image.).” The examiner note that the output of the encoder corresponds to the “latent space” the examiner also notes that the elements of the latent space the decoder used to create the synthetic image (possibly the whole latent space) corresponds to the first region and the elements of the encoder output used to create the lesion mask (possibly the whole latent space) correspond to the second region. These elements are used by the decoder to create the two images. The decoder could not function if there did not exist a region of the latent space containing the features necessary for recreating each of the synthetic image and the lesion mask Karki does not clearly disclose wherein the one or more neural networks include first decoder and a second decoder (see figure 2 note that decoder outputs both a lesion mask(first feature) and a brain image (second feature)). Yasutomi discloses herein the one or more neural networks include first decoder and a second decoder (see paragraph 22 and figure2 ). Yasutomi utilizes a similar segmentation algorithm as Karki to segment an element from a subject see figure 2). Yasutomi uses a slightly different structure two decoder to decode each feature as opposed to one decoder of Karki. One of ordinary skill in the art could have easily substituted the two decoder structure of Yasutomi for the one decoder structure of Karki. The result of this combination would be on decoder for the lesion mask of Karki and one decoder for the brain image. The results of the combination would be the same i.e the decoders of Karki would still output a brain image and a lesion mask and therefore would be predictable. 
Therefore, it would have been obvious before the effective filing date of the claimed invention to combine Karki and Yasutomi to reach the aforementioned advantage. Re claim 35, Karki discloses wherein the neural network further comprises one or more encoders that encode the one or more features classified as common relative to other training data to the first region and the one or more features classified as uncommon relative to other training data to the second region (see figure 2 and page 4: “The first image analysis component may include an encoder followed by a decoder, in which the encoder has a lower resolution (e.g., spatial resolution or number of signal values) at its output. For example, a 256x256 input image is reduced to 16x16 are the output of the encoder. The decoder may take the lower resolution output of the encoder, and generate the synthetic image at the resolution of the input to the encoder.” The examiner notes that the output of the encoder corresponds to the latent space; at least some region (possibly all) of the latent space must correspond to each of the brain portion and the lesion, as it is used by the decoder to output the fake normal image and the lesion mask). Re claim 36, Karki discloses cause the one or more processors to train the neural network by at least using one or more objective functions (see page 6, first paragraph; note that the network is trained by calculating losses such as an MSE [corresponding to the objective function]). Re claim 37, Karki discloses generate the segmentation mask by at least identifying a location of the feature in the one or more images (see figure 2; note that a lesion mask is generated which shows the location of the lesion; see also the first paragraph of the detailed description).
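To make the mapping above concrete for the practitioner: the cited passages describe an encoder that compresses a 256x256 image to a 16x16 latent representation, from which decoding produces both a synthetic normal image and a lesion mask, trained with a loss such as MSE. The following is a minimal toy sketch of that data flow, using average pooling and nearest-neighbour upsampling as stand-ins for the real layers; it is not the actual Karki or Yasutomi network, and all function names are hypothetical.

```python
import numpy as np

def encode(image, factor=16):
    # Toy encoder: average pooling reduces a 256x256 image to a 16x16 "latent space".
    h, w = image.shape
    return image.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def decode(latent, factor=16):
    # Toy decoder: nearest-neighbour upsampling restores the input resolution.
    return np.repeat(np.repeat(latent, factor, axis=0), factor, axis=1)

abnormal = np.random.rand(256, 256)       # stand-in for a real abnormal image
latent = encode(abnormal)                 # 16x16 latent representation
synthetic_normal = decode(latent)         # first branch: fake normal image
lesion_mask = decode(latent) > 0.5        # second branch: binary lesion mask
mse = np.mean((synthetic_normal - abnormal) ** 2)  # MSE-style objective
```

A two-decoder arrangement in the sense of Yasutomi would simply give each output branch its own decode function operating on the shared latent.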
Re claim 39, Karki discloses (note that the provisional application is incorporated by reference; the examiner is relying on the disclosure of the provisional because only what is supported by the provisional is prior art) one or more processors, comprising: circuitry to cause (see last paragraph of the provisional; note that the system is implemented by a processor and software) one or more neural networks to be used to generate a segmentation mask (see figure 2; note that a lesion mask is generated; see also the first paragraph of the detailed description) corresponding to a feature depicted in one or more images (see figure 2; note that a lesion mask is generated; see also the first paragraph of the detailed description) using a first dataset that includes one or more labeled samples corresponding to a presence of the feature and a second dataset that includes one or more labeled samples corresponding to an absence of the feature (see page 3, second paragraph: “The image retriever component receives the Electronic Medical Record (EMR) information based on the disease code, body anatomy and patient id and requests the Picture Archiving and Communication System (PACS) for the patient's image data. The retrieved patient data along with EMR data is used to designate if the patient data is normal or diseased. With this classification, two datasets are created, and the auto-annotation system is trained.” Note that two data sets are created: one of normal data, corresponding to the second data set (i.e., labeled as not containing the feature), and one of diseased data, corresponding to the first data set (i.e., labeled as containing the feature); see also page 6, first paragraph), and to cause the neural network to generate a second image based on a first image, wherein the second image is generated to remove the feature as depicted in the first image or to add the feature that was absent in the first image (see figure 2; note that the fake normal image is a real abnormal image with the lesion removed; see also the first paragraph of the provisional), wherein the segmentation mask is to be generated based, at least in part, on the neural network decoding the first image and the second image to generate one or more outputs from a decoder and obtains one or more samples encoded into a first region of a latent space that corresponds to one or more features classified as common relative to other training data and one or more outputs from a decoder that identifies and obtains one or more samples encoded into a second region of the latent space corresponding to one or more features classified as uncommon relative to other training data (see page 2, last paragraph, and page 3, first paragraph; note that the decoder generates a first output with the anomaly-removed images (i.e., common features) and a second output with only abnormal regions (i.e., uncommon features); see also figure 2 and page 3, last paragraph, through page 4, first paragraph: “The decoder may take the lower resolution output of the encoder, and generate the synthetic image at the resolution of the input to the encoder. Determining the values of the parameters may be performed by iteratively updating the values in a plurality of iterations. At least a first abnormal image is processed using the first image analysis component to determine (1) a synthetic image from the first abnormal image and (2) lesion data for said abnormal image.”) The examiner notes that the output of the encoder corresponds to the “latent space.” The examiner also notes that the elements of the latent space the decoder uses to create the synthetic image (possibly the whole latent space) correspond to the first region, and the elements of the encoder output used to create the lesion mask (possibly the whole latent space) correspond to the second region. These elements are used by the decoder to create the two images. The decoder could not function if there did not exist a region of the latent space containing the features necessary for recreating each of the synthetic image and the lesion mask. Karki does not clearly disclose wherein the one or more neural networks include a first decoder and a second decoder (see figure 2; note that the decoder outputs both a lesion mask (first feature) and a brain image (second feature)). Yasutomi discloses wherein the one or more neural networks include a first decoder and a second decoder (see paragraph 22 and figure 2). Yasutomi utilizes a segmentation algorithm similar to that of Karki to segment an element from a subject (see figure 2). Yasutomi uses a slightly different structure, with two decoders, one to decode each feature, as opposed to the single decoder of Karki. One of ordinary skill in the art could have easily substituted the two-decoder structure of Yasutomi for the one-decoder structure of Karki. The result of this combination would be one decoder for the lesion mask of Karki and one decoder for the brain image. The results of the combination would be the same, i.e., the decoders of Karki would still output a brain image and a lesion mask, and would therefore be predictable.
Therefore, it would have been obvious before the effective filing date of the claimed invention to combine Karki and Yasutomi to reach the aforementioned advantage. Re claim 41, Karki discloses wherein the neural network includes one or more encoders (see figure 2; note that an encoder is connected to the decoder; see also page 4, second paragraph). Re claim 42, Karki discloses wherein the circuitry is to generate the segmentation mask by at least generating an indication of a location of the feature in the one or more images (see figure 2; note that a lesion mask is generated which shows the location of the lesion; see also the first paragraph of the detailed description). Re claim 45, Karki discloses (note that the provisional application is incorporated by reference; the examiner is relying on the disclosure of the provisional because only what is supported by the provisional is prior art) a method, comprising: causing one or more neural networks to be used to generate a segmentation mask (see figure 2; note that a lesion mask is generated; see also the first paragraph of the detailed description) corresponding to a feature depicted in one or more images (see figure 2; note that a lesion mask is generated; see also the first paragraph of the detailed description) using a first dataset that includes one or more labeled samples corresponding to a presence of the feature and a second dataset that includes one or more labeled samples corresponding to an absence of the feature (see page 3, second paragraph: “The image retriever component receives the Electronic Medical Record (EMR) information based on the disease code, body anatomy and patient id and requests the Picture Archiving and Communication System (PACS) for the patient's image data. The retrieved patient data along with EMR data is used to designate if the patient data is normal or diseased. With this classification, two datasets are created, and the auto-annotation system is trained.” Note that two data sets are created: one of normal data, corresponding to the second data set (i.e., labeled as not containing the feature), and one of diseased data, corresponding to the first data set (i.e., labeled as containing the feature); see also page 6, first paragraph), and to cause the neural network to generate a second image based on a first image, wherein the second image is generated to remove the feature as depicted in the first image or to add the feature that was absent in the first image (see figure 2; note that the fake normal image is a real abnormal image with the lesion removed; see also the first paragraph of the provisional), wherein the segmentation mask is to be generated based, at least in part, on the neural network decoding the first image and the second image to generate one or more outputs from a decoder and obtains one or more samples encoded into a first region of a latent space that corresponds to one or more features classified as common relative to other training data and one or more outputs from a decoder that identifies and obtains one or more samples encoded into a second region of the latent space corresponding to one or more features classified as uncommon relative to other training data (see page 2, last paragraph, and page 3, first paragraph; note that the decoder generates a first output with the anomaly-removed images (i.e., common features) and a second output with only abnormal regions (i.e., uncommon features); see also figure 2 and page 3, last paragraph, through page 4, first paragraph: “The decoder may take the lower resolution output of the encoder, and generate the synthetic image at the resolution of the input to the encoder. Determining the values of the parameters may be performed by iteratively updating the values in a plurality of iterations. At least a first abnormal image is processed using the first image analysis component to determine (1) a synthetic image from the first abnormal image and (2) lesion data for said abnormal image.”) The examiner notes that the output of the encoder corresponds to the “latent space.” The examiner also notes that the elements of the latent space the decoder uses to create the synthetic image (possibly the whole latent space) correspond to the first region, and the elements of the encoder output used to create the lesion mask (possibly the whole latent space) correspond to the second region. These elements are used by the decoder to create the two images. The decoder could not function if there did not exist a region of the latent space containing the features necessary for recreating each of the synthetic image and the lesion mask. Karki does not clearly disclose wherein the one or more neural networks include a first decoder and a second decoder (see figure 2; note that the decoder outputs both a lesion mask (first feature) and a brain image (second feature)). Yasutomi discloses wherein the one or more neural networks include a first decoder and a second decoder (see paragraph 22 and figure 2). Yasutomi utilizes a segmentation algorithm similar to that of Karki to segment an element from a subject (see figure 2). Yasutomi uses a slightly different structure, with two decoders, one to decode each feature, as opposed to the single decoder of Karki. One of ordinary skill in the art could have easily substituted the two-decoder structure of Yasutomi for the one-decoder structure of Karki. The result of this combination would be one decoder for the lesion mask of Karki and one decoder for the brain image. The results of the combination would be the same, i.e., the decoders of Karki would still output a brain image and a lesion mask, and would therefore be predictable.
Therefore, it would have been obvious before the effective filing date of the claimed invention to combine Karki and Yasutomi to reach the aforementioned advantage. Re claim 50, Karki discloses wherein the one or more images comprise a medical image (see the title; note that the invention is directed to medical images). Claims 22, 29, 38, and 40 are rejected under 35 U.S.C. 103 as being unpatentable over Karki (US 2022/0254022) and Yasutomi (US 2020/0226796) in view of Hope Simpson et al. (US 2019/0336108). Re claim 22, Karki and Yasutomi disclose all the elements of claim 21. Karki and Yasutomi do not disclose wherein the one or more processors include one or more parallel processing units (PPUs). Hope Simpson discloses wherein the one or more processors include one or more parallel processing units (PPUs) (note that a neural network may be implemented by multiple processors arranged for parallel processing; see paragraph 32). The motivation to combine is to achieve parallel processing (see paragraph 32). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Karki, Yasutomi, and Hope Simpson to reach the aforementioned advantage. Re claim 29, Karki and Yasutomi disclose all the elements of claim 27. Karki and Yasutomi do not disclose wherein the one or more processors include one or more parallel processing units (PPUs). Hope Simpson discloses wherein the one or more processors include one or more parallel processing units (PPUs) (note that a neural network may be implemented by multiple processors arranged for parallel processing; see paragraph 32). The motivation to combine is to achieve parallel processing (see paragraph 32). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Karki, Yasutomi, and Hope Simpson to reach the aforementioned advantage.
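For context on the PPU limitation: Hope Simpson is cited only for the idea of distributing a network's work across multiple processing units operating in parallel. A minimal sketch of that idea, using a thread pool as a loose stand-in for parallel processing units and a hypothetical infer function in place of a real network:

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def infer(tile):
    # Hypothetical stand-in for pushing one image tile through the network
    # on one processing unit; here it just averages the tile.
    return float(tile.mean())

tiles = [np.full((16, 16), float(v)) for v in range(4)]  # four image tiles
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(infer, tiles))  # tiles handled concurrently, order preserved
```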
Re claim 38, Karki and Yasutomi disclose all the elements of claim 33. Karki and Yasutomi do not disclose wherein the one or more processors include one or more parallel processing units (PPUs). Hope Simpson discloses wherein the one or more processors include one or more parallel processing units (PPUs) (note that a neural network may be implemented by multiple processors arranged for parallel processing; see paragraph 32). The motivation to combine is to achieve parallel processing (see paragraph 32). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Karki, Yasutomi, and Hope Simpson to reach the aforementioned advantage. Re claim 40, Karki and Yasutomi disclose all the elements of claim 39. Karki and Yasutomi do not disclose wherein the one or more processors include one or more parallel processing units (PPUs). Hope Simpson discloses wherein the one or more processors include one or more parallel processing units (PPUs) (note that a neural network may be implemented by multiple processors arranged for parallel processing; see paragraph 32). The motivation to combine is to achieve parallel processing (see paragraph 32). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Karki, Yasutomi, and Hope Simpson to reach the aforementioned advantage. Claims 24, 28, 34, 43, 46, and 48 are rejected under 35 U.S.C. 103 as being unpatentable over Karki (US 2022/0254022) and Yasutomi (US 2020/0226796) in view of Ceccaldi (US 2019/0046068).
Re claim 24, Karki discloses wherein the neural networks include a first encoder coupled to a first decoder (see figure 2; note that the encoder is connected to the decoder). Yasutomi discloses a first decoder and a second decoder (see figure 2 and paragraph 22; note that different reconstructions use different decoders). They do not expressly disclose wherein the neural networks include the encoder and decoders connected via a set of long-skip connections. Ceccaldi discloses wherein the one or more neural networks include the encoder and decoders connected via a set of long-skip connections (see paragraph 38: "In an embodiment, the encoder and decoder are symmetrical, using the same number of pooling (downsampling/upsampling) layers. The symmetrical structures provide for connections between encoding and decoding stages referred to as skip connections. The skip connections help against vanishing gradients and help maintain the high frequency components of the images."). Ceccaldi is in a similar art of segmentation using an encoder-decoder architecture. One of ordinary skill in the art could have modified the encoders and decoders of Karki and Yasutomi to include the long-skip connections as described in Ceccaldi. The motivation to combine is to "maintain the high frequency components of the images" (see paragraph 38). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Karki, Yasutomi, and Ceccaldi to reach the aforementioned advantage. Re claim 28, Karki discloses wherein the neural network includes a first encoder coupled to a first decoder (see figure 2; note that the encoder is connected to the decoder). Yasutomi discloses a first decoder and a second decoder (see figure 2 and paragraph 22; note that different reconstructions use different decoders). They do not expressly disclose wherein the neural networks include the encoder and decoders connected via a set of long-skip connections.
Ceccaldi discloses wherein the one or more neural networks include the encoder and decoders connected via a set of long-skip connections (see paragraph 38: "In an embodiment, the encoder and decoder are symmetrical, using the same number of pooling (downsampling/upsampling) layers. The symmetrical structures provide for connections between encoding and decoding stages referred to as skip connections. The skip connections help against vanishing gradients and help maintain the high frequency components of the images."). Ceccaldi is in a similar art of segmentation using an encoder-decoder architecture. One of ordinary skill in the art could have modified the encoders and decoders of Karki and Yasutomi to include the long-skip connections as described in Ceccaldi. The motivation to combine is to "maintain the high frequency components of the images" (see paragraph 38). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Karki, Yasutomi, and Ceccaldi to reach the aforementioned advantage. Re claim 34, Karki discloses wherein the neural networks include a first encoder coupled to a first decoder (see figure 2; note that the encoder is connected to the decoder). Yasutomi discloses a first decoder and a second decoder (see figure 2 and paragraph 22; note that different reconstructions use different decoders). They do not expressly disclose wherein the neural networks include the encoder and decoders connected via a set of long-skip connections. Ceccaldi discloses wherein the neural network includes the encoder and decoders connected via a set of long-skip connections (see paragraph 38: "In an embodiment, the encoder and decoder are symmetrical, using the same number of pooling (downsampling/upsampling) layers. The symmetrical structures provide for connections between encoding and decoding stages referred to as skip connections. The skip connections help against vanishing gradients and help maintain the high frequency components of the images."). Ceccaldi is in a similar art of segmentation using an encoder-decoder architecture. One of ordinary skill in the art could have modified the encoders and decoders of Karki and Yasutomi to include the long-skip connections as described in Ceccaldi. The motivation to combine is to "maintain the high frequency components of the images" (see paragraph 38). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Karki, Yasutomi, and Ceccaldi to reach the aforementioned advantage. Re claim 43, Karki and Yasutomi disclose the features of claim 39. They do not expressly disclose wherein the neural networks include the one or more long-skip connections. Ceccaldi discloses wherein the network includes the encoder and decoders connected via a set of long-skip connections (see paragraph 38: "In an embodiment, the encoder and decoder are symmetrical, using the same number of pooling (downsampling/upsampling) layers. The symmetrical structures provide for connections between encoding and decoding stages referred to as skip connections. The skip connections help against vanishing gradients and help maintain the high frequency components of the images."). Ceccaldi is in a similar art of segmentation using an encoder-decoder architecture. One of ordinary skill in the art could have modified the encoders and decoders of Karki and Yasutomi to include the long-skip connections as described in Ceccaldi. The motivation to combine is to "maintain the high frequency components of the images" (see paragraph 38). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Karki, Yasutomi, and Ceccaldi to reach the aforementioned advantage. Re claim 46, Karki and Yasutomi disclose the features of claim 45.
They do not expressly disclose wherein the neural networks include the one or more long-skip connections. Ceccaldi discloses wherein the network includes the encoder and decoders connected via a set of long-skip connections (see paragraph 38: "In an embodiment, the encoder and decoder are symmetrical, using the same number of pooling (downsampling/upsampling) layers. The symmetrical structures provide for connections between encoding and decoding stages referred to as skip connections. The skip connections help against vanishing gradients and help maintain the high frequency components of the images."). Ceccaldi is in a similar art of segmentation using an encoder-decoder architecture. One of ordinary skill in the art could have modified the encoders and decoders of Karki and Yasutomi to include the long-skip connections as described in Ceccaldi. The motivation to combine is to "maintain the high frequency components of the images" (see paragraph 38). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Karki, Yasutomi, and Ceccaldi to reach the aforementioned advantage. Re claim 48, Karki discloses wherein the neural network includes a first encoder coupled to a first decoder (see figure 2; note that the encoder is connected to the decoder). Yasutomi discloses a first decoder and a second decoder (see figure 2 and paragraph 22; note that different reconstructions use different decoders). They do not expressly disclose wherein the neural networks include the encoder and decoders connected via a set of long-skip connections. Ceccaldi discloses wherein the one or more neural networks include the encoder and decoders connected via a set of long-skip connections (see paragraph 38: "In an embodiment, the encoder and decoder are symmetrical, using the same number of pooling (downsampling/upsampling) layers. The symmetrical structures provide for connections between encoding and decoding stages referred to as skip connections. The skip connections help against vanishing gradients and help maintain the high frequency components of the images."). Ceccaldi is in a similar art of segmentation using an encoder-decoder architecture. One of ordinary skill in the art could have modified the encoders and decoders of Karki and Yasutomi to include the long-skip connections as described in Ceccaldi. The motivation to combine is to "maintain the high frequency components of the images" (see paragraph 38). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Karki, Yasutomi, and Ceccaldi to reach the aforementioned advantage. Claim 47 is rejected under 35 U.S.C. 103 as being unpatentable over Karki (US 2022/0254022) in view of Wang (US 2019/0371433) and Yasutomi (US 2020/0226796). Re claim 47, Karki discloses using one or more encoders to generate the segmentation mask (see figure 2; note that an encoder is used to create the lesion mask). Karki does not expressly disclose using one or more multi-layer perceptrons (MLPs). Wang, in a similar field of segmentation, discloses using one or more multi-layer perceptrons (MLPs) to implement an encoder. One of ordinary skill in the art could have used an MLP-based encoder as described in Wang as the encoder in Karki; the MLP would still function as an encoder in a similar manner and yield similar results to the encoder of Karki, and would therefore be predictable. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Karki, Yasutomi, and Wang. Claim 44 is rejected under 35 U.S.C. 103 as being unpatentable over Karki (US 2022/0254022) in view of Yasutomi (US 2020/0226796) and Andermatt et al., "Pathology Segmentation using Distributional Differences to Images of Healthy Origin".
Re claim 44, Karki and Yasutomi disclose all the elements of claim 39. They do not expressly disclose generating the segmentation mask based at least in part on one or more shift parameters. Andermatt discloses generating the segmentation mask based at least in part on one or more shift parameters (see section B; note that the network is trained with rotations sampled from -0.01 to 0.01; these rotations will result in pixels being shifted and are therefore shift parameters; note also that the trained network is used to generate segmentations; see the abstract and figure 2). The motivation to combine is to augment the training data (see section B), i.e., to create additional training data. One of ordinary skill in the art could have easily used the rotation parameters to shift the training data set of Karki to augment the training data. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Karki and Yasutomi with Andermatt. Claim 49 is rejected under 35 U.S.C. 103 as being unpatentable over Karki (US 2022/0254022) in view of Yasutomi (US 2020/0226796) and Ioffe et al., "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift," arXiv:1502.03167, 2015. Re claim 49, Karki and Yasutomi disclose all the features of claim 45. They do not expressly disclose generating the segmentation mask based at least in part on one or more normalization parameters. Ioffe discloses generating the segmentation mask based at least in part on one or more normalization parameters (see page 8: "To enable stochastic optimization methods commonly used in deep network training, we perform the normalization for each mini-batch, and backpropagate the gradients through the normalization parameters."). The motivation to combine is that "Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin." One of ordinary skill in the art could have added batch normalization to the network of Karki to accelerate training.
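For context on the normalization parameters of claim 49: the quoted Ioffe passage normalizes each mini-batch and learns the normalization parameters (a scale gamma and a shift beta). A minimal sketch of that per-batch computation, under the simplifying assumption of fixed rather than learned gamma and beta:

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    # Normalize each feature over the mini-batch, then scale and shift by
    # the "normalization parameters" gamma and beta (learned during training).
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

batch = np.random.randn(32, 8) * 5.0 + 3.0  # mini-batch of 32 feature vectors
out = batch_norm(batch)
# each feature column now has approximately zero mean and unit variance
```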
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Karki, Yasutomi, and Ioffe. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to SEAN T MOTSINGER, whose telephone number is (571) 270-1237. The examiner can normally be reached 9AM-5PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chineyere Wills-Burns, can be reached at (571) 272-9752. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /SEAN T MOTSINGER/ Primary Examiner, Art Unit 2673

Prosecution Timeline

Oct 13, 2021
Application Filed
Feb 25, 2023
Non-Final Rejection — §103, §112
Jun 21, 2023
Interview Requested
Jun 29, 2023
Applicant Interview (Telephonic)
Jul 01, 2023
Examiner Interview Summary
Sep 05, 2023
Response Filed
Dec 03, 2023
Non-Final Rejection — §103, §112
Mar 19, 2024
Interview Requested
Apr 05, 2024
Applicant Interview (Telephonic)
Apr 12, 2024
Examiner Interview Summary
Jun 10, 2024
Response Filed
Sep 30, 2024
Final Rejection — §103, §112
Dec 13, 2024
Interview Requested
Dec 19, 2024
Applicant Interview (Telephonic)
Dec 23, 2024
Examiner Interview Summary
Mar 03, 2025
Request for Continued Examination
Mar 06, 2025
Response after Non-Final Action
Mar 21, 2025
Non-Final Rejection — §103, §112
Apr 04, 2025
Interview Requested
Apr 22, 2025
Applicant Interview (Telephonic)
Apr 24, 2025
Examiner Interview Summary
Jun 27, 2025
Response Filed
Sep 30, 2025
Final Rejection — §103, §112
Oct 14, 2025
Applicant Interview (Telephonic)
Oct 15, 2025
Examiner Interview Summary
Dec 23, 2025
Request for Continued Examination
Jan 10, 2026
Response after Non-Final Action
Jan 24, 2026
Non-Final Rejection — §103, §112
Mar 26, 2026
Examiner Interview Summary
Mar 26, 2026
Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592001
EXCREMENT DETERMINATION METHOD, EXCREMENT DETERMINATION DEVICE, AND NON-TRANSITORY COMPUTER READABLE RECORDING MEDIUM
2y 5m to grant Granted Mar 31, 2026
Patent 12586198
IMAGE ANALYSIS FOR IDENTIFYING OBJECTS AND CLASSIFYING BACKGROUND EXCLUSIONS
2y 5m to grant Granted Mar 24, 2026
Patent 12573223
Hand-Drawn Graphic Recognition Method, Apparatus and System, and Computer-Readable Storage Medium
2y 5m to grant Granted Mar 10, 2026
Patent 12567149
METHOD AND APPARATUS FOR TRAINING IMAGE PROCESSING MODEL, AND STORAGE MEDIUM STORING INSTRUCTIONS TO PERFORM METHOD FOR TRAINING IMAGE PROCESSING MODEL
2y 5m to grant Granted Mar 03, 2026
Patent 12555397
METHOD AND APPARATUS FOR DECHIPERING OBFUSCATED TEXT FOR CYBER SECURITY
2y 5m to grant Granted Feb 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

6-7
Expected OA Rounds
78%
Grant Probability
90%
With Interview (+11.4%)
2y 10m
Median Time to Grant
High
PTA Risk
Based on 679 resolved cases by this examiner. Grant probability derived from career allow rate.
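The headline projections above appear to follow directly from the examiner's career data shown earlier (530 granted of 679 resolved, with a +11.4% interview lift). A minimal sketch of that arithmetic, assuming the dashboard simply takes the raw allow ratio and adds the lift:

```python
# Reconstructing the dashboard's headline figures from the stated career data.
# Assumption: grant probability = granted / resolved, and the "With Interview"
# figure is the allow rate plus the reported interview lift.

granted = 530    # applications granted by this examiner
resolved = 679   # total resolved cases

career_allow_rate = granted / resolved               # ~0.78 -> "78% Grant Probability"
interview_lift = 0.114                               # "+11.4% Interview Lift"
with_interview = career_allow_rate + interview_lift  # ~0.89-0.90 -> "90% With Interview"

print(f"Grant probability: {career_allow_rate:.1%}")
print(f"With interview:    {with_interview:.1%}")
```

The unrounded sum lands at roughly 89.5%, consistent with the dashboard's rounded "90% With Interview" figure.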
