Prosecution Insights
Last updated: April 19, 2026
Application No. 18/296,840

FEATURE-AWARE DEEP-LEARNING-BASED SMART HARMONIC IMAGING FOR ULTRASOUND

Non-Final OA — §103, §112

Filed: Apr 06, 2023
Examiner: KOETH, MICHELLE M
Art Unit: 2671
Tech Center: 2600 — Communications
Assignee: Canon Medical Systems Corporation
OA Round: 3 (Non-Final)

Grant Probability: 77% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 4m
Grant Probability with Interview: 94%

Examiner Intelligence

Career Allow Rate: 77% — above average (331 granted / 429 resolved; +15.2% vs TC avg)
Interview Lift: strong, +16.7% on resolved cases with an interview
Typical Timeline: 2y 4m average prosecution; 34 applications currently pending
Career History: 463 total applications across all art units

Statute-Specific Performance

§101: 7.4% (-32.6% vs TC avg)
§103: 62.2% (+22.2% vs TC avg)
§102: 8.5% (-31.5% vs TC avg)
§112: 14.7% (-25.3% vs TC avg)
Based on career data from 429 resolved cases; percentages are compared against Tech Center average estimates.

Office Action

§103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on February 27, 2026 has been entered.

Response to Arguments

Applicant's arguments and amendments in the Amendment filed with a Request for Continued Examination on February 27, 2026 (herein "Amendment"), with respect to the rejection of the amended independent claims under 35 U.S.C. 101 for reciting ineligible subject matter, have been fully considered and are persuasive. The rejection of all independent claims, and of the claims depending therefrom, under 35 U.S.C. 101 has been withdrawn.

Applicant's remaining arguments and amendments filed in the Amendment have been fully considered, but they are not persuasive. The amendments to the independent claims include subject matter recited in dependent claims that had already been examined and rejected over the combination of Fouad in view of Mammone.
First, Applicant sets forth on page 8 that the limitations from previously pending claim 4 of "the processing circuitry is configured to … apply the second ultrasound data to another trained deep neural network model that outputs a denoised second order harmonic image, the another deep neural network having been trained with training data including input ultrasound data and corresponding de-noised ultrasound data," and cites Figure 16 and paragraphs 89 and 90 of the '767 application (Mammone) "for all of those features," which incorrectly restates the rejection of record. On the contrary, regarding the limitations of previously recited claim 4, Mammone is relied upon only for the "another trained neural network outputs a second-order harmonic image" limitation. Accordingly, Mammone is not relied upon "for all those features," as Applicant contends, and for the majority of claim 4, where Fouad was relied upon, Applicant fails to respond to the cited portions and application of Fouad for those limitations.

Nonetheless, Applicant sets forth on page 9 conclusory remarks regarding Fouad, rather than discussing the cited passages (pages 239–241 and 243) and the detailed explanation and rationale for those passages provided by the examiner, which is repeated and maintained in this action below. Instead, Applicant on page 9 broadly characterizes Fouad by taking language from two sentences of Fouad's abstract, then states that "In particular, as shown in Figure 1B, the Fouad reference discloses a network that takes as input, the full-harmonic content IQ data and outputs low-harmonic content IQ data." However, in restating sentences from the Abstract and broadly generalizing what is shown in one of the figures, Applicant has failed to address the cited portions, pages 239–241 and 243, and the detailed rationale provided specific to the limitations for which Fouad was relied upon.
Therefore, Applicant's remarks are entirely non-responsive regarding the application of Fouad to the claimed limitations. Turning next to secondary reference Mammone, in the remarks on pages 9–10, Applicant broadly characterizes Mammone by restating verbatim claim 1 of Mammone and then focusing on Mammone's Figure 16, which was cited in the Final Office Action issued November 6, 2025 (herein "Final Action"), on pages 15–16, regarding the claimed "the another trained neural network model outputs a second-order harmonic image."

Applicant argues on page 11 of the Amendment that Mammone does not teach or suggest "wherein the trained deep neural network was trained to extract a third-order harmonic component based on the input first ultrasound data." However, the Final Action on pages 12–13 sets forth that this limitation is rejected under the combination of Fouad in view of Mammone, in that Fouad at least teaches the trained deep neural network being trained to extract harmonics based on the input first ultrasound data, just not explicitly the third harmonic, for which Mammone was relied upon. Mammone teaches in Fig. 16 and cited paragraph 89 that "the down-converted frequencies incorporate frequencies at third and higher harmonics," and in paragraph 90 that "the bank of inverse FFT 1614 to generate time domain signals 1616a-1616n (collectively 1616) at each of the harmonic frequencies," where Fig. 16 explicitly shows harmonic frequencies F1, F2 and Fn. Therefore, considering that the down-converted frequencies are taught to include third and higher harmonics, the bank of inverse FFTs includes an IFFT that would be generating a time domain signal at the third harmonic. The rejection thus sets forth a combination of Fouad's DeepH neural network outputting harmonic images, and thus harmonics, modified by Mammone's teaching of a third-order harmonic component.
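For context, the filter-bank operation the cited passages describe — down-converting the signal, isolating each harmonic band, and inverse-transforming it back to a time-domain signal — can be sketched as follows. This is an illustrative sketch only: the function name, fractional bandwidth, and signal parameters are assumptions for demonstration and are not taken from Mammone or the record.

```python
import numpy as np

def extract_harmonic(rf_signal, fs, f0, order, rel_bw=0.3):
    """Isolate the `order`-th harmonic of an RF line via FFT band
    selection followed by an inverse FFT -- one branch of an
    FFT/IFFT filter bank (illustrative sketch; `rel_bw` is an
    assumed fractional bandwidth).

    rf_signal : 1-D real-valued RF samples
    fs        : sampling frequency (Hz)
    f0        : fundamental (transmit) frequency (Hz)
    order     : harmonic order to extract (2 = second, 3 = third, ...)
    """
    n = len(rf_signal)
    spectrum = np.fft.rfft(rf_signal)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)

    center = order * f0
    half_bw = 0.5 * rel_bw * center
    band = (freqs >= center - half_bw) & (freqs <= center + half_bw)

    # Zero everything outside the selected harmonic band, then
    # return to the time domain.
    spectrum[~band] = 0.0
    return np.fft.irfft(spectrum, n=n)

# Synthetic RF line: 2 MHz fundamental plus a weak third harmonic.
fs, f0 = 40e6, 2e6
t = np.arange(2048) / fs
rf = np.sin(2 * np.pi * f0 * t) + 0.1 * np.sin(2 * np.pi * 3 * f0 * t)

third = extract_harmonic(rf, fs, f0, order=3)
```

Extending this to a bank simply means calling the band-isolation step once per harmonic, which mirrors the parallel IFFT branches shown in Mammone's Fig. 16.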
In response to Applicant's argument that Mammone does not teach all of the claimed limitations of "wherein the trained deep neural network was trained to extract a third-order harmonic component based on the input first ultrasound data," the test for obviousness is not whether the features of a secondary reference may be bodily incorporated into the structure of the primary reference; nor is it that the claimed invention must be expressly suggested in any one or all of the references. Rather, the test is what the combined teachings of the references would have suggested to those of ordinary skill in the art. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981).

Lastly, Applicant argues on pages 11–12 that "far from being a trained deep neural networks that outputs a denoised second-order harmonic image, … the classifiers 1622a, 1622b, and 1622c take as input various tissue characteristic parameters and output indications of whether a region is cancerous or not." However, first, Mammone was relied upon only to cure the deficiencies of Fouad regarding the "another trained neural network model" and that what is output is a second-order harmonic image, as Fouad at least teaches that its DeepH neural network outputs a denoised low-harmonic image, just not explicitly a second-order one. Mammone expressly teaches in paragraph 90: "The classifiers 1622 may include neural network(s), as understood in the art, that may be trained," and therefore Mammone teaches the classifiers as being trained deep neural networks. Further, in paragraph 90, Mammone teaches:

[quoted image of Mammone ¶90 omitted]

Therefore, Mammone teaches that the end result of the classifiers 1622 (as regions of interest), as further processed through aggregation by a summer, is a composite A-scan or B-scan that is displayed (thus an image of some sort).
Those A-scans or B-scans are based on the earlier-processed second and higher harmonic signals, and as a composite A-scan include region of interest data from the second and higher harmonic signals. Accordingly, the composite A-scan is a second-order harmonic image, as it includes data from the second-order harmonic signal. It is noted that, contrary to Applicant's contention that the classifiers 1622a, 1622b and 1622c output indications of whether a region is cancerous or not, paragraph 90 of Mammone, as quoted above, teaches that other types of output, such as B-scans, may be output by the post-processor 1620 (where 1620 is shown as including the individual classifiers 1622a-b).

Accordingly, while all of Applicant's arguments and amendments have been fully considered, they are not persuasive. In view of the combination of Fouad and Mammone, as set forth in the record, with reliance upon both Fouad and Mammone for the limitations at issue, the rejection of the claims under 35 U.S.C. 103 is proper and is herein maintained.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1, 13 and 20, and therefore claims 7–12, 18–19 and 21 which depend therefrom, are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claims 1, 13 and 20 recite the limitation fuse (or fusing) "the third-order harmonic image" in the next-to-last line of the claim. There is insufficient antecedent basis for this limitation in the claim. Earlier recited in the claim is "a third-order harmonic component," but not an image. It is unclear and indefinite whether the "third-order harmonic image" is intended to be a new limitation altogether, or whether it is intended to find some basis in the earlier-recited "third-order harmonic component." For purposes of examination, this limitation is interpreted as an image that has a third-order harmonic component within it.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 7–9, 12–13, and 18–20 are rejected under 35 U.S.C. 103 as being unpatentable over Fouad NPL, as set forth above regarding the independent claims, and further in view of Mammone, US Patent Application Publication No. US 2013/0023767 A1 (herein "Mammone").

Regarding claims 1, 13 and 20, with claim 1 as exemplary, with major claim limitation differences noted in this format: [claim 1/claim 13/claim 20], and with deficiencies of Fouad NPL noted in curly brackets {}, Fouad NPL teaches [an apparatus, comprising: processing circuitry configured to/a method comprising/a non-transitory computer-readable medium storing a program that, when executed by processing circuitry, causes the processing circuitry to perform a method comprising:] (Fouad NPL Abstract, page 241, section B, and page 248, section D, a system for harmonic imaging with a deep neural network including a Tesla P100 GPU (processor) for training, and teachings of using faster GPUs for inferencing after training, where a GPU includes a non-transitory memory); receive first ultrasound data including at least one harmonic component (Fouad NPL pages 239–240, 244–245, fig. 1, section (IV)(A), testing dataset for the DeepH neural network harmonic imaging model including B-mode (ultrasound) images acquired using full-aperture input signals disclosed in section III(A) as being full harmonic content signals); apply the first ultrasound data to inputs of a trained deep neural network model that outputs enhanced ultrasound image data (Fouad NPL pages 239–241, section III(A), fig. 1, Abstract, neural network architecture including the DeepH (deep neural network) model receiving as input (apply to inputs) the full-harmonic content IQ data and outputting low-harmonic content IQ data, which is a reconstructed signal that focuses on desired signal components over the noise components, resulting in output images with enhanced contrast resolution and reduced reverberation clutter (enhanced)); the deep neural network model having been trained with training data including input ultrasound data and corresponding target ultrasound data having predetermined target features (Fouad NPL page 241, section B, network training using beamformed IQ scanlines as training samples with ground-truth low-harmonic content signals (corresponding target ultrasound data), where page 239, section III(A) teaches the neural network being trained to extract features in input signals, resulting in a compact latent feature space (predetermined target features)); wherein the trained deep neural network was trained to extract a {third}-order harmonic component based on the input first ultrasound data (Fouad NPL pages 240–241, fig. 1, section III(A), harmonic images are generated as an output of the DeepH neural network, trained to do so); and display the enhanced ultrasound image data (Fouad NPL page 240, fig. 1, Abstract, linear subtraction between the input full-harmonic content IQ data and the low-harmonic content IQ data, resulting in an output ultrasonic image with enhanced contrast resolution and reduced reverberation clutter, where fig. 5 shows the display of the enhanced ultrasound image data); wherein the first ultrasound data includes a fundamental frequency component and a {third-order} harmonic component (Fouad NPL page 243, section 3, the acquired raw RF signals were sampled to capture the fundamental frequency); and the processing circuitry is further configured to: receive second ultrasound data including a second-order harmonic component (Fouad NPL page 243, section 3, acquired raw RF signals sampled to capture the fundamental frequency along with the second harmonics); apply the second ultrasound data to {another} trained deep neural network model that outputs a de-noised {second}-order harmonic image (Fouad NPL pages 239–241, section III(A), fig. 1, Abstract, neural network architecture including the DeepH (deep neural network) model receiving as input (apply to inputs) the full-harmonic content IQ data and outputting low-harmonic content IQ data, which is a reconstructed signal that focuses on desired signal components over the noise components, resulting in output images with enhanced contrast resolution and reduced reverberation clutter (de-noised)); the {another} deep neural network having been trained with training data including input ultrasound data and corresponding de-noised ultrasound data (Fouad NPL page 241, section B, network training using beamformed IQ scanlines as training samples with ground-truth low-harmonic content signals (corresponding de-noised ultrasound data)); and fuse the {third}-order harmonic image and the de-noised {second}-order harmonic image to generate a fused image (Fouad NPL fig. 1, page 240, the low-harmonic content IQ data, which is denoised, is combined via linear subtraction with the full-harmonic content IQ data to generate a resultant harmonic (fused) image).
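For context, the subtraction-then-display chain relied upon from Fouad — linearly subtracting the predicted low-harmonic IQ data from the full-harmonic input and displaying the resultant harmonic image — can be sketched as below. The envelope-detection and log-compression steps, the function name, and the dynamic range are illustrative assumptions for demonstration, not details taken from Fouad or the record.

```python
import numpy as np

def harmonic_image_by_subtraction(full_iq, low_harmonic_iq,
                                  dynamic_range_db=60.0):
    """Sketch of a subtraction-based harmonic image: subtract the
    predicted low-harmonic IQ data from the full-harmonic IQ data,
    envelope-detect, and log-compress for B-mode-style display.
    The compression step and `dynamic_range_db` are assumptions."""
    residual = full_iq - low_harmonic_iq          # linear subtraction
    envelope = np.abs(residual)                   # IQ magnitude = envelope
    envelope = envelope / (envelope.max() + 1e-12)
    img_db = 20.0 * np.log10(envelope + 1e-12)    # log compression (dB)
    return np.clip(img_db, -dynamic_range_db, 0.0)

# Toy IQ data: the "image" is the residual after removing the
# low-harmonic estimate, mapped to a 60 dB display range.
full = np.array([1 + 1j, 2 + 0j, 0.5 - 0.5j, 4 + 3j])
low = np.array([0.5 + 0.5j, 1 + 0j, 0 + 0j, 0 + 0j])
img = harmonic_image_by_subtraction(full, low)
```

Because the subtraction happens on complex IQ data before envelope detection, phase-coherent clutter common to both inputs cancels, which is consistent with the reduced-reverberation behavior described for the subtraction step.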
Fouad NPL does not explicitly teach that its trained deep neural network extracts "a third-order harmonic component." Although Fouad NPL teaches full-aperture input signals, disclosed in section III(A) as being full harmonic content signals, Fouad NPL does not explicitly teach a third-order harmonic component in the input signal. Fouad NPL further does not explicitly teach another trained neural network model that outputs a second-order harmonic image. However, Mammone teaches a third-order harmonic component (Mammone ¶¶89–90, input ultrasound signals to neural network classifiers for ultrasonic image processing, including third harmonic responses). Mammone further teaches extracting a third-order harmonic component (Mammone ¶¶89–90, fig. 16, harmonic frequency image data, including a third harmonic response, are extracted via IFFT block 1614 and respectively input to pre-processing unit 1618 and then to classifiers 1622, which can be neural networks, and the output is aggregated or summed to be displayed as a composite (fused) A-scan, but could also be a B-scan or other type of output). Mammone still further teaches that the another trained neural network model outputs a second-order harmonic image (Mammone ¶¶89–90, fig. 16, harmonic frequency image data, including third and second harmonic responses, are processed downstream by classifiers 1622, which can be neural networks, and the output is aggregated or summed to be displayed as a composite (fused) A-scan, but could also be a B-scan or other type of output).
Therefore, taking the teachings of Fouad NPL and Mammone together as a whole, it would have been obvious to a person having ordinary skill in the art (herein "PHOSITA") before the effective filing date of the claimed invention to have modified the input ultrasound images and the output of the DeepH model of Fouad NPL to respectively include the third harmonic responses and an aggregate of second and third harmonic response data from additional neural networks disclosed in Mammone, at least because doing so would allow for better identification of regions of interest in an ultrasound image (Mammone ¶90).

Regarding claims 7 and 18, with claim 7 being exemplary and with deficiencies of Fouad NPL noted with square brackets, Fouad NPL teaches wherein the first ultrasound data includes the fundamental frequency component, a second-order harmonics component, and the [third-order] harmonics component (Fouad NPL page 243, section 3, acquired raw RF signals sampled to capture the fundamental frequency along with the second harmonics); and the processing circuitry is further configured to apply the first ultrasound data to the trained deep neural network to generate the enhanced ultrasound image data (Fouad NPL pages 239–241, section III(A), fig. 1, Abstract, neural network architecture including the DeepH (deep neural network) model receiving as input (apply to inputs) the full-harmonic content IQ data and outputting low-harmonic content IQ data, which is a reconstructed signal that focuses on desired signal components over the noise components, resulting in output images with enhanced contrast resolution and reduced reverberation clutter (enhanced)), wherein the trained deep neural network model is trained to extract [third-order] harmonics data and [second-order] harmonics data from the first ultrasound data, and generate the enhanced ultrasound image data (Fouad NPL fig. 1, pages 239–240, section III(A), Abstract, output of DeepH including low-harmonic content from the input full-harmonic content, and using linear subtraction to generate the resultant harmonic ultrasound image with enhanced contrast).

Although Fouad NPL teaches full-aperture input signals, disclosed in section III(A) as being full harmonic content signals, Fouad NPL does not explicitly teach a third-order harmonic component in the input signal. However, Mammone teaches a third-order harmonic component (Mammone ¶¶89–90, input ultrasound signals to neural network classifiers for ultrasonic image processing, including third harmonic responses). Further, Fouad NPL does not explicitly teach extracting third-order harmonics data and second-order harmonics data. Mammone teaches extracting third-order harmonics data and second-order harmonics data (Mammone ¶¶89–90, the ultrasound signal is down-converted to determine the second, third and higher harmonic responses of the ultrasonic tissue scan). Therefore, taking the teachings of Fouad NPL and Mammone together as a whole, it would have been obvious to a PHOSITA before the effective filing date of the claimed invention to have modified the output of the DeepH model of Fouad NPL to include downsampling to determine second and third harmonic response data as disclosed in Mammone, at least because doing so would allow for better identification of regions of interest in an ultrasound image (Mammone ¶90).

Regarding claim 8, with deficiencies of Fouad NPL noted with square brackets, Fouad NPL teaches wherein the first ultrasound data includes a high-order harmonics component [greater than third-order] (Fouad NPL pages 239–240, 244–245, fig. 1, section (IV)(A), testing dataset for the DeepH neural network harmonic imaging model including B-mode (ultrasound) images acquired using full-aperture input signals disclosed in section III(A) as being full harmonic content signals), and wherein the trained deep neural network model reduces noise and generates an estimated high-order image from the first ultrasound data (Fouad NPL pages 239–241, section III(A), fig. 1, Abstract, neural network architecture including the DeepH (deep neural network) model receiving as input (apply to inputs) the full-harmonic content IQ data and outputting low-harmonic content IQ data (any harmonic content being of high order compared to the fundamental frequency), which is a reconstructed signal that focuses on desired signal components over the noise components, resulting in output images with enhanced contrast resolution and reduced reverberation clutter (enhanced)). Fouad NPL does not explicitly teach a high-order harmonics component greater than third-order. Mammone teaches a high-order harmonics component greater than third-order (Mammone ¶¶89–90, fig. 16, harmonic frequency image data, including second, third and higher (than third) harmonic responses, are sent respectively to their own neural network classifier for processing). Therefore, taking the teachings of Fouad NPL and Mammone together as a whole, it would have been obvious to a PHOSITA before the effective filing date of the claimed invention to have modified the output of the DeepH model of Fouad NPL to include third and higher harmonic response data as disclosed in Mammone, at least because doing so would allow for better identification of regions of interest in an ultrasound image (Mammone ¶90).
Regarding claim 9, Fouad NPL teaches wherein the enhanced ultrasound image data is enhanced for a predetermined depth (Fouad NPL page 246, section B, the performance of the DeepH model was evaluated at different depths, where page 241, section D teaches that the depth is affected by stacking of different tissue layers, and fig. 7, page 247, illustrates the enhancement in the ultrasound image data for three different tissue layer types).

Regarding claim 12, Fouad NPL teaches wherein the enhanced ultrasound image data is a B-mode ultrasound image (Fouad NPL page 247, fig. 7, three examples of reconstructed B-mode images from the DeepH harmonic image processing).

Regarding claim 19, with deficiencies of Fouad NPL noted with square brackets, Fouad NPL teaches wherein the first ultrasound data includes a high-order harmonics component [greater than third-order] (Fouad NPL pages 239–240, 244–245, fig. 1, section (IV)(A), testing dataset for the DeepH neural network harmonic imaging model including B-mode (ultrasound) images acquired using full-aperture input signals disclosed in section III(A) as being full harmonic content signals), and the enhanced ultrasound image data is a de-noised high-order harmonic image (Fouad NPL pages 239–241, section III(A), fig. 1, Abstract, neural network architecture including the DeepH (deep neural network) model outputting low-harmonic content IQ data, which is a reconstructed signal that focuses on desired signal components over the noise components, resulting in output images with enhanced contrast resolution and reduced reverberation clutter (de-noised), where the resultant harmonic image includes harmonics other than the fundamental and is therefore high-order). Fouad NPL does not explicitly teach a high-order harmonics component greater than third-order. Mammone teaches a high-order harmonics component greater than third-order (Mammone ¶¶89–90, fig. 16, harmonic frequency image data, including second, third and higher (than third) harmonic responses, are sent respectively to their own neural network classifier for processing). Therefore, taking the teachings of Fouad NPL and Mammone together as a whole, it would have been obvious to a PHOSITA before the effective filing date of the claimed invention to have modified the output of the DeepH model of Fouad NPL to include third and higher harmonic response data as disclosed in Mammone, at least because doing so would allow for better identification of regions of interest in an ultrasound image (Mammone ¶90).

Claims 10–11 are rejected under 35 U.S.C. 103 as being unpatentable over Fouad NPL and Mammone, as set forth above regarding claim 1 from which claims 10–11 depend, further in view of Ammirati et al., US Patent Application Publication No. US 2023/0260663 A1 (herein "Ammirati").

Regarding claim 10, while Fouad NPL teaches the predetermined target features of the enhanced harmonic image (Fouad NPL page 241, section B, network training using beamformed IQ scanlines as training samples with ground-truth low-harmonic content signals (corresponding target ultrasound data), where page 239, section III(A) teaches the neural network being trained to extract features in input signals, resulting in a compact latent feature space (predetermined target features)), Fouad NPL does not explicitly teach that the features relate to a particular range of body mass index. Ammirati teaches features that relate to a particular range of body mass index (Ammirati ¶¶87, 94, 105–106, features extracted from ultrasound images for machine-learning-based processing including the body mass index, wherein the range extracted defines a particular range).
Therefore, taking the teachings of Fouad NPL and Ammirati together as a whole, it would have been obvious to a PHOSITA before the effective filing date of the claimed invention to have modified the target features of Fouad NPL to include body mass index values as disclosed in Ammirati, at least because doing so would allow for enhancing medical image processing by finding new patterns and phenotypes and providing alternative diagnoses (Ammirati ¶17).

Regarding claim 11, while Fouad NPL teaches the predetermined target features of the enhanced harmonic image (Fouad NPL page 241, section B, network training using beamformed IQ scanlines as training samples with ground-truth low-harmonic content signals (corresponding target ultrasound data), where page 239, section III(A) teaches the neural network being trained to extract features in input signals, resulting in a compact latent feature space (predetermined target features)), Fouad NPL does not explicitly teach that the features relate to particular demographic information. Ammirati teaches features that relate to particular demographic information (Ammirati ¶¶87, 94, 105–106, features extracted from ultrasound images for machine-learning-based processing including weight, height, age and sex, which are all particular demographic information). Therefore, taking the teachings of Fouad NPL and Ammirati together as a whole, it would have been obvious to a PHOSITA before the effective filing date of the claimed invention to have modified the target features of Fouad NPL to include age, sex and weight values as disclosed in Ammirati, at least because doing so would allow for enhancing medical image processing by finding new patterns and phenotypes and providing alternative diagnoses (Ammirati ¶17).

Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over Fouad NPL and Mammone, as set forth above regarding claim 1 from which claim 21 depends, further in view of Honjo et al., US Patent Application Publication No. US 2020/0342592 A1 (herein "Honjo").

Regarding claim 21, Fouad NPL as modified by Mammone above does not explicitly teach, but Honjo teaches, wherein the trained deep neural network was trained to extract only the third-order harmonic component (Honjo fig. 8B, ¶¶82, 104, processing circuitry 110 including trained model B trained using the third-order harmonic image as training data, where the processing circuitry 110 extracts a signal having a high-harmonic component corresponding to a frequency component at the frequency 3f (extract only the third-order harmonic component)). Therefore, taking the teachings of Fouad NPL as modified by Mammone, together with Honjo, as a whole, it would have been obvious to a PHOSITA before the effective filing date of the claimed invention to have modified the training of the neural network of Fouad NPL to include the extraction of the third-harmonic component as disclosed in Honjo, at least because doing so would allow for obtaining an image having high image quality and efficiently obtaining diagnostic information (Honjo ¶¶83 and 169).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHELLE M KOETH, whose telephone number is (571) 272-5908. The examiner can normally be reached Monday-Thursday, 09:00-17:00, and Friday, 09:00-13:00, EDT/EST.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Vincent Rudolph, can be reached at 571-272-8243. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center.
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MICHELLE M KOETH/
Primary Examiner, Art Unit 2671

Prosecution Timeline

Apr 06, 2023 — Application Filed
Jun 11, 2025 — Non-Final Rejection (§103, §112)
Oct 13, 2025 — Response Filed
Nov 04, 2025 — Final Rejection (§103, §112)
Feb 27, 2026 — Request for Continued Examination
Mar 02, 2026 — Response after Non-Final Action
Mar 19, 2026 — Non-Final Rejection (§103, §112) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586221
METHOD AND APPARATUS FOR ESTIMATING DEPTH INFORMATION OF IMAGES
2y 5m to grant • Granted Mar 24, 2026
Patent 12579651
IMPEDED DIFFUSION FRACTION FOR QUANTITATIVE IMAGING DIAGNOSTIC ASSAY
2y 5m to grant • Granted Mar 17, 2026
Patent 12567241
Method For Generating Training Data Used To Learn Machine Learning Model, System, And Non-Transitory Computer-Readable Storage Medium Storing Computer Program
2y 5m to grant • Granted Mar 03, 2026
Patent 12567177
METHOD, ELECTRONIC DEVICE, AND COMPUTER PROGRAM PRODUCT FOR IMAGE PROCESSING
2y 5m to grant • Granted Mar 03, 2026
Patent 12566493
METHODS AND SYSTEMS FOR EYE-GAZE LOCATION DETECTION AND ACCURATE COLLECTION OF EYE-GAZE DATA
2y 5m to grant • Granted Mar 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 77%
With Interview: 94% (+16.7%)
Median Time to Grant: 2y 4m
PTA Risk: High
Based on 429 resolved cases by this examiner. Grant probability derived from career allow rate.
