Prosecution Insights
Last updated: April 19, 2026
Application No. 18/921,748

METHOD AND SYSTEM FOR TRAINING AND DEPLOYING AN ARTIFICIAL INTELLIGENCE MODEL ON PRE-SCAN CONVERTED ULTRASOUND IMAGE DATA

Non-Final OA: §102, §103, §DP

Filed: Oct 21, 2024
Examiner: CELESTINE, NYROBI I
Art Unit: 3798
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: Clarius Mobile Health Corp.
OA Round: 1 (Non-Final)

Grant Probability: 82% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 11m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 82% (above average; 214 granted / 262 resolved; +11.7% vs TC avg)
Interview Lift: +22.7% higher allowance for resolved cases with an interview
Typical Timeline: 2y 11m average prosecution; 43 applications currently pending
Career History: 305 total applications across all art units
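As a sanity check, the headline figures reconcile arithmetically. The sketch below reproduces them in Python; the granted/resolved counts come from this page, and the interviewed share is back-solved from the published rates rather than taken from any underlying dataset.

```python
# Sketch: reconciling the published examiner stats on this page.
granted, resolved = 214, 262                 # from this page
career = granted / resolved
print(f"career allow rate: {career:.1%}")    # ~81.7%, displayed as 82%

rate_with = 0.99                             # "99% with interview" (page)
lift = 0.227                                 # "+22.7% interview lift" (page)
rate_without = rate_with - lift              # implied: ~76.3% without interview

# Implied share of resolved cases that had an interview, from the blend:
#   career = share * rate_with + (1 - share) * rate_without
share = (career - rate_without) / (rate_with - rate_without)
print(f"implied interviewed share: {share:.0%} of {resolved} cases")  # ~24%
```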

Statute-Specific Performance

§101: 2.6% (-37.4% vs TC avg)
§103: 41.5% (+1.5% vs TC avg)
§102: 21.2% (-18.8% vs TC avg)
§112: 30.4% (-9.6% vs TC avg)
Tech Center averages are estimates. Based on career data from 262 resolved cases.
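The "vs TC avg" deltas are plain differences between the examiner's per-statute rate and the estimated Tech Center average. Back-solving (sketch below, values copied from the panel above) shows every row implies the same 40.0% baseline estimate.

```python
# Sketch: checking the "vs TC avg" deltas against the per-statute rates above.
# implied TC average = examiner rate - published delta
panel = {              # statute: (examiner rate %, delta vs TC avg %)
    "101": (2.6, -37.4),
    "103": (41.5, +1.5),
    "102": (21.2, -18.8),
    "112": (30.4, -9.6),
}
for statute, (rate, delta) in panel.items():
    print(f"§{statute}: implied TC avg = {rate - delta:.1f}%")  # 40.0% in every row
```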

Office Action

Grounds: §102, §103, §DP
Detailed Action

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 10/24/2024 has been considered by the examiner.

Claim Objections

Claim 1 is objected to because of the following informalities: In claim 1, line 6, “AI” should be “artificial intelligence (AI)” for clarity. Appropriate correction is required.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of U.S. Patent No. 12,124,538 B2. Although the claims at issue are not identical, they are not patentably distinct from each other because the pending claims are an obvious variant of the claim set of the patent, including only minor differences in structure.

Claims 1 and 18 of the instant invention and claims 1 (see col. 20, lines 34-36) and 15 (see col. 21, lines 56-59) of US patent ‘538 similarly recite acquiring a raw ultrasound data frame, which is organized using raw data coordinates, using a non-invasive ultrasound scanner. Claims 1 and 18 of the instant invention and claims 1 (see col. 20, lines 40-44) and 15 (see col. 22, lines 7-9) of US patent ‘538 similarly recite deploying an artificial intelligence model to execute on a computing device, communicably connected to the non-invasive ultrasound scanner, wherein the AI model is trained to predict a presence of an anatomical feature imaged in the raw ultrasound data frame. Claims 1 and 18 of the instant invention and claims 1 (see col. 20, lines 45-48) and 15 (see col. 22, lines 3-5) of US patent ‘538 similarly recite processing, by the computing device, the raw ultrasound data frame against the AI model to predict a presence of an anatomical feature imaged in the raw ultrasound data frame. Claims 1 (see col. 20, lines 48-59) and 15 (see col. 22, lines 6-20) of US patent ‘538 further specify the acquiring step and the processing step, and include steps of determining raw data coordinates of the predicted anatomical feature imaged in the raw ultrasound data frame; scan converting the raw ultrasound data frame to an ultrasound image that is suitable for rendering on a display device; transforming the raw data coordinates of the predicted anatomical feature to x-y coordinates in the ultrasound image; and displaying the ultrasound image segmented with a representation of the x-y coordinates of the predicted anatomical feature.

Claims 2 and 19 of the instant invention and claims 1 (see col. 20, lines 48-50) and 15 (see col. 22, lines 6-8) of US patent ‘538 similarly recite processing, by the computing device, the raw ultrasound data frame against the AI model to determine raw data coordinates of the predicted anatomical feature imaged in the raw ultrasound data frame.

Claims 3 and 20 of the instant invention and claims 1 (see col. 20, lines 51-56) and 15 (see col. 22, lines 12-17) of US patent ‘538 similarly recite scan converting the raw ultrasound data frame to an ultrasound image that is suitable for rendering on a display device, and transforming the raw data coordinates of the predicted anatomical feature to x-y coordinates in the ultrasound image.

Claim 4 of the instant invention and claims 1 (see col. 20, lines 57-59) and 15 (see col. 22, lines 18-20) of US patent ‘538 similarly recite displaying the ultrasound image segmented with a representation of the x-y coordinates of the predicted anatomical feature.

Claim 5 of the instant invention and claims 2 (see col. 20, lines 60-62) and 15 (see col. 21, lines 56-57 and col. 22, lines 3-4) of US patent ‘538 similarly recite wherein the processing of the raw ultrasound data frame against the AI model is performed by the non-invasive ultrasound scanner.

Claim 6 of the instant invention and claims 3 (see col. 20, lines 63-65) and 15 (see col. 22, lines 10-20) of US patent ‘538 similarly recite wherein the scan converting, the transforming and the displaying are performed by a display device.

Claim 7 of the instant invention and claims 4 (see col. 20, line 66 to col. 21, line 13) and 16 (see col. 22, lines 22-36) of US patent ‘538 similarly recite before the acquiring step: receiving, by a processor, a training ultrasound image and an identification of an example of the anatomical feature in the training ultrasound image, wherein the training ultrasound image is converted from a raw ultrasound training data frame; transforming, by the processor, x-y coordinates of the example of the anatomical feature identified in the training ultrasound image to raw training data coordinates in a coordinate system of the raw ultrasound training data frame; and training, by the processor, the AI model or a duplicate thereof with the raw training data coordinates and the raw ultrasound training data frame.

Claim 8 of the instant invention and claim 5 (see col. 21, lines 15-20) of US patent ‘538 similarly recite that the training ultrasound image is one of a set of training ultrasound images, and that the receiving, transforming and training steps are performed for each of the training ultrasound images in the set of training ultrasound images.

Claim 9 of the instant invention and claims 6 (see col. 21, lines 20-25) and 17 (see col. 22, lines 38-42) of US patent ‘538 similarly recite prior to the receiving step: displaying the training ultrasound image; and receiving input identifying the anatomical feature on the training ultrasound image.

Claim 10 of the instant invention and claim 7 (see col. 21, lines 25-27) of US patent ‘538 similarly recite wherein the raw data coordinates and the raw training data coordinates are polar coordinates.

Claim 11 of the instant invention and claim 8 (see col. 21, lines 29-32) of US patent ‘538 similarly recite wherein the x-y coordinates of the example of the anatomical feature form a mask, and the x-y coordinates of the predicted anatomical feature form another mask.

Claim 12 of the instant invention and claim 9 (see col. 21, lines 33-35) of US patent ‘538 similarly recite wherein the input identifying the example of the anatomical feature is a tracing around all or part of the anatomical feature.

Claim 13 of the instant invention and claims 10 (see col. 21, lines 35-37) and 18 (see col. 22, lines 43-45) of US patent ‘538 similarly recite wherein the raw ultrasound data frame and the raw ultrasound training data frame have identical pixel array dimensions.

Claim 14 of the instant invention and claims 11 (see col. 21, lines 39-45) and 19 (see col. 22, lines 48-54) of US patent ‘538 similarly recite reducing a resolution of the raw ultrasound training data frame and a resolution of the raw training data coordinates of the example of the anatomical feature identified in the training ultrasound image before training the AI model; and reducing a resolution of the raw ultrasound data frame before processing with the AI model.

Claim 15 of the instant invention and claims 12 (see col. 21, lines 46-48) and 20 (see col. 22, lines 55-58) of US patent ‘538 similarly recite wherein the AI model comprises a segmentation function, a classification function or both a segmentation function and a classification function.

Claim 16 of the instant invention and claim 13 (see col. 21, lines 49-50) of US patent ‘538 similarly recite wherein the training step comprises supervised learning.

Claim 17 of the instant invention and claim 14 (see col. 21, lines 51-53) of US patent ‘538 similarly recite wherein the anatomical feature is an organ, a portion of an organ, a boundary of an organ, a bodily fluid, a tumor, a cyst, a fracture, or a break.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-6 and 17-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Silberman et al. (US 20200054307 A1, published February 20, 2020), hereinafter referred to as Silberman.

Regarding claim 1, and similarly for claim 18, Silberman teaches a method for displaying a predicted anatomical feature in an ultrasound image comprising:

acquiring a raw ultrasound data frame, which is organized using raw data coordinates, using a non-invasive ultrasound scanner (Fig. 1; see para. 0039 – “In act 102, the processing device receives first ultrasound data collected from a subject by the ultrasound device [ultrasound scanner]. The processing device may receive the first ultrasound data in real-time, and the ultrasound data may therefore be collected from the current anatomical location of the ultrasound device on the subject being imaged… The first ultrasound data may include, for example, raw acoustical data, scan lines generated from raw acoustical data…”; see para. 0096 – “Thereby, an operator of the ultrasound imaging device 1614 may be able to operate the ultrasound imaging device 1614 with one hand [non-invasive ultrasound scanner] and hold the processing device 1602 with another hand.”);

deploying an artificial intelligence model to execute on a computing device, communicably connected to the non-invasive ultrasound scanner, wherein the AI model is trained to predict a presence of an anatomical feature imaged in the raw ultrasound data frame (see para. 0040 – “In some embodiments, to determine the first anatomical location, the processing device may input the first ultrasound [raw] data to a statistical model. The statistical model may be a convolutional neural network or other deep learning model, a random forest, a support vector machine, a linear classifier, and/or any other statistical model [AI model].”); and

processing, by the computing device, the raw ultrasound data frame against the AI model to predict a presence of an anatomical feature imaged in the raw ultrasound data frame (see para. 0040 – “In some embodiments, to determine the first anatomical location [presence of anatomical feature], the processing device may input the first ultrasound [raw] data to a statistical model [AI model].”).
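The rejections above turn on two coordinate systems: raw data coordinates (scan-line index and sample index in the pre-scan-converted frame) versus the x-y pixel coordinates of the scan-converted image. The sketch below is only a generic illustration of that forward transform for a curvilinear probe; it is not code from the application or the cited art, and the geometry constants (field of view, depth, array sizes) are assumptions.

```python
import numpy as np

# Sketch: mapping raw data coordinates (scan-line index, sample index) to
# x-y image coordinates, i.e., the "scan converting"/"transforming" steps
# the claims recite. All geometry values below are illustrative assumptions.
NUM_LINES, NUM_SAMPLES = 128, 512      # raw frame: one column per scan line
FOV_RAD = np.deg2rad(60.0)             # assumed curvilinear field of view
DEPTH_M = 0.10                         # assumed 10 cm imaging depth

def raw_to_xy(line_idx: np.ndarray, sample_idx: np.ndarray):
    """Map raw (polar-like) feature coordinates to x-y image coordinates."""
    theta = (line_idx / (NUM_LINES - 1) - 0.5) * FOV_RAD   # beam angle
    r = sample_idx / (NUM_SAMPLES - 1) * DEPTH_M           # depth along beam
    x = r * np.sin(theta)                                  # lateral position
    y = r * np.cos(theta)                                  # axial position
    return x, y

# e.g., a predicted feature centroid at scan line 80, depth sample 300:
x, y = raw_to_xy(np.array([80.0]), np.array([300.0]))
print(f"x = {x[0]*100:.1f} cm, y = {y[0]*100:.1f} cm")
```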
Furthermore, regarding claims 2 and 19, Silberman further teaches processing, by the computing device, the raw ultrasound data frame against the AI model to determine raw data coordinates of the predicted anatomical feature imaged in the raw ultrasound data frame (see para. 0040 – “In some embodiments, to determine the first anatomical location [coordinates], the processing device may input the first ultrasound [raw] data to a statistical model [AI model].”).

Furthermore, regarding claims 3 and 20, Silberman further teaches scan converting the raw ultrasound data frame to an ultrasound image that is suitable for rendering on a display device and transforming the raw data coordinates of the predicted anatomical feature to x-y coordinates in the ultrasound image (see para. 0039 – “In some embodiments, the ultrasound device may generate scan lines from the raw acoustical data, transmit the scan lines to the processing device, and the processing device may generate ultrasound images from the scan lines.”, where transforming raw ultrasound data into an ultrasound image for display (scan conversion) is known in the art).

Furthermore, regarding claim 4, Silberman further teaches displaying the ultrasound image segmented with a representation of the x-y coordinates of the predicted anatomical feature (see para. 0039 – “In some embodiments, the ultrasound device may generate scan lines from the raw acoustical data, transmit the scan lines to the processing device, and the processing device may generate ultrasound images from the scan lines.”, where transforming raw ultrasound data into an ultrasound image for display (scan conversion) is known in the art).

Furthermore, regarding claim 5, Silberman further teaches wherein the processing of the raw ultrasound data frame against the AI model is performed by the non-invasive ultrasound scanner (see para. 0040 – “In some embodiments, to determine the first anatomical location [feature], the processing device may input the first ultrasound [raw] data to a statistical model [AI model].”).

Furthermore, regarding claim 6, Silberman further teaches wherein the scan converting, the transforming and the displaying are performed by a display device (see para. 0039 – “In some embodiments, the ultrasound device may generate scan lines from the raw acoustical data, transmit the scan lines to the processing device, and the processing device may generate ultrasound images from the scan lines.”, where transforming raw ultrasound data into an ultrasound image for display (aka scan conversion) is known in the art).

Furthermore, regarding claim 17, Silberman further teaches wherein the anatomical feature is an organ, a portion of an organ, a boundary of an organ, a bodily fluid, a tumor, a cyst, a fracture, or a break (see para. 0040 – “To train the statistical model, ultrasound data labeled with the anatomical location on the subject where the ultrasound data was collected may be inputted to the statistical model and used to modulate internal parameters of the statistical model. The first anatomical location may be, for example, an anatomical region (e.g., the anterior superior region of the right lung) or an anatomical structure (e.g., the heart) [organ].”).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 7-9, 12-13, and 15-16 are rejected under 35 U.S.C. 103 as being unpatentable over Silberman in view of Kim et al. (US 20200043602 A1, published February 6, 2020), hereinafter referred to as Kim.

Regarding claim 7, Silberman teaches all of the elements disclosed in claim 1 above. Silberman teaches training the AI model with raw ultrasound data (see para. 0100 – “A neural network may be trained using, for example, labeled training data. The labeled training data may include a set of example inputs and an answer associated with each input. For example, the training data may include a plurality of ultrasound images or sets of raw acoustical data that are each labeled with an anatomical feature that is contained in the respective ultrasound image or set of raw acoustical data.”), but does not explicitly teach training the AI model with raw ultrasound data, where the raw ultrasound data was transformed from a training ultrasound image.

Whereas, Kim, in the same field of endeavor, teaches before the acquiring step: receiving, by a processor, a training ultrasound image and an identification of an example of the anatomical feature in the training ultrasound image, wherein the training ultrasound image is converted from a raw ultrasound training data frame (Fig. 2, scan-formatted ultrasound images 202 as training ultrasound images; see para. 0040 – “The scan-formatted ultrasound images 202 may correspond to the scan-formatted images generated by the processing component 134 and stored as a training data set 140 in the memory 138.”); transforming, by the processor, x-y coordinates of the example of the anatomical feature identified in the training ultrasound image to raw training data coordinates in a coordinate system of the raw ultrasound training data frame (Fig. 2, reverse scan conversion 210 as transforming ultrasound image to raw ultrasound training data; see para. 0040 – “At the reverse scan conversion stage 210, scan-formatted [x-y] ultrasound images 202 are converted into pre-scan-formatted [raw] ultrasound images 204.”); and training, by the processor, the AI model or a duplicate thereof with the raw training data coordinates and the raw ultrasound training data frame (Fig. 2, deep learning training 220 with raw ultrasound training data from reverse scan conversion 210 of scan-formatted ultrasound images 202 (training ultrasound image); see para. 0039 – “The reverse scan conversion stage 210 formats ultrasound images acquired using different ultrasound probes (e.g., linear probes, curvilinear probes, and phased-array probes) into a common image format and/or dimensions suitable for training deep learning networks. The deep learning network stage 220 trains deep learning networks to classify ultrasound images into clinical feature categories suitable for clinical assessments, for example, using training data output by the reverse scan conversion stage 210.”).

It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified training the AI model with raw ultrasound data, as disclosed in Silberman, by having the raw ultrasound data transformed from a training ultrasound image, as disclosed in Kim. One of ordinary skill in the art would have been motivated to make this modification in order to remove image format differences or variations and decrease their impact on, or bias in, the training of deep learning networks for clinical feature classifications, as taught in Kim (see para. 0040).

Furthermore, regarding claim 8, Kim further teaches wherein: the training ultrasound image is one of a set of training ultrasound images (Fig. 2, one of scan-formatted ultrasound images 202); and the receiving, transforming and training steps are performed for each of the training ultrasound images in the set of training ultrasound images (Fig. 2, deep learning training 220 with raw ultrasound training data from reverse scan conversion 210 of scan-formatted ultrasound images 202 (training ultrasound image); see para. 0039 – “The reverse scan conversion stage 210 formats ultrasound images acquired using different ultrasound probes (e.g., linear probes, curvilinear probes, and phased-array probes) into a common image format and/or dimensions suitable for training deep learning networks. The deep learning network stage 220 trains deep learning networks to classify ultrasound images into clinical feature categories suitable for clinical assessments, for example, using training data output by the reverse scan conversion stage 210.”).

Furthermore, regarding claim 9, Silberman further teaches prior to the receiving step: displaying the training ultrasound image; and receiving input identifying the anatomical feature on the training ultrasound image (see para. 0100 – “A neural network may be trained using, for example, labeled training data. The labeled training data may include a set of example inputs and an answer associated with each input. For example, the training data may include a plurality of ultrasound images or sets of raw acoustical data that are each labeled with an anatomical feature that is contained in the respective ultrasound image or set of raw acoustical data.”).

Furthermore, regarding claim 12, Silberman further teaches wherein the input identifying the example of the anatomical feature is a tracing around all or part of the anatomical feature (see para. 0043 – “To train the statistical model, optical images of subjects with anatomical locations labeled on the images may be inputted to the statistical model and used to modulate internal parameters of the statistical model. For example, an image of a subject may be manually segmented [tracing] to delineate various anatomical locations [anatomical feature] (e.g., the superior anterior region of the right lung, the superior posterior region of the right lung, etc.).”).
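Kim's reverse scan conversion runs the earlier transform in the opposite direction: annotations drawn on a scan-converted (x-y) training image are carried back into the raw frame's coordinate system before training. A minimal sketch under the same assumed geometry as the forward example above; this is an illustration of the technique, not Kim's actual implementation.

```python
import numpy as np

# Sketch: the training-side inverse transform (claim 7 / Kim's "reverse scan
# conversion"): x-y annotation coordinates are mapped back to raw training
# data coordinates (scan-line index, sample index). Assumed geometry only.
NUM_LINES, NUM_SAMPLES = 128, 512
FOV_RAD = np.deg2rad(60.0)
DEPTH_M = 0.10

def xy_to_raw(x: np.ndarray, y: np.ndarray):
    """Map x-y coordinates of an annotated feature into raw coordinates."""
    r = np.hypot(x, y)                                    # depth along beam
    theta = np.arctan2(x, y)                              # beam angle
    line_idx = (theta / FOV_RAD + 0.5) * (NUM_LINES - 1)
    sample_idx = r / DEPTH_M * (NUM_SAMPLES - 1)
    return line_idx, sample_idx

# Round-tripping the point from the forward example recovers (80, 300):
line, sample = xy_to_raw(np.array([0.00796]), np.array([0.0582]))
print(line.round(), sample.round())
```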
Furthermore, regarding claim 13, Kim further teaches wherein the raw ultrasound data frame and the raw ultrasound training data frame have identical pixel array dimensions (see para. 0039 – “The reverse scan conversion stage 210 formats ultrasound images acquired using different ultrasound probes (e.g., linear probes, curvilinear probes, and phased-array probes) into a common image format and/or dimensions suitable for training deep learning networks.”).

Furthermore, regarding claim 15, Silberman further teaches wherein the AI model comprises a segmentation function, a classification function or both a segmentation function and a classification function (see para. 0043 – “To train the statistical model, optical images of subjects with anatomical locations labeled on the images may be inputted to the statistical model and used to modulate internal parameters of the statistical model. For example, an image of a subject may be manually segmented to delineate various anatomical locations (e.g., the superior anterior region of the right lung, the superior posterior region of the right lung, etc.).”; see para. 0098 – “The trained model may be used as, for example, a classifier that is configured to receive a data point as an input and provide an indication of a class to which the data point likely belongs as an output.”).

Furthermore, regarding claim 16, Silberman further teaches wherein the training step comprises supervised learning (see para. 0100 – “A neural network may be trained using, for example, labeled training data [supervised learning]. The labeled training data may include a set of example inputs and an answer associated with each input. For example, the training data may include a plurality of ultrasound images or sets of raw acoustical data that are each labeled with an anatomical feature that is contained in the respective ultrasound image or set of raw acoustical data.”). The motivation for claims 8 and 13 was shown previously in claim 7.

Claims 10-11 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Silberman in view of Kim, as applied to claim 7 above, and further in view of Li et al. (US 20200226422 A1, published July 16, 2020), hereinafter referred to as Li.

Regarding claim 10, Silberman in view of Kim teaches all of the elements disclosed in claim 7 above. Silberman in view of Kim teaches raw training data, but does not explicitly teach where the raw training data is in polar coordinates. Whereas, Li, in the same field of endeavor, teaches wherein the raw data coordinates and the raw training data coordinates are polar coordinates (see para. 0118 – “In one embodiment, the training data including the raw images and the ground truth annotations are all in polar coordinates.”). It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified raw training data, as disclosed in Silberman in view of Kim, by having the raw training data in polar coordinates, as disclosed in Li. One of ordinary skill in the art would have been motivated to make this modification in order to avoid annotating and operating on Cartesian images to generate ground truth images/masks and training of the neural network, as taught in Li (see para. 0016).

Furthermore, regarding claim 11, Li further teaches wherein the x-y coordinates of the example of the anatomical feature form a mask, and the x-y coordinates of the predicted anatomical feature form another mask (see para. 0103 – “FIG. 3C shows a series of labeled masks and an ROI mask in a Cartesian view of an artery.”; see para. 0105 – “In one embodiment, annotated masks regions corresponding to set or group of pixels define a ground truth mask [example mask] that are used to train one or more neural networks disclosed herein. Once the neural network is trained, predictive or detected masks [predicted mask] are generated that include sets of pixels that correspond to regions of user data as well as an identifier of the feature or class of the region, such as whether it is lumen, calcium, EEL, or another class or feature [anatomical features] disclosed herein.”). One of ordinary skill in the art would have been motivated to make this modification in order to display an output image mask with regions corresponding to a particular class indicated by indicia such as color and one or more legends summarizing which indicia maps to which class, as taught in Li (see para. 0105).

Furthermore, regarding claim 14, Li further teaches reducing a resolution of the raw ultrasound training data frame and a resolution of the raw training data coordinates of the example of the anatomical feature identified in the training ultrasound image before training the AI model; and reducing a resolution of the raw ultrasound data frame before processing with the AI model (see para. 0202 – “Similarly, in FIG. 11C, a half resolution resizing is performed such as by skipping A lines (scan lines) to generate the middle image. In turn, the depth pixels in the image are skipped or excluded to further resize the image to obtain the pre-processed image on the right size of FIG. 11C.”). One of ordinary skill in the art would have been motivated to make this modification in order to improve MLS (machine learning system) operation, which can be performed in real time or substantially real time, saving time and improving patient outcomes, as taught in Li (see para. 0202).
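The half-resolution resizing Li describes (skipping A-lines, then depth pixels) amounts to strided subsampling of the raw frame. A short illustration, assuming the frame is stored as a depth-by-scan-line array; the array shape is an assumption, not taken from Li.

```python
import numpy as np

# Sketch of the half-resolution resizing Li describes for claim 14:
# skip every other depth sample (row) and every other A-line (column).
raw_frame = np.random.rand(512, 128)   # (depth samples, scan lines), assumed shape

half_res = raw_frame[::2, ::2]         # strided subsampling in both axes
print(raw_frame.shape, "->", half_res.shape)   # (512, 128) -> (256, 64)

# Any training coordinates or masks would be downsampled the same way so
# that labels stay aligned with the reduced-resolution frame.
```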
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

Toporek et al. (US 20200315587 A1, published October 8, 2020) discloses collecting an initial training dataset including ultrasound training cases, where the collected training data set is automatically or manually labelled by expert users with vectors based upon the conditions or features identified in the ultrasound images.

Lyman et al. (US 20200352518 A1, published November 12, 2020) discloses a model trained on imaging data for a training set of medical scans, where labeling data corresponding to each of the training set of medical scans includes an indication of whether or not an artifact is present, an indication that one or more of a plurality of types of artifacts are present, an indication of a portion and/or pattern in the image data that corresponds to one or more artifacts, and/or other indications of artifacts present in the training data.

Xiao et al. (US 20210042564 A1, published February 11, 2021) discloses that an image with high resolution may be generated according to an image with low resolution, and is used to restore lost information in the image.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Nyrobi Celestine, whose telephone number is 571-272-0129. The examiner can normally be reached Monday - Thursday, 7:00 AM - 5:00 PM EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Pascal Bui-Pho, can be reached at 571-272-2714. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Nyrobi Celestine/
Examiner, Art Unit 3798

Prosecution Timeline

Oct 21, 2024 — Application Filed
Dec 01, 2025 — Non-Final Rejection: §102, §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12478431
PROVIDING SURGICAL ASSISTANCE VIA AUTOMATIC TRACKING AND VISUAL FEEDBACK DURING SURGERY
2y 5m to grant; granted Nov 25, 2025
Patent 12478350
SYSTEM INCLUDING A VIBRATOR AND AN ULTRASOUND EMITTER FOR CHARACTERIZING TISSUE
2y 5m to grant; granted Nov 25, 2025
Patent 12478351
ULTRASOUND DEVICE WITH ELEVATIONAL BEAMFORMING
2y 5m to grant; granted Nov 25, 2025
Patent 12446863
METHODS AND DEVICES FOR SPLICING ULTRASOUND SIGNAL
2y 5m to grant; granted Oct 21, 2025
Patent 12440192
PATIENT INTERFACE MODULE (PIM) POWERED WITH WIRELESS CHARGING SYSTEM AND COMMUNICATING WITH SENSING DEVICE AND PROCESSING SYSTEM
2y 5m to grant; granted Oct 14, 2025
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 82%
With Interview: 99% (+22.7%)
Median Time to Grant: 2y 11m
PTA Risk: Low
Based on 262 resolved cases by this examiner; grant probability derived from career allow rate.
