Prosecution Insights
Last updated: April 19, 2026
Application No. 18/701,806

AI-ASSISTED CLINICIAN CONTOUR REVIEWING AND REVISION

Non-Final OA: §102, §103
Filed: Apr 16, 2024
Examiner: POTTS, RYAN PATRICK
Art Unit: 2672
Tech Center: 2600 — Communications
Assignee: The Board Of Regents Of The University Of Texas System
OA Round: 1 (Non-Final)
Grant Probability: 80% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 2m
With Interview: 99%

Examiner Intelligence

Career allow rate: 80% (above average, +18.4% vs TC avg), 189 granted / 235 resolved
Interview lift: +36.8% (strong), based on resolved cases with interview
Typical timeline: 3y 2m average prosecution, 29 applications currently pending
Career history: 264 total applications across all art units
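
As a quick sanity check, the headline allow rate is just the granted count divided by the resolved count shown above (a small Python snippet; the rounding to 80% is the dashboard's):

```python
# Career allow rate from the counts shown above (189 granted out of 235 resolved).
granted, resolved = 189, 235
allow_rate = granted / resolved
print(f"career allow rate: {allow_rate:.1%}")   # 80.4%, shown as 80% in the header
```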

Statute-Specific Performance

§101: 9.8% (-30.2% vs TC avg)
§103: 39.2% (-0.8% vs TC avg)
§102: 20.6% (-19.4% vs TC avg)
§112: 27.9% (-12.1% vs TC avg)
Tech Center average (estimate) shown for comparison • Based on career data from 235 resolved cases

Office Action

§102 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claims 7 and 30 are objected to because of the following informalities: “one or more of the one or more” should be changed to “the one or more” for clarity. Appropriate correction is required.

Claims 11, 14, 15, 17, 20, 34, 37, 38, 40 and 43 are objected to because of the following informalities: “the at least the portion” should be changed to “the portion” for clarity. Appropriate correction is required.

Claims 18, 41 and 43 are objected to because of the following informalities: “receiving input from to the GUI the user” (claim 18), “receive input from to the GUI the user” (claim 41) and “receive input from to the GUI” (claim 43) should be changed to “receiving input to the GUI from the user” (claim 18), “receive input to the GUI from the user” (claim 41) and “receive input to the GUI from the user” (claim 43) for clarity. Appropriate correction is required.

Claim 24 is objected to because of the following informalities: “upon a user determining that the initial contour requires revision, generate a revised contour by: receiving a first input from a user to the GUI to indicate a first point of revision” should be changed to “upon a user determining that the initial contour requires revision, generate a revised contour by: receiving a first input from the user to the GUI to indicate a first point of revision” for clarity and to avoid a rejection of claim 37 under 35 U.S.C. 112(b) for “the user” lacking antecedent basis (emphasis added). Appropriate correction is required.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-3, 5-11, 14, 15, 17-26, 28-34, 37, 38 and 40-46 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by U.S. Pat. Appl. Pub. No. 2020/0167930 to Wang et al. (hereinafter “Wang”).

Regarding claim 1, Wang teaches a method for computer-assisted contour revision in medical image segmentation (Wang, par. 184, “DeeplGeoS provides a deep learning-based interactive framework for medical image segmentation.”), comprising: selecting an image slice (The same image slice is selected at each step of the method in Fig. 1 until the user accepts the segmentation, e.g., to receive indications at step 320, pixels of the input image are selected. In one instance, the selected image is the input image selected in step 100 or step 300. See Wang at par. 81, “At step 100 an input image is provided. The input image may be a two-dimensional (spatial) image ...
The image may also be a three dimensional spatial image, for example, presented as a series of two dimensional image planes or slices.”) from one or more medical images of a patient (Wang, par. 85, “At step 130 the user checks the initial segmentation and either accepts the segmentation as correct, or proceeds to step 140 where the user provides some interactions (indications), e.g. clicks or scribbles, with the system to indicate mis-segmented regions.”), the image slice comprising an initial contour of a target anatomical structure in the one or more medical images (Wang, par. 84, “At step 110 a first (initial or proposal) segmentation network (P-Net hereafter) takes as input the provided image with CI channels and automatically produces (proposes) an initial, or first, segmentation, which is presented to the user at step 120.”); displaying (The initial contour is displayed in a GUI that is viewed by a user, who then provides mouse clicks as input to modify the contour. See Wang at par. 85, “clicks or scribbles”) at least a portion of the image slice (e.g., foreground) and the initial contour on a graphical user interface (GUI) (Wang, par. 135, “A Matlab graphical user interface (GUI) was developed for user interactions”; par. 170, “A PyQt GUI graphical user interface (GUI) was developed for user interactions”); and upon determining that the initial contour requires revision (Wang, Fig. 4, “Receiving, from a user, one or more indications 320”), generating a revised contour by: receiving a first input from a user to the GUI to indicate a first point of revision (Wang, par. 85, “step 140 where the user provides some interactions (indications), e.g. clicks or scribbles, with the system to indicate mis-segmented regions.”), inputting the one or more medical images, the first input, and the initial contour into a trained deep neural network that automatically extracts learned image characteristics (Wang, par. 86, “At step 150 a second (refinement) segmentation network (R-Net hereafter) uses the information of the original input image (as provided at step 100), the initial segmentation (as proposed at step 120) and the user interactions (as received at step 140) to provide a refined segmentation.”), processing the extracted learned image characteristics using one or more deep-learning segmentation algorithms of the trained deep neural network (Wang, par. 75, “The image-specific fine-tuning improves segmentation accuracy”), and automatically generating the revised contour using the processed extracted learned image characteristics (Wang, par. 189, “During a test stage, the bounding box is provided by the user, and the segmentation and the CNN are refined through unsupervised (no further user interactions) or supervised (with user-provided scribbles) image-specific fine-tuning.”). Regarding claim 2, Wang teaches the method of claim 1, wherein the one or more medical images comprise three-dimensional (3D) images (Wang, par. 81, “The image may also be a three dimensional spatial image, for example, presented as a series of two dimensional image planes or slices”; par. 169, “FIG. 18 shows an example structure for a first 3D segmentation network, 3D P-Net, which can be regarded as an extension of, and analogous to, the 2D P-Net 110 described above (see for example FIG. 1).”; The segmentation method of Fig. 1 applies to both 2D and 3D input images.). Regarding claim 3, Wang teaches the method of claim 1, wherein the image slice comprises a two-dimensional (2D) image (Wang, par. 115, “a 2D image”). 
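
For orientation, the loop that this mapping reads onto Wang's Fig. 1 can be sketched in a few lines of Python. This is a hypothetical illustration only: the two stand-in functions are trivial placeholders, not Wang's P-Net/R-Net or the claimed system, and the array size and click coordinates are invented.

```python
# Hypothetical sketch of the Fig. 1-style loop cited in the rejection; the two
# "networks" below are trivial stand-ins, not Wang's P-Net/R-Net or the claimed system.
import numpy as np

def propose_initial_contour(image: np.ndarray) -> np.ndarray:
    """Stand-in for the first (proposal) segmentation network: returns a binary mask."""
    return (image > image.mean()).astype(np.uint8)

def refine_contour(image: np.ndarray, mask: np.ndarray, clicks) -> np.ndarray:
    """Stand-in for the refinement network: consumes the image, the current mask and
    the user indications, and returns a revised mask (here, clicks force foreground)."""
    revised = mask.copy()
    for r, c in clicks:
        revised[r, c] = 1
    return revised

image_slice = np.random.rand(128, 128)                 # a selected 2D slice
mask = propose_initial_contour(image_slice)            # steps 110/120: propose and display
user_clicks = [(40, 60)]                               # step 140: user marks a mis-segmented point
mask = refine_contour(image_slice, mask, user_clicks)  # steps 150/160: revised contour
# The review -> click -> refine cycle repeats until the user accepts (step 170).
```
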
Regarding claim 5, Wang teaches the method of claim 1, further comprising: inputting the one or more medical images into the trained deep neural network (Wang, par. 114, “P-Net”); and automatically generating the initial contour using the one or more deep-learning segmentation algorithms to process the extracted learned image characteristics (P-Net is initially trained to process extracted image characteristics (features). The characteristics are used to provide the initial contour to the user. Based on the user’s input, R-Net is used to refine P-Net and the segmented boundary (contour) of the foreground object using the new extracted characteristics. See Wang at par. 189, “During a training stage, the CNN is trained to segment particular objects within a bounding box.”). Regarding claim 6, Wang teaches the method of claim 5, further comprising: automatically generating one or more of an uncertainty value and a quality value for the initial contour (Wang, par. 194, “pixels with low confidence or close to a user indication may mislead the update of P-Net. Thus, we penalize such pixels and set the loss function”). Regarding claim 7, Wang teaches the method of claim 6, wherein the selecting the image slice (Wang, par. 81, “a series of two dimensional image planes or slices.”) is done automatically based on, at least, one or more of the uncertainty value (The same image slice is selected at each step of the method in Fig. 1 until the user accepts the segmentation, e.g., to receive indications at step 320, pixels of the input image are selected. If the user does not accept the second segmentation, i.e., step 130 or 350, then steps 320, 330, and 340 are repeated, each of which includes a selection of the slice image (input image) by obtaining values therefrom.). Regarding claim 8, Wang teaches the method of claim 6, further comprising: displaying the one or more of the uncertainty value and the quality value on the GUI (Wang, par. 248, “FIG. 40b showing the voxel-level segmentation accuracy by thresholding the uncertainties (the shaded area in FIG. 40b represents the standard errors.”; par. 250, “It is clear from FIG. 41 that the uncertainties near the boundaries of different structures are relatively higher than the other regions. Note that displaying the segmentation uncertainty can be useful to a user, for example, in terms of the most effective locations for providing manual indications (e.g. clicks or scribbles) of the correct segmentation.”). Regarding claim 9, Wang teaches the method of claim 1, further comprising: displaying the one or more medical images on the GUI (Wang at pars. 84 and 86, “an initial, or first segmentation ... is presented to the user at step 120” and “[at] step 160 the second (refined) segmentation is presented to the user, and the method then returns to step 130.”; See also claim 4.); and receiving user input to the GUI to generate the initial contour (Wang, par. 46, “FIG. 26 shows segmentation of the placenta, fetal brain, fetal lungs and maternal kidneys with a user-provided bounding box obtained using different segmentation systems, including that shown in FIG. 24.”; par. 106, “The method starts at step 300 with an input image being provided. In some cases, input image may be a region of a larger image, with the user selecting the relevant region, such as by using a bounding box to denote the region of the larger image to use as the input image.”). 
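
Claims 6-8 add an uncertainty or quality value for the initial contour. The quoted passages of Wang mention penalising low-confidence pixels and thresholding uncertainties but give no formula here, so the sketch below uses per-pixel binary entropy purely as an assumed, illustrative proxy.

```python
# Assumed illustration only: per-pixel binary entropy as an uncertainty proxy and a
# simple slice-level quality score; Wang's actual uncertainty/loss terms may differ.
import numpy as np

def pixel_uncertainty(p_foreground: np.ndarray) -> np.ndarray:
    """Binary entropy of the per-pixel foreground probability (1.0 = most uncertain)."""
    p = np.clip(p_foreground, 1e-6, 1 - 1e-6)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

p_fg = np.random.rand(128, 128)               # hypothetical network output for one slice
uncertainty_map = pixel_uncertainty(p_fg)     # values in [0, 1]
quality_value = 1.0 - uncertainty_map.mean()  # one possible scalar quality value
needs_review = uncertainty_map > 0.9          # thresholding, cf. Wang par. 248
```
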
Regarding claim 10, Wang teaches the method of claim 1, wherein the selecting the image slice is done manually via user input to the GUI (Wang, par. 106, “In some cases, input image may be a region of a larger image, with the user selecting the relevant region, such as by using a bounding box to denote the region of the larger image to use as the input image.”). Regarding claim 11, Wang teaches the method of claim 1, wherein the first input comprises one or more of a single mouse click (Wang, par. 85, “step 140 where the user provides some interactions (indications), e.g. clicks or scribbles, with the system to indicate mis-segmented regions.”) and a touch input to the GUI on a selected point of the at least the portion of the image slice (Wang, par. 95, “The indications may be entered by using a mouse, touch-screen, or other suitable input device.”). Regarding claim 14, Wang teaches the method of claim 1, further comprising: displaying the at least the portion of the image slice and the revised contour (Wang, par. 109, “At step 340 a second segmentation is generated using a second machine learning system”) on the GUI (The method of Fig. 4 is described in the context of a user interaction. The interaction is through the Matlab GUI. See Wang at par. 135); and upon determining that the revised contour requires further revision (Wang, Fig. 1, step 130: no), generating a second revised contour by: receiving a second input from the user to the GUI to indicate a second point of revision (A second performance of step 140. See Wang at Fig. 1), inputting the one or more medical images (Wang, par. 109, “The second machine learning system receives as input the original input image”), the first input (Each input (click) is represented as a distance map. See Wang at pars. 97-98, “It will now be described how the indications are used to produce a refined, more accurate segmentation from the indications ... each indication corresponds to a particular segment, and the indication denotes an image location (or locations) that the user specifies as belonging to that segment ... In the distance map, the value at each image location corresponds to the distance between a respective pixel of the image at that location, and the closest user indication of that label type.”), the second input (A second input (click) is represented by a second distance map. As new distance maps are created from new mouse clicks, the distance maps are concatenated together, thereby refining the contours along the boundary of the segmented object. See Wang at par. 109, “each of the geodesic distance maps corresponding to respective segments ... may all be combined together using channel concatenation, and then generates the refined segmentation as output.”), and the revised contour (Wang, par. 86, “At step 150 a second (refinement) segmentation network (R-Net hereafter) uses ... the user interactions (as received at step 140) to provide a refined segmentation.”) into the trained deep neural network that automatically extracts learned image characteristics (Wang, par. 
86, “At step 150 a second (refinement) segmentation network (R-Net hereafter) uses the information of the original input image (as provided at step 100), the initial segmentation (as proposed at step 120) and the user interactions (as received at step 140) to provide a refined segmentation.”), processing the extracted learned image characteristics using the one or more deep-learning segmentation algorithms of the trained deep neural network (Based on the user’s input at each cycle of steps 130, 140, 150 and 160, R-Net is used to refine P-Net using the new extracted characteristics. See Wang at par. 133, “After training of the P-Net with CRF-Net(f), user interactions were automatically simulated to train R-Net with CRF-Net(fu) (manual training could also be used instead of or in conjunction with the automatic training).”), and automatically generating the second revised contour using the processed extracted learned image characteristics (Wang, par. 86, “At step 160 the second (refined) segmentation is presented to the user, and the method then returns to step 130. At this point, the user either accepts the refined segmentation or provides more interactions to refine the result at least one more time through R-Net, in effect, having another cycle (iteration) through steps 140, 150 and 160 to 130. Once the user accepts the refined segmentation at step 130, the method proceeds to step 170 which finalises the segmentation.”). Regarding claim 15, Wang teaches the method of claim 14, wherein the second input comprises one or more of a single mouse click and a touch input to the GUI on a selected point of the at least the portion of the image slice (Wang, par. 95, “Each indication denotes (defines) one or more locations (pixels) within the image that belong to a particular (corresponding) segment. The indications may be entered by using a mouse, touch-screen, or other suitable input device. In some cases, the indications might be provided by clicking on the relevant locations, the locations of the clicks then being suitably displayed on the segmented image, e.g. by crosses or points (coloured as appropriate).”). Regarding claim 17, Wang teaches the method of claim 14, further comprising: displaying the at least the portion of the image slice and the second revised contour on the GUI (Wang, par. 86, “At step 160 the second (refined) segmentation is presented to the user”). Regarding claim 18, Wang teaches the method of claim 17, further comprising: receiving input [from the] GUI the user accepting the second revised contour (Wang, par. 86, “Once the user accepts the refined segmentation at step 130, the method proceeds to step 170 which finalises the segmentation.”). Regarding claim 19, Wang teaches the method of claim 17, further comprising: upon determining that the second revised contour requires further revision, repeating the generating and displaying steps (Wang, par. 86, “At step 160 the second (refined) segmentation is presented to the user, and the method then returns to step 130. At this point, the user either accepts the refined segmentation or provides more interactions to refine the result at least one more time through R-Net, in effect, having another cycle (iteration) through steps 140, 150 and 160 to 130.”). Regarding claim 20, Wang teaches the method of claim 1, further comprising: displaying the at least the portion of the at least the portion of the image slice and the revised contour on the GUI (Wang, par. 
86, “At step 160 the second (refined) segmentation is presented to the user”); and receiving input to the GUI accepting the revised contour (Wang, par. 86, “Once the user accepts the refined segmentation at step 130, the method proceeds to step 170 which finalises the segmentation.”).

Regarding claim 21, Wang teaches the method of claim 1, further comprising: propagating one or more additional initial contours in one or more additional image slices using the one or more deep-learning segmentation algorithms based on the revised contour (The method of Fig. 1 of Wang repeats continuously until the user accepts the contour.).

Regarding claim 22, Wang teaches the method of claim 1, further comprising: updating the one or more deep-learning segmentation algorithms based the generating the revised contour (Wang, par. 86, “At step 150 a second (refinement) segmentation network (R-Net hereafter) uses the information of the original input image (as provided at step 100), the initial segmentation (as proposed at step 120) and the user interactions (as received at step 140) to provide a refined segmentation.”).

Regarding claim 23, Wang teaches the method of claim 22, wherein the updating is done at predetermined time intervals (Wang, par. 133, “user interactions were automatically simulated to train R-Net with CRF-Net(fu)”).

Claims 24-26, 28-34, 37, 38 and 40-46 substantially correspond to claims 1-3, 5-11, 14, 15 and 17-23 by reciting a system for computer-assisted contour revision in medical image segmentation, comprising: a processor (Wang, par. 170, “The training process was performed using a system with two 8-core E5-2623v3 Intel processors having a Haswell architecture, two K80 NVIDIA GPUs, and 128 GB memory. The testing process with user interactions was performed on a MacBook Pro (OSX 10.9.5) with 16 GB RAM and an Intel Core i7 CPU running at 2.5 GHz and an NVIDIA GeForce GT 750M GPU.”); and a memory (Wang, par. 170, “The training process was performed using a system with ...128 GB memory. The testing process with user interactions was performed on a MacBook Pro (OSX 10.9.5) with 16 GB RAM”) operatively coupled to the processor and configured to store computer-readable instructions (Wang, par. 288, “The computer system 710 includes at least one processor 740 and memory 750 for storing program instructions for execution by the processor 740.”) that, when executed by the processor, cause the processor to implement the methods of claims 1-3, 5-11, 14, 15 and 17-23.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4.
Considering objective evidence present in the application indicating obviousness or nonobviousness. This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed inventions absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention. Claims 4 and 27 are rejected under 35 U.S.C. 103 as being unpatentable over Wang in view of U.S. Pat. Appl. Pub. No. 20130272587 to Fang et al. (hereinafter “Fang”). Regarding claim 4, Wang teaches the method of claim 1 (Wang, par. 289, “may be implemented as a distributed system, for example, with some processing performed locally, and other processing performed remotely (e.g. in the cloud).”), but does not teach that which is explicitly taught by Fang. Fang teaches wherein the initial contour is generated by an external system (Fang, par. 6, “The processing hardware of the remote computer or server segments the reconstructed image according to an interactive image segmentation algorithm. Upon completion of the segmentation procedure, the results are transmitted back to the mobile device via either the same communication channel or through a different communication channel. The received segmentation results may then be refined at a mobile device based on an initial segment transmitted by the remote computer or server. The final, refined segmentation image may, in turn, then be displayed or provided to a user via a display or output of the mobile computing device. For example, the final refined segmentation image may be displayed via a capacitive touchscreen integral to a smartphone.”). Wang discloses interactive segmentation of a medical image in a distributed computing system. See Wang at par. 289. Thus, Wang shows that it was known in the art before the effective filing date of the claimed invention to divide the interactive segmentation algorithm between local and remote computing resources, which is analogous to the claimed invention in that it is pertinent to the problem being solved by the claimed invention, reducing the amount of labor needed to produce an accurate segmentation of an image object. Fang discloses interactive segmentation in a distributed computing system where the segmentation results are computed using hardware of a remote computer and the results are transmitted to a user’s mobile device where they may be refined. Thus, Fang shows that it was known in the art before the effective filing date of the claimed invention to generate segmentation results using an external system, which is analogous to the claimed invention in that it is pertinent to the problem being solved by the claimed invention, reducing the amount of labor needed to produce an accurate segmentation of an image object. 
A person of ordinary skill in the art would have been motivated to modify the distributed computing system disclosed by Wang to have the specific allocation disclosed by Fang to thereby apply the trained deep learning segmentation model using an external computer or set of computers (e.g., the cloud) and present the results to a mobile device operated by a user where further inputs may be provided to refine the segmentation results. Based on the foregoing, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have made such modification according to known methods to yield the predictable results to have the benefit of reducing the amount of computational resources required of a user. Claim 27 substantially corresponds to claim 4 by reciting a system for computer-assisted contour revision in medical image segmentation, comprising: a processor (Wang, par. 170, “two 8-core E5-2623v3 Intel processors ... and an Intel Core i7 CPU running at 2.5 GHz”); and a memory (Wang, par. 170, “128 GB memory ... 16 GB RAM”) operatively coupled to the processor and configured to store computer-readable instructions (Wang, par. 288, “The computer system 710 includes at least one processor 740 and memory 750 for storing program instructions for execution by the processor 740.”) that, when executed by the processor, cause the processor to implement the method of claim 4. The rationale for obviousness is the same as provided for claim 4. Claims 12, 13, 16, 35, 36 and 39 are rejected under 35 U.S.C. 103 as being unpatentable over Wang in view of Iteratively Trained Interactive Segmentation to Mahadevan et al. (hereinafter “Mahadevan”). Regarding claim 12, Wang teaches the method of claim 11, further comprising: converting the one or more of the single mouse click and touch input into a distance map (See Wang at par. 98), but does not teach that which is explicitly taught by Mahadevan. Mahadevan teaches converting a single mouse click into a 2D image by placing a 2D Gaussian point around the selected point (Mahadevan, pg. 6, section 4.2, “The clicks are encoded as Gaussians with a standard deviation of 10 pixels that are centred on each click.”). Wang discloses interactive segmentation of a medical image where a user provides input via a mouse click or touch input to indicate a region of the image requiring refinement of segmented boundaries generated by a trained deep learning model. Thus, Wang shows that it was known in the art before the effective filing date of the claimed invention to repeatedly refine an object contour using mouse clicks as an indication of an image area where the user intends to refine a contour, which is analogous to the claimed invention in that it is pertinent to the problem being solved by the claimed invention, reducing the amount of labor needed to produce an accurate segmentation of an image object. Mahadevan discloses a deep learning based interactive object segmentation algorithm by encoding clicks as Gaussians centered on each click at a radius of 10 pixels and additively combining clicks by generating subsequent clicks with respect to the mask predicted by a network at the previous iteration. 
Thus, Mahadevan shows that it was known in the art before the effective filing date of the claimed invention to use Gaussians instead of distances to represent the user’s intention from a mouse click, which is analogous to the claimed invention in that it is pertinent to the problem being solved by the claimed invention, reducing the amount of labor needed to produce an accurate segmentation of an image object. A person of ordinary skill in the art would have been motivated to replace the distance map disclosed by Wang with a Gaussian centered around a pixel (mouse click or touch input) at a radius of 10 pixels as disclosed by Mahadevan to thereby represent the user’s indication (mouse click or touch input) as said Gaussian and iteratively refine the segmentation by providing a new input Gaussian centered at a radius of 10 pixels around the selected pixel. Based on the foregoing, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have made such modification according to known methods to yield the predictable results to have the benefit of reducing the number of clicks as suggested by Mahadevan. Regarding claim 13, Wang in view of Mahadevan teaches the method of claim 12, wherein the 2D Gaussian point has a radius of approximately 10 pixels (Mahadevan, pg. 6, section 4.2, “The clicks are encoded as Gaussians with a standard deviation of 10 pixels that are centred on each click.”). The rationale for obviousness is the same as provided for claim 12. Regarding claim 16, Wang teaches the method of claim 15, further comprising: converting the one or more of the single mouse click and touch input into a distance map (See Wang at par. 98), but does not teach that which is explicitly taught by Mahadevan. Mahadevan teaches converting a single mouse click into a 2D image by placing a 2D Gaussian point around the selected point (Mahadevan, pg. 6, section 4.2, “The clicks are encoded as Gaussians with a standard deviation of 10 pixels that are centred on each click.”). The rationale for obviousness is the same as provided for claim 12. Claims 35, 36 and 39 substantially correspond to claims 12, 13 and 16 by reciting a system for computer-assisted contour revision in medical image segmentation, comprising: a processor (Wang, par. 170, “two 8-core E5-2623v3 Intel processors ... and an Intel Core i7 CPU running at 2.5 GHz”); and a memory (Wang, par. 170, “128 GB memory ... 16 GB RAM”) operatively coupled to the processor and configured to store computer-readable instructions (Wang, par. 288, “The computer system 710 includes at least one processor 740 and memory 750 for storing program instructions for execution by the processor 740.”) that, when executed by the processor, cause the processor to implement the methods of claims 12, 13 and 16. The rationale for obviousness is the same as provided for claim 12. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: DeeplGeoS: A Deep Interactive Geodesic Framework for Medical Image Segmentation and Interactive Medical Image Segmentation Using Deep Learning With Image-Specific Fine Tuning are considered pertinent because they are authored by Guotai Wang, i.e., Wang (US20200167930A1), and provide supplemental material to better understand Wang’s DeeplGeoS and the state of the art at the time Wang’s application was effectively filed. Reviving Iterative Training with Mask Guidance for Interactive Segmentation to Sofiiuk et al. 
discloses “Encoding the clicks via a distance transform from Xu et al. [1] is the most common approach to clicks encoding. However, they can also be represented by gaussians or disks with a fixed radius.” (pg. 4). This is pertinent because the specification describes embodiments that use gaussians to expand the area selected by a mouse click and Sofiiuk et al. disclose that distance transforms and gaussians are two alternatives of expanding the area selected by a mouse click in the context of interactive segmentation. GB2592693A is pertinent to “receiving user input to the GUI to generate the initial contour” in claim 9 for similarly reciting in the first paragraph of page 23, “[the] user selected initial image slice 601 has an associated contour 604 describing a structure of interest. The user target image slice 603 is the image slice the user wants to contour and has no associated contour (illustrated by 606).” Scale-Aware Multi-Level Guidance for Interactive Instance Segmentation is pertinent because it describes a benefit of using gaussians for guidance instead of distance: “One observation made in [1, 11, 12] was that encoding the clicks as Gaussians led to some performance improvement because it localizes the clicks better [11] and can encode both positive and negative click in a single channel [1]” (pg. 3). Any inquiry concerning this communication or earlier communications from the examiner should be directed to RYAN P POTTS whose telephone number is (571)272-6351. The examiner can normally be reached M-F, 9am-5pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sumati Lefkowitz can be reached at 571-272-3638. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /RYAN P POTTS/Examiner, Art Unit 2672 /SUMATI LEFKOWITZ/Supervisory Patent Examiner, Art Unit 2672
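
The §103 rejections of claims 12, 13 and 16 turn on substituting Wang's distance-map encoding of a user click with Mahadevan's Gaussian encoding (standard deviation of 10 pixels). A minimal sketch of the two encodings follows; it uses plain Euclidean distance for brevity, whereas Wang's quoted approach uses geodesic distance maps, and the image size and click locations are invented for illustration.

```python
# Illustrative comparison of the two click encodings discussed in the office action:
# a distance map (cf. Wang pars. 97-98, which uses geodesic rather than Euclidean
# distance) and a 2D Gaussian centred on each click (cf. Mahadevan sec. 4.2, sigma = 10 px).
import numpy as np

H, W = 128, 128
clicks = [(40, 60), (90, 30)]                 # example user indications (row, col)
rr, cc = np.mgrid[0:H, 0:W]

# Distance-map encoding: each pixel stores the distance to the nearest click.
distance_map = np.full((H, W), np.inf)
for r, c in clicks:
    distance_map = np.minimum(distance_map, np.hypot(rr - r, cc - c))

# Gaussian encoding: each click becomes a 2D Gaussian bump; overlapping bumps keep the max.
sigma = 10.0
gaussian_map = np.zeros((H, W))
for r, c in clicks:
    gaussian_map = np.maximum(
        gaussian_map,
        np.exp(-((rr - r) ** 2 + (cc - c) ** 2) / (2 * sigma ** 2)),
    )
```

Either map serves as an extra guidance channel alongside the image and the current segmentation, which is why the Sofiiuk and scale-aware guidance references cited above treat distance transforms, Gaussians, and fixed-radius disks as alternative click encodings.
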

Prosecution Timeline

Apr 16, 2024: Application Filed
Feb 06, 2026: Non-Final Rejection under §102 and §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591966
METHOD AND APPARATUS FOR ANALYZING BLOOD VESSEL BASED ON MACHINE LEARNING MODEL
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12560734
METHOD AND SYSTEM FOR PROCESSING SEISMIC IMAGES TO OBTAIN A REFERENCE RGT SURFACE OF A GEOLOGICAL FORMATION
Granted Feb 24, 2026 (2y 5m to grant)

Patent 12555259
PRODUCT IDENTIFICATION APPARATUS, PRODUCT IDENTIFICATION METHOD, AND NON-TRANSITORY COMPUTER-READABLE MEDIUM
Granted Feb 17, 2026 (2y 5m to grant)

Patent 12548658
Systems and Methods for Scalable Mapping of Brain Dynamics
Granted Feb 10, 2026 (2y 5m to grant)

Patent 12538743
WARPAGE AMOUNT ESTIMATION APPARATUS AND WARPAGE AMOUNT ESTIMATION METHOD
Granted Jan 27, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 80%
With Interview: 99% (+36.8%)
Median Time to Grant: 3y 2m
PTA Risk: Low
Based on 235 resolved cases by this examiner. Grant probability derived from career allow rate.
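
A minimal sketch of how these figures can be read together, assuming the with-interview figure is the allowance rate among this examiner's resolved cases that had an interview and the lift is the percentage-point gap versus cases without one; the back-solved without-interview rate is illustrative, not a reported number.

```python
# Assumed relationship between the projection figures; not the dashboard's actual model.
overall_grant_probability = 0.80   # career allow rate (189 / 235)
with_interview = 0.99              # reported allowance rate when an interview was held
lift_points = 36.8                 # reported interview lift, in percentage points
implied_without_interview = with_interview - lift_points / 100   # illustrative back-solve
print(f"implied without-interview rate: {implied_without_interview:.0%}")
```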
