Prosecution Insights
Last updated: April 19, 2026
Application No. 18/149,046

SYSTEMS AND METHODS FOR IMAGE PROCESSING

Status: Non-Final OA (§103)
Filed: Dec 30, 2022
Examiner: CROCKETT, JOSHUA BRIGHAM
Art Unit: 2661
Tech Center: 2600 — Communications
Assignee: Shanghai United Imaging Healthcare Co. Ltd.
OA Round: 3 (Non-Final)

Grant Probability: 72% (Favorable)
OA Rounds: 3-4
To Grant: 3y 0m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 72%, above average (13 granted / 18 resolved; +10.2% vs TC avg); see the worked example below
Interview Lift: +27.5% (strong), based on resolved cases with interview
Avg Prosecution: 3y 0m (typical timeline)
Career History: 44 total applications across all art units, 26 currently pending
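
To make the figures above easy to audit, here is a minimal sketch that recomputes them from the counts shown on the card. The "implied TC average" step assumes the "+10.2% vs TC avg" delta is a plain percentage-point difference; that assumption and the variable names are ours, not the product's.

```python
# Illustrative sketch only: recompute the headline rates above from the raw counts.
# Assumes the "+10.2% vs TC avg" delta is a simple percentage-point difference.

granted, resolved = 13, 18              # "13 granted / 18 resolved"
allow_rate = granted / resolved          # 0.722... shown on the card as 72%

tc_delta_points = 10.2                   # "+10.2% vs TC avg", in percentage points
implied_tc_avg = allow_rate * 100 - tc_delta_points   # ~62% baseline implied by the delta

print(f"Career allow rate: {allow_rate:.1%}")          # Career allow rate: 72.2%
print(f"Implied TC average: {implied_tc_avg:.1f}%")    # Implied TC average: 62.0%
```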

Statute-Specific Performance

§101: 6.0% (-34.0% vs TC avg)
§103: 47.5% (+7.5% vs TC avg)
§102: 10.1% (-29.9% vs TC avg)
§112: 35.1% (-4.9% vs TC avg)
Tech Center averages are estimates; based on career data from 18 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119 (a)-(d). The certified copy has been filed in parent Application No. 18/149,046 (the instant application), filed on 12/30/2022.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 16 December 2025 has been entered.

Response to Arguments

Claims 1, 5, 7-11, 15, and 17-20 have been amended. Claims 4 and 14 have been canceled. Claims 21 and 22 have been added. Claims 1-3, 5-13, and 15-22 are pending in this action.

Applicant’s arguments, see pg. 10-11, filed 16 December 2025, with respect to the rejection of claims 21-22 under 35 U.S.C. 112(a) have been fully considered and are persuasive. Specifically, the applicant presented evidence showing that claims 21 and 22 are supported by the disclosure of the specification. The rejection of claims 21 and 22 under 35 U.S.C. 112(a) has been withdrawn.

Applicant’s arguments, see pg. 11-12, filed 16 December 2025, with respect to the rejection of claim 8 under 35 U.S.C. 112(b) have been fully considered and are persuasive. Specifically, claim 8 has been canceled; therefore, the rejection is moot.

Applicant’s arguments, see pg. 12-19, filed 16 December 2025, with respect to the rejection of claims 1-3, 5-13, and 15-22 under 35 U.S.C. 103 have been fully considered and are persuasive. Specifically, the applicant argues that the applied prior art does not disclose expressly stage-specific analysis as reflected by the claim language “wherein each stage of the plurality of stages is one or a T stage, an N stage, or an M stage,” and argues that the applied prior art does not disclose expressly stage-correlated models as reflected by the claim language “a segmentation model corresponding to the stage”. The examiner agrees. Therefore, the rejection has been withdrawn.

However, upon further consideration, a new ground(s) of rejection is made in view of Liu Yinglong (CN 114334132 A; hereafter, Liu) and Rosenman et al. (US 20240054650 A1; hereafter, Rosenman).

Liu discloses: for each stage of a plurality of stages of a target disease ([0042] staging of cancer is considered. Fig. 1 and [0045], through the process of Liu, each stage of TNM staging is considered), wherein each stage of the plurality of stages is one or a T stage, an N stage, or an M stage ([0044]-[0045] the staging is T stage, N stage, and M stage), inputting a structural image of the subject into a segmentation model ([0046] the medical images are segmented using a segmentation model. [0048] the medical images may be CT image or MR images which are understood as a structural image) corresponding to the stage ([0043]-[0044], each stage among T-stage, N-stage, and M-stage are segmented. 
Therefore, a model performing that segmentation is understood as a model corresponding to the associated stage for each stage the model performs segmentation on), wherein the segmentation model is used for segmenting the one or more ROIs corresponding to the stage ([0046] the model segments each area including the primary tumor, regional lymph nodes, and organs at risk which are understood as according to the type of the ROI corresponding to the stage); Rosenman discloses: determining a type of one or more regions of interest (ROIs) corresponding to the stage (the examiner understands "determining a type" to include determining a region most likely to contain lesions of the relevant stage, see the applicant's specification [0086]. Rosenman [0029] and Fig. 3, "tumor location by anatomy tool 302-F" may be understood as determining the type by determining regions relevant to the stage. Each stage is considered as follows: Fig. 1 and [0042], the locator tool is used for tumor location, i.e. type, in step 106; Fig. 1 and [0067], the locator tool is used for N-stage location, i.e. type, in step 108; Fig. 1 and [0078] and [0080], the locator tool is used for M-stage location, i.e. type, in step 110);

The full rejection, including motivations to combine, is included below in the section “Claim Rejections - 35 USC § 103”. The new grounds of rejection are necessitated by the applicant’s amendments.

Claim Interpretation

Regarding the interpretation of the limitation in claim 1, “a segmentation model corresponding to the stage”, the applicant on pg. 14 of their remarks states, “The claimed invention, however, utilizes a dedicated segmentation model corresponding to each stage (i.e., one of the T stage, the N stage, or the M stage). This means distinct, stage-specific segmentation models are provided and used to segment the ROIs relevant to that particular stage. This architecture, where the segmentation logic itself is dynamically selected based on each stage,” emphasis added. This interpretation is that there is a plurality of segmentation models, one model associated with each stage of the plurality of stages. The examiner finds that this interpretation is more narrow than the broadest reasonable interpretation.

Referring to the applicant’s specification, para. [0095], the examiner found two embodiments related to the limitation, differentiated as follows by italicized text and underlined text, "In some embodiments, each stage of the target disease may correspond to a segmentation model. For example, for each stage, a segmentation model for obtaining a segmentation image of the one or more ROIs corresponding to the stage may be obtained. In some embodiments, two or more stages of the target disease may correspond to a same segmentation model. For example, for a lung cancer, a segmentation model for obtaining a segmentation image of the one or more ROIs corresponding to the TNM stages of the lung cancer may be obtained. A structural image of a patient having the lung cancer may be input into the segmentation model, and a segmentation image including the one or more ROIs of T stage, the one or more ROIs of N stage, and the one or more ROIs of M stage may be output by the segmentation model. Optionally, the structural image of the patient having the lung cancer and parameters relating to the T stage (e.g., text or numbers indicating the T stage) may be input into the segmentation model such that a T distribution image corresponding to the T stage may be obtained." 
see also [0096] and [0113] for further examples of the two embodiments. The embodiment shown by the italicized text supports the applicant’s interpretation, namely that of a plurality of segmentation models with each model corresponding to a stage. The embodiment shown by the underlined text is different in that all of the stages may be segmented by a single segmentation model. The act of segmenting a stage with a model may be said to cause that segmentation model to “correspond” to the stage. The examiner finds that both of these interpretations are valid interpretations under the current wording of the claims. If the applicant desires to narrow the interpretation of the claims to the interpretation they presented in their remarks, the examiner recommends amending the claims to recite segmenting using one of a plurality of segmentation models corresponding to the stage, the plurality of segmentation models corresponding to the stage comprising a model corresponding to T-stage, a model corresponding to N-stage, and a model corresponding to M-stage. The examiner leaves the details of the specific word choice of such a possible amendment to the applicant as the word choice would have significant impact on the scope of the claim.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 7, 11, 17, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Anand et al. (U.S. Publ. No. 20210093249; hereafter, Anand) in view of Liu Yinglong (CN 114334132 A; hereafter, Liu) in further view of Rosenman et al. (US 20240054650 A1; hereafter, Rosenman).

Regarding claim 1, Anand discloses: A method for image processing, implemented on a computing device having at least one storage device storing a set of instructions, and at least one processor in communication with the at least one storage device ([0109] and Fig. 9, the computing device includes a processor, memory, and storage device. [0111] the storage device may store instructions that when executed by the processor perform the method), the method comprising: generating a first distribution image indicating the distribution of the one or more ROIs in a subject ([0086] an anatomical image of the whole body may be segmented to produce plural VOIs, i.e. regions of interest. The whole body is understood to reveal a distribution of one or more regions of interest. 
Note, the wording “corresponding to the stage” will be taught later in combination with the other references) by inputting a structural image of the subject into a segmentation model ([0086] the image is segmented by inputting it into a deep learning neural network which may be understood as a model) generating a second distribution image indicating the distribution of the one or more ROIs in the subject by processing the functional image based on the first distribution image ([0088] the VOIs in the structural image are mapped to a functional image of the same volume, i.e. based on the first distribution image. Fig. 2B see item 226 as a second distribution image by processing the functional image 222 and the first distribution image 224. Note, the wording “corresponding to the stage” will be taught later in combination with the other references); generating a lesion detection result of the subject ([0089] disease (prostate cancer) status of the subject is determined) based on the second distribution image ([0088] the VOIs in the structural image are mapped to a functional image of the same volume, i.e. functional image is the second distribution image. Note, the wording “corresponding to the stage” will be taught later in combination with the other references); and displaying, through a display device ([0057] the apparatus includes a display), wherein the determining a type of one or more ROIs includes: obtaining a staging criterion relating to the target disease ([0101] "Clinical T stage refers to a standardized code for reporting the progression of prostate cancer. Clinical T stage may also be referred to as TNM staging for Tumor, Nodule, Metastasis. TNM staging is described in further detail at https://www.cancer.gov/about-cancer/diagnosis-staging/staging." Therefore, the staging criterion was obtained from an outside source. A PDF of the webpage cited by Anand is provided as NPL with this action. Note, the wording “corresponding to the stage” will be taught later in combination with the other references); and determining the type of the one or more ROIs based on the staging criterion ([0092] " the output assigns a subject's prostate cancer to one of several classes . . . For example, three classes—no metastases, N-stage, and M-stage can be used." No metastases is understood as a T-stage. See also [0035] and [0036] for use of the TNM code as examples of determining type from a criterion), wherein the staging criterion includes a TNM staging criterion ([0092] " the output assigns a subject's prostate cancer to one of several classes . . . For example, three classes—no metastases, N-stage, and M-stage can be used." No metastases is understood as the T-stage), the type of the one or more ROIs includes at least one of: a local region corresponding to T stage ([0092] " the output assigns a subject's prostate cancer to one of several classes . . . For example, three classes—no metastases,", no metastases is understood as a T stage), an adjacent region corresponding to N stage ([0092] "a more detailed classification of metastases can performed that differentiates between N-stage (indicating metastases to lymph nodes)"), or a distant region corresponding to M stage ([0092] "For example, a more detailed classification of metastases can performed that differentiates between . . . M-stage (indicating metastases in regions other than the lymph nodes) metastases."). 
Anand does not disclose expressly for each stage of a plurality of stages of a target disease, wherein each stage is one of a TNM stage, generating a first distribution image corresponding to the stage, inputting a structural image into a segmentation model corresponding to the stage, and generating a lesion detection result based on the second distribution image corresponding to the stage. Liu discloses: for each stage of a plurality of stages of a target disease ([0042] staging of cancer is considered. Fig. 1 and [0045], through the process of Liu, each stage of TNM staging is considered), wherein each stage of the plurality of stages is one or a T stage, an N stage, or an M stage ([0044]-[0045] the staging is T stage, N stage, and M stage), generating a first distribution image indicating the distribution of the one or more ROIs corresponding to the stage in a subject ([0043]-[0044] S100 and S200, medical images are segmented for each stage including T-stage, N-stage, M-stage, see also [0046]. The segmentation is understood as a first distribution image as it indicates the location ROIs corresponding to the stage) by inputting a structural image of the subject into a segmentation model ([0046] the medical images are segmented using a segmentation model. [0048] the medical images may be CT image or MR images which are understood as a structural image) corresponding to the stage ([0043]-[0044], each stage among T-stage, N-stage, and M-stage are segmented. Therefore, a model performing that segmentation is understood as a model corresponding to the associated stage for each stage the model performs segmentation on), wherein the segmentation model is used for segmenting the one or more ROIs corresponding to the stage ([0046] the model segments each area including the primary tumor, regional lymph nodes, and organs at risk which are understood as according to the type of the ROI corresponding to the stage); generating a second distribution image indicating the distribution of the one or more ROIs corresponding to the stage in the subject by processing a functional image based on the first distribution image ([0048] the input medical image may include functional-structural images such as PET-CT and PET-MR. [0045] and Fig. 1, as the output of the process is TNM staging, if a functional image were input into the process it would be understood that the generation of a distribution image would be performed while considering each stage as it would with a CT or MR image. When considered in combination with the generation of the second distribution image of Anand, it is understood that Liu and Anand disclose this limitation. Anand determines the second distribution image by processing the functional image based on the first distribution image and Liu performs segmentation on a functional-structural image, which may be understood as a distribution image, with the segmentation corresponding to the stage, see above. Therefore, in combination, Anand and Liu disclose the limitation); generating a lesion detection result of the subject based on the second distribution image corresponding to the stage ([0045], the system predicts the TNM staging which is understood as a lesion detection result. [0047]-[0048], the input image may include PET-CT images or PET-MR images, which are understood as functional images. In the embodiments using PET-CT image or PET-MR images, the use of the functional image may be understood as a second distribution image corresponding to the stage. 
When in combination with Anand, this is understood to disclose the "corresponding to the stage" language) Liu is combinable with Anand because it is from the same field of endeavor of evaluating tumor lesions (Anand, [0002]; Liu, [0001]). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention to combine the analysis of each stage as taught by Liu with the invention of Anand. The motivation for doing so would have been that "It does not rely on physician assessment, avoids subjective assessment differences caused by experience differences, reduces the workload of physicians and the difficulty of staging, accelerates the diagnosis and treatment process for patients, provides physicians with quantitative staging results, and can assist physicians in their diagnostic work" (Liu, [0042]). Further, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention to combine the analysis corresponding to the stage for structural images and functional images of Liu with the invention of Anand. The motivation for doing so would have been that "the segmentation model receives three or more types of medical images {as listed in Liu [0048]}, thereby improving the segmentation accuracy of the segmentation model through multimodal medical images" (Liu, [0047]). Therefore, it would have been obvious to combine Liu with Anand. Anand in view of Liu does not disclose expressly to determine a type of one or more regions of interest corresponding to the stage and to display the lesion detection results of at least two stages of the plurality of stages. Rosenman discloses: determining a type of one or more regions of interest (ROIs) corresponding to the stage (the examiner understands "determining a type" to include determining a region most likely to contain lesions of the relevant stage, see the applicant's specification [0086]. Rosenman [0029] and Fig. 3, "tumor location by anatomy tool 302-F" may be understood as determining the type by determining regions relevant to the stage. Each stage is considered as follows: Fig. 1 and [0042], the locator tool is used for tumor location by anatomy, i.e. type, in step 106; Fig. 1 and [0067], the locator tool is used for N-stage location by anatomy, i.e. type, in step 108; Fig. 1 and [0078] and [0080], the locator tool is used for M-stage location by anatomy, i.e. type, in step 110); the lesion detection results of at least two stages of the plurality of stages in the functional image ([0084] displays each of the stages on a display device. As the language is inclusive, using the wording "and", it is understood to display results of all of the stages. [0031] the medical image may be a PET/CT image which is understood as a functional image. Therefore, a lesions detection result of at least two stages of the plurality of stages in a functional image are displayed. See also [0074] that lesions in a PET/CT image are annotated for evidence of displaying in a functional image), It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention to combine the determining the type of Rosenman with the invention of Anand in view of Liu. 
The motivation for doing so would have been "embodiments described herein ensure that 1) all patients are properly staged, limited only by the amount and quality of the input data, 2) staging inconsistencies between different physicians avoided, and 3) considerable amounts of time and effort for manually staging the patient are potentially saved" (Rosenman, [0027]). Therefore, it would have been obvious to combine Rosenman with Anand in view of Liu to obtain the invention as specified in claim 1. Regarding claim 7, Anand in view of Liu in further view of Rosenman discloses the subject matter of claim 1. Anand further discloses: wherein the generating a lesion detection result of the subject based on the second distribution image corresponding to the stage includes: obtaining a lesion detection model corresponding to the stage ([0090] "As shown in FIG. 2D, clinical data 266 can be used to train and test a module to perform a desired classification." Training a model is understood as obtaining a model); and generating the lesion detection result of the subject by performing, using the lesion detection model, lesion detection operation on the second distribution image of the one or more ROIs corresponding to the stage ([0090] "wherein prostate intensities 244a, 244b, 244c, 244d are used as input to a machine learning module 262 to perform a binary classification that assigns a cancer status of probably metastatic 264a or not 264b". Assigning a status is understood as generating a lesion detection result), wherein the lesion detection model is trained based on a training sample set ([0090] the model was trained on clinical data which is understood as a training sample set), the training sample set includes a plurality of sample distribution images of the one or more ROls corresponding to the stage ([0098] models were trained with images from patients which is understood as sample distribution images), and each sample distribution image of the plurality of sample distribution images includes at least one labeled abnormal point ([0098] the images had a known metastatic state which is understood as a label indicating the metastatic state, i.e. an abnormal point). Regarding claim 11, claim 11 recites a system with elements corresponding to the steps recited in claim 1. Therefore, the recited elements of this claim are mapped to the proposed combination in the same manner as the corresponding steps in its corresponding method claim, claim 1. Additionally, the rationale and motivation to combine Anand in view of Liu in further view of Rosenman presented in rejection of claim 1 apply to this claim. Finally, Anand further discloses: A system for imaging processing, comprising: at least one storage medium including a set of instructions; and at least one processor in communication with the at least one storage medium, wherein when executing the set of instructions, the at least one processor is directed to cause the system ([0109] and Fig. 9, the computing device includes a processor, memory, and storage device. [0111] the storage device may store instructions that when executed by the processor perform the method), Regarding claim 17, claim 17 recites a system with elements corresponding to the steps recited in claim 7. Therefore, the recited elements of this claim are mapped to the proposed combination in the same manner as the corresponding steps in its corresponding method claim, claim 7. 
Additionally, the rationale and motivation to combine Anand in view of Liu in further view of Rosenman presented in rejection of claim 7 apply to this claim. Regarding claim 20, claim 20 recites a system with elements corresponding to the steps recited in claim 1. Therefore, the recited elements of this claim are mapped to the proposed combination in the same manner as the corresponding steps in its corresponding method claim, claim 1. Additionally, the rationale and motivation to combine Anand in view of Liu in further view of Rosenman presented in rejection of claim 1 apply to this claim. Finally, Anand discloses: A non-transitory computer readable medium, comprising executable instructions that, when executed by at least one processor, direct the at least one processor to perform a method ([0111] the system includes a storage device which stores instructions for executing the method. The memory may be one of a plurality of non-transitory computer readable medium embodiments), Claims 5-6 and 15-16 are rejected under 35 U.S.C. 103 as being unpatentable over Anand et al. (U.S. Publ. No. 20210093249; hereafter, Anand) in view of Liu Yinglong (CN 114334132 A; hereafter, Liu) in further view of Rosenman et al. (US 20240054650 A1; hereafter, Rosenman) and of Yoshimasu et al. ("Fast Fourier transform analysis of pulmonary nodules on computed tomography images from patients with lung cancer" hereafter, Yoshimasu). Regarding claim 5, Anand in view of Liu in further view of Rosenman discloses the subject matter of claim 1. Anand in view of Liu in further view of Rosenman does not disclose expressly obtaining a lesion detection standard and generating the lesion detection result of the subject by performing a lesion detection operation. Yoshimasu discloses: wherein the generating a lesion detection result of the subject based on the second distribution image corresponding to the stage includes: obtaining a lesion detection standard corresponding to the stage (Pg. 4 col. 1 para. 2, a cut-off line is obtained to differentiate between two different classifications of lesions relating to stage. The cut-off line is understood as a standard corresponding to stage); and generating the lesion detection result of the subject by performing, based on the lesion detection standard, a lesion detection operation on the second distribution image of the one or more ROIs corresponding to the stage (Pg. 4 col. 1 para. 2 through pg. 4 col. 2 para 1, the cut-off line is used for lesion detection. "There were 69 tumors above and 3 patients below cut-off line in Group PL. There were 44 tumors below and 10 tumors above cut-off line in Group MT. Then this cut-off line provided a sensitivity of 95.8%, a specificity of 81.5%, and an accuracy of 89.7%." Pg. 2 section “Patients and Method”, details the procedure used to analyze the data which is understood as the lesion detection operation). Yoshimasu is combinable with Anand in view of Liu in further view of Rosenman because they are from the same field of endeavor of analysis of lesions or tumors (Yoshimasu, pg. 2 col. 1 para. 5, "In this study, we performed the quantitative analysis for the complexity of tumor outline of both primary lung cancer and metastatic lung tumor utilizing FFT analysis. And then we evaluated the usefulness and adequacy of our evaluation method"). 
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention to combine the lesion detection standard of Yoshimasu with the invention of Anand in view of Liu in further view of Rosenman. The motivation for doing so would have been that the standard finds the "highest sensitivity in each point of specificity" for lesion differentiation (Yoshimasu, pg. 4 col. 1 para. 2). Therefore, it would have been obvious to combine Yoshimasu with Anand in view of Liu in further view of Rosenman to obtain the invention as specified in claim 5. Regarding claim 6, Anand in view of Liu in further view of Rosenman and of Yoshimasu discloses the subject matter of claim 5. Anand in view of Liu in further view of Rosenman does not disclose expressly obtaining a reference image with at least one labeled lesion, for each reference image obtaining the frequency domain information, and determining the detection standard based on at least one labeled lesion and frequency domain information. Yoshimasu discloses: wherein the obtaining a lesion detection standard corresponding to the stage includes: obtaining at least one reference image of the one or more ROIs corresponding to the stage, each reference image of the at least one reference image including at least one labeled lesion (Pg. 2 col. 1 para. 6, "Sequential cases of 72 histologically proven primary lung cancers (Group PL) and 54 metastatic lung tumors (Group MT) were included in the study.” Pg. 2 col. 2 para. 2, “Chest CT was obtained as the thin-slice CT image. . . . The contour of each tumor was defined." The recording in the dataset of what kind of tumors are in each group and the definition of a contour around each tumor image is understood as labeling a reference image); for each reference image of the at least one reference image, obtaining frequency domain information of the reference image (Pg. 2 col. 2 para. 3, "The FFT analysis was performed using the wave analysis application (Igor Pro 6.03; WaveMetrics, Oregon, USA). Results of the FFT analysis were described as the spectrum of amplitude in each harmonics. Harmonics at the frequency of 2 to 359 times per cycle were calculated in each tumor (Fig. 1)."); and determining the lesion detection standard corresponding to the stage based on the at least one labeled lesion and the frequency domain information (Pg. 3 col. 2 para. 1 through pg. 4 col. 1 para. 1, " the complexity index (Cxi) that represents the complexity of tumor outline was defined as the sum of the amplitude of all harmonics." This shows dependence on the frequency domain information. Pg. 4 col. 1 para. 2, "Therefore, receiver operating characteristic (ROC) analysis for distinguishing primary lung cancers from metastatic lung tumors was performed adopting individual cut-off line that provided the highest sensitivity in each point of specificity." Therefore, the cut-off line, i.e. standard, was developed to distinguish or detect primary and metastatic lung cancers. The information of primary versus metastatic lung cancers is from the labels of the tumors in the data. Therefore, the standard is based on the labeled lesion and the frequency domain). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention to combine the lesion detection standard of Yoshimasu with the invention of Anand in view of Liu in further view of Rosenman. 
The motivation for doing so would have been that creating the standard this way enables the determination of a value that statistically differentiates between lesions corresponding to stage (Yoshimasu, pg. 3 col. 2 para. 1 through pg. 4 col. 1 para. 1, the Cxi value is a significant way to differentiate). Therefore, it would have been obvious to combine Yoshimasu with Anand in view of Liu in further view of Rosenman to obtain the invention as specified in claim 6. Regarding claim 15, claim 15 recites a system with elements corresponding to the steps recited in claim 5. Therefore, the recited elements of this claim are mapped to the proposed combination in the same manner as the corresponding steps in its corresponding method claim, claim 5. Additionally, the rationale and motivation to combine Anand in view of Liu in further view of Rosenman and of Yoshimasu presented in rejection of claim 5 apply to this claim. Regarding claim 16, claim 16 recites a system with elements corresponding to the steps recited in claim 6. Therefore, the recited elements of this claim are mapped to the proposed combination in the same manner as the corresponding steps in its corresponding method claim, claim 6. Additionally, the rationale and motivation to combine Anand in view of Liu in further view of Rosenman and of Yoshimasu presented in rejection of claim 6 apply to this claim. Claims 9-10 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Anand et al. (U.S. Publ. No. 20210093249; hereafter, Anand) in view of Liu Yinglong (CN 114334132 A; hereafter, Liu) in further view of Rosenman et al. (US 20240054650 A1; hereafter, Rosenman) and of Sjöstrand et al. (U.S. Publ. No. 20190209116; hereafter, Sjostrand). Regarding claim 9, Anand in view of Liu in further view of Rosenman discloses the subject matter of claim 1. Anand in view of Liu in further view of Rosenman does not disclose expressly generating the lesion detection result based on the second distribution image by generating a preliminary lesion detection result and generating the lesion detection result by verifying the preliminary lesion detection result based on at least the first distribution image or the structural image. Sjostrand discloses: wherein the generating a lesion detection result of the subject based on the second distribution image corresponding to the stage further includes: generating a preliminary lesion detection result of the subject based on the second distribution image ([0291] "the GUI may display a graphical element that indicates a location of a voxel of the SPECT image corresponding to the identified prostate volume and having a maximal intensity in comparison with other voxels of the SPECT image that correspond to the identified prostate volume." Having a maximum intensity of a SPECT image is understood as having a high uptake value indicating the presence of a lesion. Therefore, this is understood as a preliminary lesion detection. Considering a functional image, such as SPECT, is understood as the second distribution image. 
Further, when in combination with Anand, the second distribution image was taught previously); and generating the lesion detection result by verifying the preliminary lesion detection result based on at least one of the first distribution image or the structural image ([0291] "The user may then visually verify, for example by inspection of the relation of the graphical element in comparison with the CT image, that this maximum SPECT intensity voxel indeed lies within the prostate of the subject." The user verifies the preliminary detection result to the structural image. Following the verification, [0292] "Returning to FIG. 25, in another step 2510, the user may choose to generate a report summarizing analysis performed for the patient". Generating the report is understood as generating the lesion detection result). Sjostrand is combinable with Anand in view of Liu in further view of Rosenman because it is from the same field of endeavor of assessing a disease state or stage (Sjostrand, [0012]). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention to combine the preliminary lesion detection result verification of Sjostrand with the invention of Anand in view of Liu in further view of Rosenman. The motivation for doing so would have been that "In certain embodiments, the accurate identification of one or more such volumes are used to automatically determine quantitative metrics that represent uptake of radiopharmaceuticals in particular organs and/or tissue regions. These uptake metrics can be used to assess disease state in a subject, determine a prognosis for a subject, and/or determine efficacy of a treatment modality" (Sjostrand, [0011]). Therefore, it would have been obvious to combine Sjostrand with Anand in view of Liu in further view of Rosenman to obtain the invention as specified in claim 9. Regarding claim 10, Anand in view of Liu in further view of Rosenman discloses the subject matter of claim 1. Anand in view of Liu in further view of Rosenman does not disclose expressly displaying the lesion detection result on the first distribution image. Sjostrand discloses: wherein the method further includes: displaying the lesion detection result of the subject on the first distribution image (Fig. 29A, the lesion detection at the crosshairs is displayed to be "clinically significant". The display shows a structural image of a large portion of the body showing lesion regions and therefore is understood as a first distribution image). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention to combine the lesion detection result display of Sjostrand with the invention of Anand in view of Liu in further view of Rosenman. The motivation for doing so would have been "in order to aid in the user validation of the determined uptake metrics, a graphical element is displayed within the GUI to indicate a location of a voxel of the identified prostate volume" (Sjostrand, [0291]). Therefore, it would have been obvious to combine Sjostrand with Anand in view of Liu in further view of Rosenman to obtain the invention as specified in claim 10. Regarding claim 19, claim 19 recites a system with elements corresponding to the steps recited in claim 9. Therefore, the recited elements of this claim are mapped to the proposed combination in the same manner as the corresponding steps in its corresponding method claim, claim 9. 
Additionally, the rationale and motivation to combine Anand in view of Liu in further view of Rosenman and of Sjostrand presented in rejection of claim 9 apply to this claim.

Claims 21 and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Anand et al. (U.S. Publ. No. 20210093249; hereafter, Anand) in view of Liu Yinglong (CN 114334132 A; hereafter, Liu) in further view of Rosenman et al. (US 20240054650 A1; hereafter, Rosenman) and of Yoshimasu et al. ("Fast Fourier transform analysis of pulmonary nodules on computed tomography images from patients with lung cancer" hereafter, Yoshimasu) and of Declerck et al. (GB2472313, as contained in the IDS; hereafter, Declerck).

Regarding claim 21, Anand in view of Liu in further view of Rosenman and of Yoshimasu discloses the subject matter of claim 5. Anand in view of Liu in further view of Rosenman and of Yoshimasu does not disclose expressly that a lesion detection standard relates to an SUV parameter and includes an SUV threshold, and different lesions include different SUV thresholds. Declerck discloses: wherein the lesion detection standard corresponding to each stage relates to an SUV parameter (Pg. 10 line 18-21, ROIs, VOIs in 3D, are identified in the image. Pg. 10 line 27-28, the system identifies the local maximum of the ROI selected by the user) and includes an SUV threshold (Pg. 11 line 7-10, the clinician can update the threshold which is understood as teaching that a threshold is used in detecting the lesion), and the lesion detection standards corresponding to different stages include different SUV thresholds (Pg. 11 line 7-10, the clinician can update the threshold which is understood as teaching different SUV thresholds). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention to combine the thresholds of Declerck with the invention of Anand in view of Liu in further view of Rosenman and of Yoshimasu. The motivation for doing so would have been to segment the lesions to the satisfaction of the clinician (Declerck, pg. 11 line 4-10). Therefore, it would have been obvious to combine Declerck with Anand in view of Liu in further view of Rosenman and of Yoshimasu to obtain the invention as specified in claim 21.

Regarding claim 22, claim 22 recites a system with elements corresponding to the steps recited in claim 21. Therefore, the recited elements of this claim are mapped to the proposed combination in the same manner as the corresponding steps in its corresponding method claim, claim 21. Additionally, the rationale and motivation to combine Anand in view of Liu in further view of Rosenman and of Yoshimasu and of Declerck presented in rejection of claim 21 apply to this claim.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOSHUA B CROCKETT whose telephone number is (571)270-7989. The examiner can normally be reached Monday-Thursday 8am-5pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, John M Villecco, can be reached on (571) 272-7319. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. 
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JOSHUA B. CROCKETT/
Examiner, Art Unit 2661

/JOHN VILLECCO/
Supervisory Patent Examiner, Art Unit 2661
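
To make the claim-interpretation dispute in the office action concrete, the following minimal Python sketch contrasts the two readings the examiner identifies: a dedicated segmentation model per TNM stage (the applicant's reading) versus a single shared model used for all stages (the second embodiment of specification paragraph [0095]). The placeholder "models", names, and data structures are hypothetical illustrations, not the application's or any cited reference's actual implementation.

```python
# Illustrative sketch of the two readings of "a segmentation model corresponding
# to the stage". Everything here is a hypothetical placeholder.

from typing import Callable, Dict

Image = list          # stand-in for a structural image (e.g., a CT/MR volume)
Segmentation = dict   # stand-in for a segmentation image of the ROIs

def make_dummy_model(name: str) -> Callable[[Image], Segmentation]:
    """Placeholder for a trained segmentation network."""
    return lambda image: {"model": name, "rois": f"ROIs relevant to {name}"}

# Reading 1 (applicant's remarks): a distinct, dedicated model per TNM stage,
# selected at run time based on the stage being analyzed.
per_stage_models: Dict[str, Callable[[Image], Segmentation]] = {
    "T": make_dummy_model("T-stage segmenter"),
    "N": make_dummy_model("N-stage segmenter"),
    "M": make_dummy_model("M-stage segmenter"),
}

def segment_reading_1(image: Image, stage: str) -> Segmentation:
    return per_stage_models[stage](image)      # model chosen by stage

# Reading 2 (spec. [0095], second embodiment): one shared model segments the ROIs
# of all stages; it "corresponds" to each stage it segments, and the stage can be
# passed in as a parameter.
shared_model = make_dummy_model("shared TNM segmenter")

def segment_reading_2(image: Image, stage: str) -> Segmentation:
    result = shared_model(image)
    result["requested_stage"] = stage          # stage supplied as a parameter
    return result

if __name__ == "__main__":
    structural_image: Image = []
    for stage in ("T", "N", "M"):
        print(segment_reading_1(structural_image, stage))
        print(segment_reading_2(structural_image, stage))
```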

Prosecution Timeline

Dec 30, 2022: Application Filed
Apr 07, 2025: Non-Final Rejection — §103
Jul 08, 2025: Response Filed
Sep 11, 2025: Final Rejection — §103
Nov 14, 2025: Response after Non-Final Action
Dec 16, 2025: Request for Continued Examination
Jan 14, 2026: Response after Non-Final Action
Feb 26, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592060: ARTIFICIAL INTELLIGENCE DEVICE AND 3D AGENCY GENERATING METHOD THEREOF
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12587704: VIDEO DATA TRANSMISSION AND RECEPTION METHOD USING HIGH-SPEED INTERFACE, AND APPARATUS THEREFOR
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12567150: EDITING PRESEGMENTED IMAGES AND VOLUMES USING DEEP LEARNING
Granted Mar 03, 2026 (2y 5m to grant)

Patent 12561839: SYSTEMS AND METHODS FOR CALIBRATING IMAGE SENSORS OF A VEHICLE
Granted Feb 24, 2026 (2y 5m to grant)

Patent 12529639: METHOD FOR ESTIMATING HYDROCARBON SATURATION OF A ROCK
Granted Jan 20, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 72%
With Interview: 99% (+27.5%)
Median Time to Grant: 3y 0m
PTA Risk: High
Based on 18 resolved cases by this examiner. Grant probability derived from career allow rate (see the sketch below).
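
The note above states that grant probability is derived from the career allow rate, and the interview figure pairs the 72% baseline with the +27.5% lift. The snippet below shows one plausible, assumption-laden way those two displayed numbers could combine; the additive formula is our illustration, not the product's documented method.

```python
# Sketch under a stated assumption: the panel gives a 72% baseline and a +27.5%
# interview lift but does not state how they are combined. Simple percentage-point
# addition, capped at 100%, is assumed here purely for illustration; it lands at
# ~99.5%, in line with the displayed "99% With Interview".

baseline = 0.72          # Grant Probability
interview_lift = 0.275   # Interview Lift (+27.5 percentage points)

with_interview = min(baseline + interview_lift, 1.0)
print(f"With interview: {with_interview:.1%}")   # With interview: 99.5%
```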
