Prosecution Insights
Last updated: April 19, 2026
Application No. 18/627,691

SYSTEMS AND METHODS FOR FACILITATING LESION INSPECTION AND ANALYSIS

Non-Final OA (§102, §103)
Filed: Apr 05, 2024
Examiner: BEKELE, MEKONEN T
Art Unit: 2699
Tech Center: 2600 — Communications
Assignee: Exini Diagnostics AB
OA Round: 1 (Non-Final)
Grant Probability: 79% (Favorable)
OA Rounds: 1-2
To Grant: 2y 11m
With Interview: 92%

Examiner Intelligence

Career Allow Rate: 79% (599 granted / 757 resolved), +17.1% vs TC avg (above average)
Interview Lift: +13.1% (moderate lift, measured across resolved cases with interview)
Typical Timeline: 2y 11m avg prosecution, 23 applications currently pending
Career History: 780 total applications across all art units
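The headline figures above follow directly from the counts shown; a minimal Python sketch (a reader's reconstruction, not the analytics provider's code) recomputes the career allowance rate and the Tech Center baseline that the "+17.1% vs TC avg" delta implies:

```python
# Recompute the examiner statistics from the counts shown on this page.
granted = 599     # applications granted over the examiner's career
resolved = 757    # total resolved (granted + abandoned) applications

allow_rate = granted / resolved * 100
print(f"Career allow rate: {allow_rate:.1f}%")   # ~79.1%, shown as 79%

# The page reports this rate as 17.1 points above the Tech Center
# average, which implies a TC-wide baseline of roughly:
tc_delta = 17.1
implied_tc_avg = allow_rate - tc_delta
print(f"Implied TC average: {implied_tc_avg:.1f}%")  # ~62.0%
```

This assumes the displayed delta is a simple difference of percentage points, which is how such dashboards typically present it.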

Statute-Specific Performance

§101: 12.8% (-27.2% vs TC avg)
§103: 42.2% (+2.2% vs TC avg)
§102: 27.5% (-12.5% vs TC avg)
§112: 9.8% (-30.2% vs TC avg)

Black line = Tech Center average estimate • Based on career data from 757 resolved cases
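The per-statute deltas above can be sanity-checked against the charted baseline. Assuming each delta is simply the examiner's rate minus the Tech Center average (an interpretation of the chart, not documented by the source), every row recovers the same implied baseline:

```python
# Sanity check: examiner rate minus "vs TC avg" delta should recover
# the Tech Center baseline drawn as the black line on the chart.
rates = {            # statute: (examiner rate %, delta vs TC avg %)
    "101": (12.8, -27.2),
    "103": (42.2, +2.2),
    "102": (27.5, -12.5),
    "112": ( 9.8, -30.2),
}

for statute, (rate, delta) in rates.items():
    baseline = rate - delta   # implied Tech Center average
    print(f"§{statute}: implied TC baseline = {baseline:.1f}%")
# Every row implies the same 40.0% baseline, consistent with a single
# common reference line across the four statutes.
```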

Office Action

§102 §103
Detailed Action

1. Claims 1-18, 20-21, 23-27, 75, and 79 are pending in this application.

Notice of Pre-AIA or AIA Status

2. The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless - (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

3. Claims 1-9, 11-12, 18, 20, 27, 75, and 79 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by ICHINOSE et al. (hereafter ICHINOSE), WO 2022215530 A1, published 10/13/2022. For examination purposes, the US equivalent, US 20240029252 A1, published 01/05/2024, is relied upon.
As to claim 1, ICHINOSE teaches a method for interactive control of selection and/or analysis of hotspots detected within a medical image of a subject and representing potential lesions (Abstract, [0070], [0071], Figs. 3 and 11; a medical image method executed by a processor provided in a medical image apparatus, where the apparatus includes a display control unit 48A that performs control to highlight a lesion having an attribute different from the attribute of the first lesion among the second lesions on the display 23), the method comprising:

(a) receiving and/or accessing, by a processor of a computing device, (i) an identification of a plurality of hotspots (Fig. 7, [0050]: the extraction unit 42 extracts a region including a lesion from the diagnosis target image acquired by the acquisition unit 40; specifically, the extraction unit 42 extracts a region including a lesion using a trained model M1 for detecting the lesion from the diagnosis target image; the hotspots correspond to the lesions), and (ii) a set of hotspot feature values comprising, for each particular hotspot of the plurality of detected hotspots, a corresponding value of at least one hotspot feature representing and/or indicative of a certainty or confidence in detection of the particular hotspot and/or a likelihood that the particular hotspot represents a true physical lesion within the subject (Figs. 3, 11, [0050]-[0051], [0054]: the extraction unit 42 extracts a region including a lesion using a trained model M1 for detecting the lesion from the diagnosis target image; the trained model M1 is, for example, a model trained by machine learning using, as training data, a large number of combinations of a medical image including a lesion and information specifying a region in the medical image in which the lesion is present; thus, the trained ML model M1 outputs a feature value indicative of a certainty or confidence that indicates the detection result, where the detection result indicates whether the lesion is benign or malignant, and the presence or absence of an irregular margin);

(b) causing, by the processor, display of a graphical control element allowing for user selection of a subset of the plurality of detected hotspots via user adjustment of one or more displayed indicator widgets within the graphical control element from which values of one or more criteria are determined (Figs. 6, 8-9, [0055], [0057], [0070]: the display control unit 48 performs control to display information indicating the plurality of lesions (plurality of detected hotspots) extracted by the extraction unit 42 on the display 23; as shown in FIG. 9 as an example, a lesion having an attribute different from that of the lesion designated by the user is highlighted under the control of the display control unit 48A; FIG. 9 shows an example of highlighting in a case where one of the benign lesions in FIG. 8 (in the example of FIG. 9, the lesion pointed to by the arrow indicating the mouse pointer) is designated by the user);

(c) determining, by the processor, based on a user adjustment of the one or more displayed indicator widgets, user-selected values of the one or more criteria (Figs. 6, 8-9, [0058], [0070]: FIG. 6 shows an example of highlighting in a case where one of the lesions of the liver cyst in FIG. 5 (in the example of FIG. 6, the lesion pointed to by the arrow indicating the mouse pointer) is designated by the user; in this way, the user can easily ascertain the lesion having the same name as the designated lesion);

(d) selecting, by the processor, a user-selected subset of the plurality of detected hotspots based on (i) the set of hotspot feature values and (ii) the user-selected values of the one or more criteria (claim 9, [0015], [0058]: acquiring a medical image, information indicating a plurality of regions of interest included in the medical image, and an attribute of each of the plurality of regions of interest; selecting at least one region of interest from among the plurality of regions of interest; and performing control to display information regarding a region of interest other than the selected region of interest based on an attribute of the selected region of interest, where the selections are carried out by the user); and

(e) storing and/or providing, by the processor, for display and/or further processing, an identification of the user-selected subset (Fig. 6, [0058]-[0059]: the display control unit 48 may perform control to display the name of the lesion in an identifiable manner by setting the color of the frame line to a color preset according to the name of the lesion; further, for example, the display control unit 48 may perform control to highlight the lesion by blinking the lesion, adding a predetermined mark, drawing an outer edge of the region of the lesion with a line, or the like).

As to claim 2, ICHINOSE teaches that a particular one of the one or more criteria is a rank threshold whose value corresponds to a position on an ordered list, and that the method comprises: at step (c), determining the value of the rank threshold ([0067], [0072]: the analysis unit 44A analyzes each of the lesions extracted by the extraction unit 42 and derives whether the lesion is benign or malignant as an example of attributes of the lesion; specifically, the analysis unit 44A derives whether the lesion is benign or malignant using a trained model M3; the classifier classifies the lesion as benign or malignant by comparing the lesion against a predetermined threshold); and at step (d), ordering the plurality of hotspots according to their corresponding feature values in the set of hotspot feature values, thereby creating an ordered list of hotspots, and selecting, as the user-selected subset, those hotspots having a position in the ordered list of hotspots above and/or below the value of the rank threshold ([0072]: whether the lesion is benign or malignant is derived using a trained model M3; the trained classifier groups benign images in one group and malignant images in a different group, and the display unit displays the two groups separately).

As to claim 3, ICHINOSE teaches that the at least one hotspot feature is or comprises one or more of (i) to (iii) as follows: (i) a hotspot size that provides a measure of size of a particular hotspot, (ii) a hotspot intensity that provides a measure of intensity of a particular hotspot, and (iii) an intensity-weighted hotspot size providing a measure of both size and intensity of a particular hotspot ([0054]: FIG. 5 shows an example in which the name of five lesions is liver cyst and the name of one lesion is liver metastasis; note that the attribute of the lesion is not limited to the name of the lesion and may be, for example, findings such as a position, a size, the presence or absence of calcification, whether the lesion is benign or malignant, and the presence or absence of an irregular margin; further, a plurality of attributes of the lesion may be used; it is also known that whether a lesion is benign or malignant is determined in part based on the color intensity of the lesion and on lesion shape irregularity, i.e., the presence or absence of an irregular margin).
As to claim 4, ICHINOSE teaches that the at least one hotspot feature is or comprises a lesion classification that classifies a given hotspot according to a particular lesion labeling and classification scheme ([0054]: FIG. 5 shows an example in which the name of five lesions is liver cyst and the name of one lesion is liver metastasis (i.e., labeling the lesion); note that the attribute of the lesion is not limited to the name of the lesion and may be, for example, findings such as a position, a size, and whether the lesion is benign or malignant, i.e., classifying the lesion).

As to claim 5, ICHINOSE teaches that the at least one hotspot feature is or comprises a lesion location identifying an anatomical location of an underlying physical lesion that a given hotspot represents ([0054]: as shown in FIG. 5 as an example, the analysis unit 44 inputs, to the trained model M2, information specifying a diagnosis target image and a region in which the lesion extracted by the extraction unit 42 for the diagnosis target image is present; the trained model M2 outputs the name of the lesion included in the input diagnosis target image; FIG. 5 shows an example in which the name of five lesions is liver cyst and the name of one lesion is liver metastasis).

As to claim 6, ICHINOSE teaches that the at least one hotspot feature is or comprises a likelihood value having been determined by the machine learning model upon and/or together with detection of a given hotspot and representing a likelihood, as determined by the machine learning model, that the given hotspot represents a true physical lesion within the subject (Fig. 5, [0051], [0054]: the extraction unit 42 extracts a region including a lesion using a trained model M1 for detecting the lesion from the diagnosis target image; a region including a lesion in a diagnosis target image is an example of a region of interest; the trained model M2 outputs the name of the lesion included in the input diagnosis target image; FIG. 5 shows an example in which the name of five lesions is liver cyst and the name of one lesion is liver metastasis).

As to claim 7, ICHINOSE teaches that the one or more criteria comprise one or more of (i)-(iii) as follows: (i) tumor/lesion type classification, (ii) a measure of hotspot intensity, and (iii) a measure of volume (Fig. 5, [0054]; these limitations are discussed with respect to claims 3 and 4 above).

As to claim 8, ICHINOSE teaches that the one or more criteria comprise a hotspot likelihood threshold ([0054], [0071]: in a case where the number of lesions having the same attribute as the attribute of the first lesion selected by the selection unit 46 is equal to or greater than a threshold value, specifically, where the attribute of the first lesion selected by the selection unit 46 is a benign lesion and the number of benign lesions is equal to or greater than the threshold value TH, the display control unit 48A may perform control to highlight a malignant lesion on the display 23).

As to claim 9, ICHINOSE teaches causing, by the processor, graphical rendering of one or both of (i) the plurality of detected hotspots and/or (ii) the user-selected subset of the plurality of detected hotspots, wherein each hotspot is rendered as a graphical shape and/or outline thereof overlaid on the medical image (Figs. 6, 8, [0058]-[0059]: FIG. 6 shows an example of highlighting in a case where one of the lesions of the liver cyst in FIG. 5 (in the example of FIG. 6, the lesion pointed to by the arrow indicating the mouse pointer) is designated by the user; in this way, the user can easily ascertain the lesion having the same name as the lesion designated by the user as the creation target of the medical document; accordingly, the user can easily create a comment on findings summarizing the findings of the lesions having the same name; in addition, the display control unit 48 may perform control to display the name of the lesion in an identifiable manner by setting the color of the frame line to a color preset according to the name of the lesion; further, for example, the display control unit 48 may perform control to highlight the lesion by blinking the lesion, adding a predetermined mark, drawing an outer edge of the region of the lesion with a line, or the like).

As to claim 11, ICHINOSE teaches causing, by the processor, rendering, for each detected hotspot, a graphical outline demarcating a boundary of the hotspot overlaid on the medical image (Figs. 5-6, 8, [0058]-[0059]: the user can easily create a comment on findings summarizing the findings of the lesions having the same name; in addition, the display control unit 48 may perform control to display the name of the lesion in an identifiable manner by setting the color of the frame line to a color preset according to the name of the lesion; further, for example, the display control unit 48 may perform control to highlight the lesion by blinking the lesion, adding a predetermined mark, drawing an outer edge of the region of the lesion with a line, or the like).

As to claim 12, ICHINOSE teaches receiving, by the processor, a user selection of a particular hanging protocol and causing, by the processor, display of the medical image according to the particular hanging protocol (Abstract, [007], Fig. 4: in the medical image apparatus according to the aspect of the present disclosure, the processor may be configured to perform control to display information regarding a region of interest having the same attribute as the attribute of the selected region of interest; it is known that a hanging protocol is a set of predefined, automated rules in a PACS system that dictates how medical images (CT, MRI, X-ray) are displayed on a workstation).

As to claim 18, ICHINOSE teaches that the medical image is or comprises a 3D functional image ([0038]: the imaging apparatus 2 may be, for example, a simple X-ray imaging apparatus, CT apparatus, MRI apparatus, PET apparatus, and the like; a medical image generated by the imaging apparatus 2 is transmitted to the image server 5 and saved therein; it is known that CT, MRI, and PET apparatuses each capture a series of cross-sectional "slices" and reconstruct them into a volumetric (3D) model).

As to claim 20, ICHINOSE teaches that the medical image is or comprises a positron emission tomography (PET) image and/or a single photon emission computed tomography (SPECT) image obtained following administration of an agent to the subject ([0038]: the imaging apparatus 2 may be, for example, a simple X-ray imaging apparatus, CT apparatus, MRI apparatus, PET apparatus, and the like; in medical imaging it is a standard procedure to administer an agent to the subject before carrying out image capture using CT, MRI, and PET).

As to claim 75, ICHINOSE teaches a system for interactive control of selection and/or analysis of hotspots detected within a medical image of a subject and representing potential lesions, the system comprising: a processor of a computing device; and memory having instructions stored thereon, wherein the instructions, when executed by the processor, cause the processor (Fig. 3, [0048]: as shown in FIG. 3, the medical image apparatus 10 includes an acquisition unit 40, an extraction unit 42, an analysis unit 44, a selection unit 46, and a display control unit 48; the CPU 20 executes the medical image program 30 to function as the acquisition unit 40, the extraction unit 42, the analysis unit 44, the selection unit 46, and the display control unit 48). Regarding the remaining limitations of claim 75, they are similar to the limitations of claim 1, except that claim 1 is directed to a method; thus the rejection applied to claim 1 applies equally to the remaining limitations of claim 75.

As to claim 79, ICHINOSE teaches that the plurality of hotspots have been detected within the medical image using a machine learning model (Fig. 5, [0051], [0054]: the extraction unit 42 extracts a region including a lesion using a trained machine learning model M1 for detecting the lesion from the diagnosis target image).

As to claim 27, ICHINOSE teaches a method for interactive control of selection and/or analysis of regions of interest (ROIs) detected within a medical image of a subject and representing potential lesions (Fig. 3, [0048], [0057]: a medical image apparatus acquires a medical image; the display control unit 48 performs control to display information indicating the plurality of lesions extracted by the extraction unit 42 on the display 23; the user designates a lesion for which a medical document such as an interpretation report is to be created from among the plurality of lesions displayed on the display 23), the method comprising:

(a) receiving and/or accessing, by a processor of a computing device, (i) an identification of a plurality of ROIs having been detected within the medical image, and (ii) a set of ROI feature values comprising, for each particular ROI of the plurality of detected ROIs, corresponding value(s) of at least one ROI feature (Fig. 5, [0050], [0053]-[0054]: as shown in FIG. 5 as an example, the analysis unit 44 inputs, to the trained model M2, information specifying a diagnosis target image and a region in which the lesion extracted by the extraction unit 42 for the diagnosis target image is present; the trained model M2 outputs the name of the lesion included in the input diagnosis target image; FIG. 5 shows an example in which the name of five lesions is liver cyst and the name of one lesion is liver metastasis; note that the attribute of the lesion is not limited to the name of the lesion and may be, for example, findings such as a position, a size, the presence or absence of calcification, whether the lesion is benign or malignant, and the presence or absence of an irregular margin; further, a plurality of attributes of the lesion may be used; the plurality of detected ROIs and the at least one ROI feature correspond to the detected lesions and the name of each detected lesion (i.e., benign or malignant), respectively);

(b) causing, by the processor, display of a graphical control element allowing for user selection of a subset of the plurality of detected ROIs via user adjustment of one or more displayed indicator widgets within the graphical control element from which values of one or more user-selected criteria are received (Figs. 6, 8-9, [0055], [0057], [0070]: the display control unit 48 performs control to display information indicating the plurality of lesions (plurality of detected hotspots) extracted by the extraction unit 42 on the display 23; as shown in FIG. 9 as an example, a lesion having an attribute different from that of the lesion designated by the user is highlighted under the control of the display control unit 48A; FIG. 9 shows an example of highlighting in a case where one of the benign lesions in FIG. 8 (in the example of FIG. 9, the lesion pointed to by the arrow indicating the mouse pointer) is designated by the user);

(c) determining, by the processor, based on a user adjustment of the one or more displayed indicator widgets, user-selected values of the one or more criteria (Figs. 6, 8-9, [0058], [0070]: FIG. 6 shows an example of highlighting in a case where one of the lesions of the liver cyst in FIG. 5 (in the example of FIG. 6, the lesion pointed to by the arrow indicating the mouse pointer) is designated by the user; in this way, the user can easily ascertain the lesion having the same name as the designated lesion);

(d) selecting, by the processor, a user-selected subset of the plurality of detected ROIs based on (i) the set of ROI feature values and (ii) the user-selected values of the one or more criteria (claim 9, [0015], [0058]: acquiring a medical image, information indicating a plurality of regions of interest included in the medical image, and an attribute of each of the plurality of regions of interest; selecting at least one region of interest from among the plurality of regions of interest; and performing control to display information regarding a region of interest other than the selected region of interest based on an attribute of the selected region of interest, where the selections are carried out by the user); and

(e) storing and/or providing, by the processor, for display and/or further processing, an identification of the user-selected subset (Fig. 6, [0058]-[0059]: the display control unit 48 may perform control to display the name of the lesion in an identifiable manner by setting the color of the frame line to a color preset according to the name of the lesion; further, for example, the display control unit 48 may perform control to highlight the lesion by blinking the lesion, adding a predetermined mark, drawing an outer edge of the region of the lesion with a line, or the like).
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

4. Claims 21 and 23-26 are rejected under 35 U.S.C. 103 as being unpatentable over ICHINOSE, US 20240029252 A1, in view of Farwell et al. (hereafter Farwell), "PET/CT Imaging in Cancer: Current Applications and Future Directions", review article published online June 19, 2014, in the Wiley Online Library.

Regarding claim 21, while ICHINOSE teaches the limitations of claim 1, it fails to teach the limitation of claim 21. On the other hand, Farwell teaches that the agent comprises a PSMA binding agent (page 3441, right col., 3rd par.: prostate-specific membrane antigen (PSMA); specifically, 68Ga-PSMA agents have been studied in humans and reportedly were capable of detecting sites of metastatic prostate cancer). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the well-known procedure of administering 68Ga-PSMA to a subject, where it acts as a targeted molecular agent that attaches to cancer cells, as taught by Farwell, into ICHINOSE. The suggestion/motivation for doing so would have been to allow a PET scanner to visualize tumors, even very small ones, throughout the body of the subject.

As to claim 23, Farwell teaches that the agent comprises 18F (Table 1). As to claim 24, Farwell teaches that the agent is or comprises [18F]DCFPyL (page 3441, right col., 3rd par.). As to claim 25, Farwell teaches that the agent is or comprises PSMA-11 (page 3441, right col., 3rd par.; essentially, PSMA is the target, and PSMA-11 is a "key" designed to unlock that target for imaging). As to claim 26, Farwell teaches that the agent comprises one or more members selected from the group consisting of 99mTc, 68Ga, 177Lu, 225Ac, 111In, 123I, and 124I (Table 1).

5. Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over ICHINOSE, US 20240029252, in view of YAMAGATA, HITOSHI (hereafter YAMAGATA), CN101739708A, pub. 11/20/2009.

Regarding claim 10, ICHINOSE teaches causing, by the processor, rendering of a plurality of graphical shapes overlaid on the medical image, each of the plurality of graphical shapes corresponding to and demarcating a detected hotspot ([0054]: FIG. 5 shows an example in which the name of five lesions is liver cyst and the name of one lesion is liver metastasis.
Note that the attribute of the lesion is not limited to the name of the lesion and may be, for example, findings such as a position, a size, the presence or absence of calcification, whether the lesion is benign or malignant, and the presence or absence of an irregular margin), but fails to teach "having a solid, partially transparent, fill; receiving, by the processor, via a user interaction with an opacity setting graphical widget, an opacity value; and updating, by the processor, an opacity of the solid fill and/or boundary of the graphical shapes according to the user-selected opacity value."

On the other hand, YAMAGATA teaches having a solid, partially transparent fill; receiving, by the processor, via a user interaction with an opacity-setting graphical widget, an opacity value; and updating, by the processor, an opacity of the solid fill and/or boundary of the graphical shapes according to the user-selected opacity value (claims 5-6: the medical image processing device according to claim 1, wherein the opacity setting part adds, in the medical image data, attached information of the window level value and the window width value as the opacity curve set of the process control parameter, and wherein the opacity setting part sets the opacity curve according to window adjustment of the window level value and the window width value for the MPR image or tomography imaging). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the well-established technique of adjusting an opacity curve of a medical image, taught by YAMAGATA, into ICHINOSE. The suggestion/motivation for doing so would have been that it would allow the user of ICHINOSE to transform raw 3D scalar data (such as CT or MRI) into meaningful 3D visualizations and manipulate the visibility of tissues based on their intensity values (e.g., Hounsfield units).

6. Claims 13-14 are rejected under 35 U.S.C. 103 as being unpatentable over ICHINOSE, US 20240029252, in view of Sjöstrand et al. (hereafter Sjöstrand), US 20200337658 A1, pub. 10/29/2020.

As to claim 13, ICHINOSE teaches that the set of hotspot feature values comprises, for each particular hotspot of the plurality of detected hotspots, a location (claim 9, [0015], [0058]: acquiring a medical image, information indicating a plurality of regions of interest included in the medical image, and an attribute of each of the plurality of regions of interest; selecting at least one region of interest from among the plurality of regions of interest; and performing control to display information regarding a region of interest other than the selected region of interest based on an attribute of the selected region of interest, where the selections are carried out by the user). However, it is noted that ICHINOSE does not teach "the one or more user-selected criteria comprise a lesion location assignment criterion ((ii) a size of a particular skeletal region to which the particular hotspot is assigned based on its location in the annotated set of images, thereby determining one or more skeletal involvement factors; adjusting the skeletal involvement factors using one or more region-dependent correction factors, thereby obtaining one or more adjusted skeletal involvement factors); and the method comprises: at step (c), determining, via the user interaction with the one or more displayed indicator widgets, as the value of the lesion location assignment criterion, an unassigned-hotspots value; at step (d), selecting, by the processor, as the user-selected subset, all unassigned hotspots; causing, by the processor, graphical rendering and display of the user-selected subset; and for each particular hotspot of at least a portion of the user-selected subset: receiving, by the processor, a user input of a location assignment for the particular hotspot; and updating the lesion location assignment for the particular hotspot with the user input location assignment."

On the other hand, Sjöstrand teaches that the one or more user-selected criteria comprise a lesion location assignment criterion (claim 25: a size of a particular skeletal region to which the particular hotspot is assigned based on its location in the annotated set of images, thereby determining one or more skeletal involvement factors; adjusting the skeletal involvement factors using one or more region-dependent correction factors, thereby obtaining one or more adjusted skeletal involvement factors); at step (c), determining, via the user interaction with the one or more displayed indicator widgets, as the value of the lesion location assignment criterion, an unassigned-hotspots value ([0068], [0129], [0036]: thereby computing an area fraction for the particular hotspot; and scaling (e.g., multiplying) the area fraction by a density coefficient associated with the skeletal region of interest to which the particular hotspot is assigned [e.g., that accounts for weight and/or density of bone in the corresponding skeletal region of interest (e.g., wherein the density coefficient is a weight fraction of the corresponding skeletal region of interest with respect to a total skeleton (e.g., of an average human))], thereby computing the skeletal involvement factor for the particular hotspot); at step (d), selecting, as the user-selected subset, all unassigned hotspots, and causing, by the processor, graphical rendering and display of the user-selected subset ([0074]: selecting, by the processor, a first subset (e.g., up to all) of the initial set of hotspots based at least in part on the metastasis likelihood values [e.g., determining whether or not to include a particular hotspot of the initial set of hotspots in the subset based on the metastasis likelihood value calculated for that particular hotspot exceeding a threshold value]); and, for each particular hotspot of at least a portion of the user-selected subset, receiving, by the processor, a user input of a location assignment for the particular hotspot ([0068], [0190], [0036]: thereby computing an area fraction for the particular hotspot; and scaling (e.g., multiplying) the area fraction by a density coefficient associated with the skeletal region of interest to which the particular hotspot is assigned); and updating the lesion location assignment for the particular hotspot with the user input location assignment ([0190]: FIG. 17B is a screenshot of a GUI showing identified hotspots and illustrating an updated BSI value following exclusion of an automatically detected hotspot based on user input via the GUI, according to an illustrative embodiment).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate AI-driven diagnostic tools that calculate a metastasis likelihood value for each hotspot, taught by Sjöstrand, into ICHINOSE. The suggestion/motivation for doing so would have been that it would allow the user of ICHINOSE to assign a probability score (often 0 to 1) to detected suspicious areas, enabling automated, quantitative, and prognostic analyses.

As to claim 14, Sjöstrand teaches receiving, by the processor, a user selection of a particular hotspot of the plurality of detected hotspots; receiving, by the processor, a user selection of one or more voxels of the medical image to add to and/or subtract from the particular hotspot; and updating the particular hotspot to incorporate and/or exclude the one or more user-selected voxels ([0036], [0327]: for example, a machine learning module may receive as input a 3D image of a subject (e.g., a CT image; e.g., an MRI), and for each voxel of the image, determine a value that represents a likelihood that the voxel lies within a region of the 3D image that corresponds to a representation of a particular organ or tissue of the subject;
automatically detecting, by the processor, an initial set of one or more hotspots, each hotspot corresponding to an area of elevated intensity in the annotated set of image of an organ or tissue). Allowable Subject Matter 7. Claims15-17 are objected to as being dependent upon a rejected base claims but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claim. 8. Regarding independent claim 15 no prior art is found to anticipate or render the following limitation obvious: “receiving, by the processor, one or more user-identified points within the medical image, each of the one or more user-identified points corresponding to a location of a user single-click within the GUI; for each particular one of the one or more user-identified points within the medical image, segmenting, by the processor, the medical image to delineate a 3D volume of corresponding user-specified hotspot using (i) the particular user-identified point and (ii) intensities of voxels of the medical image about the particular user-identified point, thereby determining one or more user-specified hotspots, each associated with and segmented via a user single-click; and updating, by the processor, the initial set of hotspots to include the plurality of user- specified hotspots.” 9. Claims16-17 are objected since they are depending on the objected claim 15 Prior art of record but not applied in the rejection “ SYSTEMS AND METHODS FOR ARTIFICIAL INTELLIGENCE-BASED IMAGE ANALYSIS FOR DETECTION AND CHARACTERIZATION OF LESIONS” US 20220005586 A1, pub. 01/06/2022, to Brynolfsson et al., disclosed: In another step 526, a user validates automatically detected hotspots and/or identifies additional hotspots, e.g., to create a final set of hotspots corresponding to lesions, for inclusion in a generated report. As shown in FIG. 
6C, a user may select an automatically identified hotspot by hovering over a graphical representation of the hotspot displayed within the GUI (e.g., as an overlay and/or marked region on a PET and/or CT image). To facilitate hotspot selection, the particular hotspot selected may be indicated to the user, via a color change (e.g., turning green). The user may then click on the hotspot to select it, which may be visually confirmed to the user via another color change. For example, as shown in FIG. 4C, upon selection the hotspot turns pink. Upon user selection, quantitatively determined values, such as a lesion index and/or anatomical labeling may be displayed to the user, allowing them to verify the automatically determined values 528 (see [0179]) In certain embodiments, the GUI allows a user to select hotspots from the set of (automatically) pre-identified hotspots to confirm they indeed represent lesions 526a and also to identify additional hotspots 562b corresponding to lesions, not having been automatically detected(see [0180]). The method of claim 24, comprising: (i) receiving, by the processor, via the GUI, a user selection of a subset of the one or more hotspots confirmed via user review as likely to represent underlying cancerous lesions within the subject(see claim 1) Contact Information Any inquiry concerning this communication or earlier communication from the examiner should be directed to Mekonen Bekele whose telephone number is (469) 295-9077.The examiner can normally be reached on Monday -Friday from 9:00AM to 6:50 PM Eastern Time. If attempt to reach the examiner by telephone are unsuccessful, the examiner’s supervisor Eng, George can be reached on (571) 272-7495.The fax phone number for the organization where the application or proceeding is assigned is 571-237-8300. Information regarding the status of an application may be obtained from the patent Application Information Retrieval (PAIR) system. 
Status information for published application may be obtained from either Private PAIR or Public PAIR. Status information for unpublished application is available through Privet PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have question on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866.217-919 (tool-free) /MEKONEN T BEKELE/Primary Examiner, Art Unit 2699
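For readers mapping the rejection to the underlying technique, the two computations the cited Sjöstrand paragraphs describe can be sketched as follows: (1) a skeletal involvement factor obtained by scaling a hotspot's area fraction by a region-dependent density coefficient, and (2) selection of a subset of hotspots whose metastasis likelihood value exceeds a threshold. This is an illustrative reading only, not the reference's implementation; all function names, field names, and the 0.5 threshold are hypothetical.

```python
def skeletal_involvement_factor(hotspot_area, region_area, density_coefficient):
    """Area fraction of the hotspot within its assigned skeletal region,
    scaled by a coefficient reflecting that region's weight/density share
    of the total skeleton (per Sjöstrand [0068], as read here)."""
    area_fraction = hotspot_area / region_area
    return area_fraction * density_coefficient

def select_hotspots(hotspots, threshold=0.5):
    """Keep hotspots whose metastasis likelihood value exceeds the
    threshold (per Sjöstrand [0074], as read here)."""
    return [h for h in hotspots if h["likelihood"] > threshold]

# Hypothetical example data: two detected hotspots.
hotspots = [
    {"id": 1, "likelihood": 0.91, "area": 4.0, "region_area": 80.0, "coeff": 0.10},
    {"id": 2, "likelihood": 0.32, "area": 2.0, "region_area": 50.0, "coeff": 0.07},
]
selected = select_hotspots(hotspots)  # only hotspot 1 passes the 0.5 threshold
sifs = {h["id"]: skeletal_involvement_factor(h["area"], h["region_area"], h["coeff"])
        for h in selected}            # hotspot 1: (4/80) * 0.10 = 0.005
```

The reference additionally describes region-dependent correction factors applied after this step; they are omitted here for brevity.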
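The allowable limitation of claim 15 (delineating a hotspot volume from a single user click plus the intensities of surrounding voxels) resembles seed-based region growing. The sketch below is one plausible reading under that assumption, shown in 2D for brevity; the connectivity rule, the fixed intensity tolerance, and all names are illustrative assumptions, not the application's actual method.

```python
from collections import deque

def segment_from_click(image, seed, tolerance):
    """Grow a region outward from a user-clicked seed point, keeping
    connected pixels whose intensity stays within `tolerance` of the seed
    intensity. image: 2D list of intensities (a 3D volume works the same
    way with 6-connectivity); seed: (row, col) of the single click."""
    rows, cols = len(image), len(image[0])
    seed_val = image[seed[0]][seed[1]]
    region, queue = {seed}, deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in region
                    and abs(image[nr][nc] - seed_val) <= tolerance):
                region.add((nr, nc))
                queue.append((nr, nc))
    return region

# Hypothetical toy image: a bright blob (8-9) on a dark background (1).
img = [
    [1, 1, 9, 9],
    [1, 8, 9, 1],
    [1, 9, 9, 1],
]
print(sorted(segment_from_click(img, (0, 2), 1)))
# [(0, 2), (0, 3), (1, 1), (1, 2), (2, 1), (2, 2)]
```

A click at (0, 2) recovers exactly the connected bright blob, which is the behavior the claim's "user-specified hotspot" language appears to contemplate.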

Prosecution Timeline

Apr 05, 2024
Application Filed
Feb 07, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602744
IMAGE PROCESSING METHOD AND APPARATUS, ELECTRONIC DEVICE AND MEDIUM
2y 5m to grant Granted Apr 14, 2026
Patent 12602897
FACE DETECTION BASED FILTERING FOR IMAGE PROCESSING
2y 5m to grant Granted Apr 14, 2026
Patent 12586244
COMPOSITE IMAGE CAPTURE WITH TWO DEGREES OF FREEDOM CAMERA CAPTURING OVERLAPPING IMAGE FRAMES
2y 5m to grant Granted Mar 24, 2026
Patent 12561941
Video Shooting Method and Electronic Device
2y 5m to grant Granted Feb 24, 2026
Patent 12561761
PROGRESSIVE REFINEMENT VIDEO ENHANCEMENT
2y 5m to grant Granted Feb 24, 2026
Based on this examiner's 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
79%
Grant Probability
92%
With Interview (+13.1%)
2y 11m
Median Time to Grant
Low
PTA Risk
Based on 757 resolved cases by this examiner. Grant probability derived from career allow rate.
