Prosecution Insights
Last updated: April 19, 2026
Application No. 17/632,007

SYSTEMS, METHODS AND APPARATUSES FOR VISUALIZATION OF IMAGING DATA

Status: Non-Final OA (§103)
Filed: Feb 01, 2022
Examiner: NASHER, AHMED ABDULLALIM-M
Art Unit: 2675
Tech Center: 2600 — Communications
Assignee: Perimeter Medical Imaging Inc.
OA Round: 5 (Non-Final)

Grant Probability: 81% (Favorable)
Predicted OA Rounds: 5-6
Predicted Time to Grant: 2y 9m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 81% (above average; 80 granted / 99 resolved; +18.8% vs TC avg)
Interview Lift: +34.4% (strong), comparing allowance rates of resolved cases with vs. without an interview
Typical Timeline: 2y 9m average prosecution; 17 applications currently pending
Career History: 116 total applications across all art units
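As an illustration of how the headline metrics above fit together, the sketch below recomputes the career allow rate from the reported counts and shows the arithmetic behind a "vs TC avg" delta and an "interview lift" figure. The Tech Center average is back-solved from the report's own +18.8% delta, and the with/without-interview rates are hypothetical placeholders chosen only to reproduce the +34.4% lift; neither is independently sourced.

```python
# Reported counts from the dashboard above.
granted, resolved = 80, 99

career_allow_rate = granted / resolved              # ~0.808, shown as "81%"

# Hypothetical: back-solved so that the delta matches the report's +18.8%.
tc_avg_allow_rate = 0.62
vs_tc_avg = career_allow_rate - tc_avg_allow_rate   # ~ +18.8 points

# Hypothetical with/without-interview allowance rates; only their
# difference (the "lift") is taken from the report.
allow_with_interview = 0.950
allow_without_interview = 0.606
interview_lift = allow_with_interview - allow_without_interview  # ~ +34.4 points

print(f"Career allow rate: {career_allow_rate:.0%}")
print(f"vs TC avg: {vs_tc_avg:+.1%}")
print(f"Interview lift: {interview_lift:+.1%}")
```

The same subtraction pattern underlies every "vs TC avg" figure in this report: examiner-specific rate minus the Tech Center baseline.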

Statute-Specific Performance

§101: 9.0% (-31.0% vs TC avg)
§103: 63.1% (+23.1% vs TC avg)
§102: 14.5% (-25.5% vs TC avg)
§112: 10.7% (-29.3% vs TC avg)

Tech Center averages are estimates; figures are based on career data from 99 resolved cases.
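The statute-specific deltas can be reproduced by subtracting a Tech Center baseline from the examiner's share of rejections under each statute. The TC baselines below are back-solved from the deltas in this report (they come out to a uniform 40.0 for each statute), not pulled from an independent source, so treat them as placeholders.

```python
# Examiner's rejection mix by statute, in percent (from the report).
examiner = {"101": 9.0, "103": 63.1, "102": 14.5, "112": 10.7}

# Hypothetical TC averages, back-solved from the report's deltas.
tc_avg = {"101": 40.0, "103": 40.0, "102": 40.0, "112": 40.0}

# Delta = examiner rate minus TC average, rounded to one decimal.
deltas = {s: round(examiner[s] - tc_avg[s], 1) for s in examiner}

for statute, d in deltas.items():
    print(f"§{statute}: {d:+.1f}% vs TC avg")
```

Note the practical takeaway the chart encodes: this examiner rejects under §103 far more often than the Tech Center norm, which is consistent with the current §103 Non-Final.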

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 03/11/2026 has been entered.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2.
Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-2, 4-14, 17-18 and 20-23 are rejected under 35 U.S.C. 103 as being unpatentable over Molin (US 20180089496 A1), in view of Hall (US 20150279026 A1), in view of Yang (US 20200402215 A1), and further in view of Avanaki (US 20200359887 A1).

Regarding claims 1 and 18, Molin discloses a display configured to display a user interface (fig. 7, ref 10 (display)); a memory (fig. 14, ref 136 (memory)); and a processor operatively coupled to the display and the memory, the processor configured to (fig. 14, ref 148): receive image data of a tissue sample (fig. 11, ref 400); identify, using a machine learning algorithm, a set of patches of the image data ("[0095] The term “positivity” and derivatives thereof refers to a probability of a positive association with a clinical disease state such as cancer, for example, that can be identified by a stained color of a nuclei. The typical case is for cancer detection, diagnosis or evaluation, where nuclei relevant for disease assessment have been given a distinct staining, often a brown color. [0106] FIG.
9 illustrates that the circuit 10c can calculate probability using a machine learning system 66 that can evaluate measured properties of respective patches 25 input to the machine learning system 66, such as shape, intensity, volume, area, color and the like and output a calculated probability including, for example, a nucleus probability (a probability that the detected nuclei is actually a nuclei) and a positivity probability (a probability that the detected nuclei is positively stained)."), each patch from the set of patches including at least one feature that is identified by the machine learning algorithm as being a suspected abnormality ([0104] Again, each randomly selected patch from this class (block 214) can be sorted by the same defined common sorting criteria, shown as by nucleus and positivity probability (block 234). Sorted patches from each of the three different classes can then be presented in at least one patch gallery in a sorted order (block 240) of probability of positivity. [0105] This sorting procedure is representative of a positive count evaluation of cells but other sorting procedures can be used for other evaluations of a patch gallery content, such as mitosis as shown in FIGS. 5A and 5B where there is no particular order but detected mitoses can be shown side by side in the patches displayed to exclude false positives. [0181] A machine learning algorithm 66 can be used to divide a plurality of patches 25 into a patch gallery 25g. The patches can be divided into three groups, Positive, Negative and Excluded, similar to the division of nuclei in FIG. 6, which as discussed above is divided into Positive, Negative and Non-tumor. Non-tumor is the same as Excluded. Dividing the patches 25 into three groups is the same as classifying into the three groups and/or nominally sorting the patches along a cell type feature, which is a compound feature 25c as in FIG. 9. [0189] FIG.
17 is a flow chart of actions/operations that can occur to reclassify patches 25 based on user-input and can optionally retrain a machine learning model for the classification according to embodiments of the present invention.); generate a consolidated view of patches selected from the set of patches identified by the machine learning algorithm, such that each patch in the consolidated view includes at least one feature that is a suspected abnormality ([0104] The patches 25 can be electronically divided and/or pre-sorted into a plurality of different primary classes based on the presence of a cell nucleus and a positivity probability and may exclude nuclei outside a visible span (block 200). [0105] This sorting procedure is representative of a positive count evaluation of cells but other sorting procedures can be used for other evaluations of a patch gallery content, such as mitosis as shown in FIGS. 5A and 5B where there is no particular order but detected mitoses can be shown side by side in the patches displayed to exclude false positives. [0106] FIG. 9 illustrates that the circuit 10c can calculate probability using a machine learning system 66 that can evaluate measured properties of respective patches 25 input to the machine learning system 66, such as shape, intensity, volume, area, color and the like and output a calculated probability including, for example, a nucleus probability (a probability that the detected nuclei is actually a nuclei) and a positivity probability (a probability that the detected nuclei is positively stained).)), the consolidated view being arranged according to a predefined layout ([0040] The at least one panel of the patch gallery can be arranged in an order from left to right and/or up to down, with decreasing probability values of positivity and/or cell nuclei decreasing to the right and/or down.); and display the consolidated view of the set of patches on the user interface (fig. 4a). 
Molin does not disclose wherein each patch from the set of patches is associated with a spatial location in the tissue sample and the processor is configured to display markings in the consolidated view to associate patches from the set of patches that have proximate spatial locations with one another and link portions of the consolidated view to portions of the image data to switch between the consolidated view and a portion of the image data including a patch included in the consolidated view.

In a similar field of endeavor of medical microscopy imaging, Hall teaches wherein each patch from the set of patches is associated with a spatial location in the tissue sample ([0077] Thus, for example, different tissue samples from different cut locations can have unique electronic identifiers that correlate a tissue sample Ts with a physical cut location 10 and/or the corresponding virtual cut location 110.) and the processor is configured to display markings in the consolidated view to associate patches from the set of patches that have proximate spatial locations with one another ("Fig. 4 [0077] The circuit 30 can be configured to correlate virtual cut mark locations 110 on the macroscopic map M (FIG. 4) to the tissue samples Ts taken from the respective cut locations 10 on the specimen G. Thus, for example, different tissue samples from different cut locations can have unique electronic identifiers that correlate a tissue sample Ts with a physical cut location 10 and/or the corresponding virtual cut location 110.") and link portions of the consolidated view to portions of the image data to switch between the consolidated view and a portion of the image data including a patch included in the consolidated view ([0081] The cut location marks 110 on a respective macroscopic image or model M may be configured as active or inactive objects or links.
For active objects or links, a user may select (e.g., click or touch) a particular cut location mark 110 on image or model M on the display 35 and the viewer V can automatically present WSI slides S associated with the selected cut location mark. The resulting presentation on the display 35 can be to provide the relevant slides S in a new larger viewing window, in a concurrent adjacent or overlay window and/or along one side of the macroscopic image M. A set of thumbnail WSI images 200 of different slides S may be concurrently displayed along a side, top or bottom of the screen. The slide Sa being currently viewed by the viewer V can be highlighted, shown as color coded perimeter matching a color of the visually enhanced cut 110a (cut A). The set of slides 200 may be displayed via a pull down option, for example, or in subsets of slides, relative to a particular block B or a particular cut location, for example. [0083] Still referring to FIG. 4, the viewer V may optionally also be configured to provide a reference window Wref which shows an overview of the entire currently viewed slide Sa. [0108] FIGS. 8 and 10 illustrate an optional use of tether lines 88 that connect a cut location 110 on the macroscopic image M to a thumbnail slide image T and/or the larger displayed slide sample S. The visual tethers may be faded, dimmed or selected via a UI by a user. Typically, the tethers 88 are not required and may be used via a selection by a user or may be omitted as an option from the viewer V. Where used, the tethers may be just visual tethers or may be active links that allow for more information to be provided to a user for a particular cut location and/or slide. [0109] FIGS. 9 and 10 illustrate that a viewer V can show two large views of slides S.sub.1, S.sub.2 in high magnification and the corresponding cut location marks 110a (“A” and “E”) can be visually enhanced relative to the other cut mark locations. 
The cut location marks 110a that are related to the current large slide views S.sub.1, S.sub.2 can be shown as bold, with increased intensity and/or in solid line and/or with a different color relative to the other cut location marks 110 which can also be shown in broken lines. Here, each relevant cut mark location 110a is shown in a brighter, increased intensity and in a different solid line color on the image M.). (In fig. 9, T1 and T2 are different views of different portions of a cell; they appear in a consolidated view, i.e., portions of images stitched together in a row. T1 is shown as S1, which can be seen as a patch of the reference image with the red arrows, while T2 is shown as S2.)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine Molin’s disclosure of tissue image patching, with Hall’s teaching of spatial location of cell patches, in order to automatically generate physical or digital markings on the macroscopic images to reduce time and for better accuracy ([0006]).

Molin and Hall do not explicitly disclose or teach wherein the machine learning algorithm is trained on previously labeled volumetric imaging data to detect regions of diagnostic relevance, wherein said algorithm distinguishes between benign and malignant tissue features. Molin implicitly discloses wherein the machine learning algorithm is trained on previously labeled volumetric imaging data to detect regions of diagnostic relevance, wherein said machine learning algorithm distinguishes between benign and malignant tissue features ([0189] FIG. 17 is a flow chart of actions/operations that can occur to reclassify patches 25 based on user-input and can optionally retrain a machine learning model for the classification according to embodiments of the present invention.).
However, in a similar field of endeavor of class-aware adversarial pulmonary nodule synthesis, Yang explicitly teaches in better detail wherein the machine learning algorithm is trained on previously labeled volumetric imaging data to detect regions of diagnostic relevance ([0023] FIG. 1 shows an exemplary computed tomography (CT) medical image 100 of a patient showing pulmonary nodule 102. Pulmonary nodule 102 in image 100 is a malignant pulmonary nodule. Image 100 may be part of a training dataset for training a machine learning network for performing an image analysis task such as, e.g., classifying a nodule as benign or malignant.), wherein said machine learning algorithm distinguishes between benign and malignant tissue features ([0038] At step 404, the target nodule is classified using a trained machine learning network trained based on a synthesized medical image patch. The target nodule may be classified as one of malignant or benign. In one embodiment, the trained machine learning network is a deep 3D CNN pre-trained for natural video classification, however any suitable machine learning network may be employed. In one embodiment, the synthesized medical image patch is generated according to the steps of method 200 of FIG. 2.).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine Molin and Hall’s disclosure of spatial location of cell patches, with Yang’s teaching of benign and malignant classification, by using a synthesized medical image patch that includes the unmasked portion of the initial medical image patch and a synthesized nodule replacing the masked portion of the initial medical image patch in order to reduce processing time (abstract and [0003]).

Molin, Hall and Yang do not disclose an optical coherence tomography B-scan.
In a similar field of endeavor of identifying, detecting and/or diagnosing cancer using OCT features, Avanaki teaches receive optical coherence tomography B-scan image data (B-scan image data) of a tissue sample ([0089] A region of interest is specified in an OCT B-scan image. …. By repetition for several regions of interest (ROIs), which are averaged, and standard deviations calculated, optical radiomic features can be derived for that tissue: mean and standard deviation of scattering and absorption coefficients, and anisotropy factor. [0094] Different ways of choosing the ROI may be investigated on optical coherence tomography (OCT) images of milk phantoms: (1) a median filter may be initially applied on a stack of 170 OCT images acquired from the same cross section, the extracted optical properties may be averaged over several ROIs chosen in the resultant image).

It would have been obvious to one of ordinary skill in the art before the effective filing date of this invention to combine the known method of benign or malignant tissue determination using a computed tomography medical image of Yang’s disclosure with the known techniques of optical coherence tomography B-scan images as taught by Avanaki, in order to yield the predictable result of a non-invasive, non-contact way of imaging tissue to generate cross-sectional, ultra-high-resolution images, which results in an improved system.
Regarding claim 2, Molin discloses wherein the image data includes a set of two-dimensional scans of the tissue sample ([0026] The obtained WSI image can be a two-dimensional (2-D) WSI having between about 1×10.sup.6 pixels to about 1×10.sup.12 pixels.), the processor further configured to process the set of two-dimensional scans of the tissue sample into a plurality of patches each having the same predefined dimensions ([0101] The patches 25 in a respective patch gallery 25g view can have the same area (height by width dimensions) and each patch 25 can correspond to the same size area of the WSI image.), each patch from the plurality of patches being a different portion of a two-dimensional scan from the set of two-dimensional scans ([0099] In a respective patch gallery 25g, the patches 25 can be associated with different areas in an ROI 15r. The ROI 15r can be many times (10×-100× or more) larger in size than the patches 25 of the WSI image 15.), the set of patches being patches from the plurality of patches that include at least one feature associated with an abnormality ("[0039] The electronically classifying the patches can be carried out to classify the patches as either (i) comprising positive nuclei, (ii) comprising negative nuclei, or (iii) comprising non-nuclei. [0104] The number of samples N can be the same or different for each of the positive and negative patch classes.").

Regarding claim 4, Molin does not disclose but Hall teaches wherein the processor is configured to display (1) a first marking having a first color proximate to each patch from a first subset of the set of patches ([0082] Thus, each mark 110 can be shown with a defined color that corresponds to the slide color indicia, e.g., each mark 110 and its associated slides S can be shown all in green, red, fuchsia, pink, yellow and the like.)
and (2) a second marking having a second color different from the first color proximate to each patch from a second subset of the set of patches ([0082] Thus, each mark 110 can be shown with a defined color that corresponds to the slide color indicia, e.g., each mark 110 and its associated slides S can be shown all in green, red, fuchsia, pink, yellow and the like.), each patch from the first subset of patches having a spatial location that is adjacent to at least one other patch from the first subset of patches, and each patch from the second subset of patches having a spatial location that is adjacent to at least one other patch from the second subset of patches (fig. 8, ref m, ref 88 and [0082] For example, all slides from the same cut location, typically some with different stains, may be grouped together, e.g., adjacent each other, shown only in the thumbnail set 200 (with other slides from other cut locations omitted) and/or concurrently shown with all slides but emphasized with a common color background and/or perimeter, with an adjacent icon or with an overlay.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine Molin’s disclosure of tissue image patching, with Hall’s teaching of spatial location of cell patches, in order to automatically generate physical or digital markings on the macroscopic images ([0006]).

Regarding claims 5 and 21, Molin discloses wherein the processor is configured to display the consolidated view of the set of patches in a first area of the user interface (fig. 1, ref 25g (set of patches)), the processor further configured to display a perspective view of the tissue sample in a second area of the user interface such that the perspective view of the tissue sample is displayed together with at least a portion of the consolidated view of the set of patches (fig. 1, ref 16 (second area), ref 25g (set of patches)).
Regarding claims 6 and 22, Molin does not disclose but Hall teaches wherein the processor is further configured to display a two-dimensional view of the tissue sample at a predefined depth in a third area of the user interface such that the two-dimensional view of the tissue sample is displayed together with the perspective view and at least a portion of the consolidated view of the set of patches (fig 4, ref wref (full view of current slide), ref 5 (view of small area of wref), ref 200 (views of many small areas in ref 5)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine Molin’s disclosure of tissue image patching, with Hall’s teaching of spatial location of cell patches, in order to automatically generate physical or digital markings on the macroscopic images ([0006]).

Regarding claims 7 and 23, Molin does not disclose but Hall teaches wherein each patch from the set of patches is associated with a spatial location in the tissue sample (fig. 8, ref m, ref 88 and [0082] For example, all slides from the same cut location, typically some with different stains, may be grouped together, e.g., adjacent each other, shown only in the thumbnail set 200 (with other slides from other cut locations omitted) and/or concurrently shown with all slides but emphasized with a common color background and/or perimeter, with an adjacent icon or with an overlay.), the processor further configured to display a first marking proximate to a patch from the set of patches in the consolidated view and a second marking in at least one of the perspective view or the two-dimensional view that is indicative of the spatial location of the patch (fig. 8, ref m, ref 88 and [0082] For example, all slides from the same cut location, typically some with different stains, may be grouped together, e.g., adjacent each other, shown only in the thumbnail set 200 (with other slides from other cut locations omitted) and/or concurrently shown with all slides but emphasized with a common color background and/or perimeter, with an adjacent icon or with an overlay.), the first and second markings sharing a common characteristic (fig. 8, ref m, ref 88 and [0082] For example, all slides from the same cut location, typically some with different stains, may be grouped together, e.g., adjacent each other, shown only in the thumbnail set 200 (with other slides from other cut locations omitted) and/or concurrently shown with all slides but emphasized with a common color background and/or perimeter, with an adjacent icon or with an overlay.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine Molin’s disclosure of tissue image patching, with Hall’s teaching of spatial location of cell patches, in order to automatically generate physical or digital markings on the macroscopic images ([0006]).

Regarding claim 8, Molin does not disclose but Hall teaches wherein the first and second markings have the same color ([0082] In some embodiments, each slide S associated with a respective cut location mark 110 can be color-coded to the mark of that location, e.g., the object overlay line can have a color that is the same as that of the slide background and/or perimeter border, for example.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine Molin’s disclosure of tissue image patching, with Hall’s teaching of spatial location of cell patches, in order to automatically generate physical or digital markings on the macroscopic images ([0006]).
Regarding claim 9, Molin discloses wherein the processor is configured to, in response to receiving an input indicating a different predefined depth, display a two-dimensional view of the tissue sample at the different predefined depth ([0101] The patches 25 in a respective patch gallery 25g view can have the same area (height by width dimensions) and each patch 25 can correspond to the same size area of the WSI image. Alternatively, one or more of the patches 25 can have a larger or smaller area in the patch gallery view on the display 10 than one or more others.).

Regarding claim 10, Molin discloses wherein the processor is configured to, in response to detecting a selection of a patch from the set of patches, display a two-dimensional scan of the tissue sample that includes the patch selected from the set of patches ([0193] table 1, click on patch, navigation to location of patch in high magnification).

Regarding claim 11, Molin does not disclose but Hall teaches wherein the processor is further configured to display a marking proximate to a portion of the two-dimensional scan that corresponds to a location of the patch selected from the set of patches ([0079] The viewer V may be configured to show only the relevant cut mark 110a and omit or greatly reduce the intensity and/or visual prominence of the non-relevant cut marks (the cut marks not associated with the current slide Sa on the display 35).). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine Molin’s disclosure of tissue image patching, with Hall’s teaching of spatial location of cell patches, in order to automatically generate physical or digital markings on the macroscopic images ([0006]).
Regarding claims 12 and 17, Molin discloses wherein the processor is further configured to, in response to receiving an input indicating that a patch from the set of patches is not associated with an abnormality ([0115] Adjustment to move the lines 34 to form a different border in the patch gallery 25g allows a user to adjust the inclusion parameters applied by an automated circuit and can automatically remove the positive count of nuclei in patches associated with/from the sub-segment of the ROI 18 as non-relevant patches 25, which in the gallery 25g, reside closer to the “non-tumor” and “negative” sides of the patch gallery 25g.): remove the patch from the set of patches ([0115] Adjustment to move the lines 34 to form a different border in the patch gallery 25g allows a user to adjust the inclusion parameters applied by an automated circuit and can automatically remove the positive count of nuclei in patches associated with/from the sub-segment of the ROI 18 as non-relevant patches 25, which in the gallery 25g, reside closer to the “non-tumor” and “negative” sides of the patch gallery 25g.); and after removing the patch from the set of patches, generate an updated consolidated view of the set of patches ([0115] The view and/or count in a normal image view 15 on the display 10 can be updated automatically based on this user adjustment, typically after accepting the adjustment via user interface 30i.).

Regarding claim 13, Molin does not disclose but Hall teaches the processor operatively coupled to the imaging device ([0014] Other embodiments are directed to a grossing workstation. The workstation includes: a cut mark location identification circuit; at least one camera over a workspace at the workstation in communication with the cut mark location identification circuit; and a cutting instrument in communication with the cut mark location identification circuit.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine Molin’s disclosure of tissue image patching, with Hall’s teaching of spatial location of cell patches, in order to automatically generate physical or digital markings on the macroscopic images ([0006]).

Regarding claim 14, Molin discloses a memory (fig. 14, ref 136 (memory)); and a processor operatively coupled to the memory, the processor configured to (fig. 14, ref 148): receive a set of two-dimensional image scans of a three-dimensional tissue sample ([0027] The WSI image can be a three-dimensional (3-D) WSI with a z extent that has a plurality of slices across a depth of a tissue section with less pixels in the z extent relative to x and y extents.); process the set of two-dimensional image scans to produce a set of patches each having the same dimensions ([0101] The patches 25 in a respective patch gallery 25g view can have the same area (height by width dimensions) and each patch 25 can correspond to the same size area of the WSI image.), each patch from the set of patches including image data from a different portion of the set of two-dimensional image scans ([0099] In a respective patch gallery 25g, the patches 25 can be associated with different areas in an ROI 15r. The ROI 15r can be many times (10×-100× or more) larger in size than the patches 25 of the WSI image 15.); identify, using a machine learning algorithm ("[0095] The term “positivity” and derivatives thereof refers to a probability of a positive association with a clinical disease state such as cancer, for example, that can be identified by a stained color of a nuclei. The typical case is for cancer detection, diagnosis or evaluation, where nuclei relevant for disease assessment have been given a distinct staining, often a brown color. [0106] FIG.
9 illustrates that the circuit 10c can calculate probability using a machine learning system 66 that can evaluate measured properties of respective patches 25 input to the machine learning system 66, such as shape, intensity, volume, area, color and the like and output a calculated probability including, for example, a nucleus probability (a probability that the detected nuclei is actually a nuclei) and a positivity probability (a probability that the detected nuclei is positively stained).") and (2) include at least one feature that is a suspected abnormality ([0189] FIG. 17 is a flow chart of actions/operations that can occur to reclassify patches 25 based on user-input and can optionally retrain a machine learning model for the classification according to embodiments of the present invention.); and generate a consolidated view of the subset of patches, such that each patch in the consolidated view includes at least one feature that is a suspected abnormality ([0104] The patches 25 can be electronically divided and/or pre-sorted into a plurality of different primary classes based on the presence of a cell nucleus and a positivity probability and may exclude nuclei outside a visible span (block 200). [0105] This sorting procedure is representative of a positive count evaluation of cells but other sorting procedures can be used for other evaluations of a patch gallery content, such as mitosis as shown in FIGS. 5A and 5B where there is no particular order but detected mitoses can be shown side by side in the patches displayed to exclude false positives. [0106] FIG.
9 illustrates that the circuit 10c can calculate probability using a machine learning system 66 that can evaluate measured properties of respective patches 25 input to the machine learning system 66, such as shape, intensity, volume, area, color and the like and output a calculated probability including, for example, a nucleus probability (a probability that the detected nuclei is actually a nuclei) and a positivity probability (a probability that the detected nuclei is positively stained).)), in which the subset of patches are arranged according to a predefined layout ([0040] The at least one panel of the patch gallery can be arranged in an order from left to right and/or up to down, with decreasing probability values of positivity and/or cell nuclei decreasing to the right and/or down.).

Molin does not disclose or teach wherein each patch from the set of patches is associated with a spatial location in the tissue sample, the subset of patches arranged in the consolidated view such that patches from the subset of patches having spatial locations proximate to one another are arranged adjacent to one another and link portions of the consolidated view to portions of the image data to switch between the consolidated view and a portion of the image data including a patch included in the consolidated view.
In a similar field of endeavor of medical microscopy imaging, Hall teaches and wherein each patch from the set of patches is associated with a spatial location in the tissue sample ([0077] Thus, for example, different tissue samples from different cut locations can have unique electronic identifiers that correlate a tissue sample Ts with a physical cut location 10 and/or the corresponding virtual cut location 110.), the subset of patches arranged in the consolidated view such that patches from the subset of patches having spatial locations proximate to one another are arranged adjacent to one another ([0082] For example, all slides from the same cut location, typically some with different stains, may be grouped together, e.g., adjacent each other, shown only in the thumbnail set 200 (with other slides from other cut locations omitted) and/or concurrently shown with all slides but emphasized with a common color background and/or perimeter, with an adjacent icon or with an overlay.); and link portions of the consolidated view to portions of the image data to switch between the consolidated view and a portion of the image data including a patch included in the consolidated view ([0081] The cut location marks 110 on a respective macroscopic image or model M may be configured as active or inactive objects or links. For active objects or links, a user may select (e.g., click or touch) a particular cut location mark 110 on image or model M on the display 35 and the viewer V can automatically present WSI slides S associated with the selected cut location mark. The resulting presentation on the display 35 can be to provide the relevant slides S in a new larger viewing window, in a concurrent adjacent or overlay window and/or along one side of the macroscopic image M. A set of thumbnail WSI images 200 of different slides S may be concurrently displayed along a side, top or bottom of the screen. 
The slide Sa being currently viewed by the viewer V can be highlighted, shown as color coded perimeter matching a color of the visually enhanced cut 110a (cut A). The set of slides 200 may be displayed via a pull down option, for example, or in subsets of slides, relative to a particular block B or a particular cut location, for example. [0108] FIGS. 8 and 10 illustrate an optional use of tether lines 88 that connect a cut location 110 on the macroscopic image M to a thumbnail slide image T and/or the larger displayed slide sample S. The visual tethers may be faded, dimmed or selected via a UI by a user. Typically, the tethers 88 are not required and may be used via a selection by a user or may be omitted as an option from the viewer V. Where used, the tethers may be just visual tethers or may be active links that allow for more information to be provided to a user for a particular cut location and/or slide. [0109] FIGS. 9 and 10 illustrate that a viewer V can show two large views of slides S1, S2 in high magnification and the corresponding cut location marks 110a (“A” and “E”) can be visually enhanced relative to the other cut mark locations. The cut location marks 110a that are related to the current large slide views S1, S2 can be shown as bold, with increased intensity and/or in solid line and/or with a different color relative to the other cut location marks 110 which can also be shown in broken lines. Here, each relevant cut mark location 110a is shown in a brighter, increased intensity and in a different solid line color on the image M.). [Image: media_image5.png, 607×497, greyscale] (In fig. 9, T1 and T2 are different views of different portions of a cell. They are in a consolidated view (portions of images stitched together in a row). T1 is shown as S1 (seen as a patch of the reference image with the red arrows) while T2 is shown as S2.)
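Hall's arrangement keeps spatially proximate patches adjacent in the consolidated view and retains a link from each displayed patch back to its location in the source image. A greedy nearest-neighbor ordering is one simple way to sketch the adjacency behavior; Hall does not prescribe any particular algorithm, and all names and coordinates below are invented for illustration:

```python
import math

def arrange_by_proximity(patches):
    """Greedy nearest-neighbor ordering: start from the first patch and keep
    appending the spatially closest remaining patch, so patches with nearby
    tissue locations sit adjacent in the consolidated view."""
    remaining = list(patches)
    ordered = [remaining.pop(0)]
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(ordered[-1]["loc"], p["loc"]))
        remaining.remove(nxt)
        ordered.append(nxt)
    return ordered

patches = [
    {"id": "a", "loc": (0.0, 0.0)},
    {"id": "b", "loc": (9.0, 9.0)},
    {"id": "c", "loc": (0.5, 0.5)},
    {"id": "d", "loc": (9.5, 9.0)},
]
view = arrange_by_proximity(patches)  # a, c, b, d: spatial neighbors stay adjacent

# Each patch keeps its tissue location, so a UI can link a patch in the
# consolidated view back to the matching portion of the full image, roughly
# as Hall's active cut-location links do.
link_map = {p["id"]: p["loc"] for p in view}
```

A lookup in `link_map` is the minimal analogue of clicking a cut-location mark to jump between the consolidated view and the full image.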
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine Molin’s disclosure of tissue image patching, with Hall’s teaching of spatial location of cell patches, in order to automatically generate physical or digital markings on the macroscopic images ([0006]).

Molin and Hall do not explicitly disclose or teach and wherein the machine learning algorithm is trained on previously labeled volumetric imaging data to detect regions of diagnostic relevance, wherein said machine learning algorithm distinguishes between benign and malignant tissue features. Molin implicitly discloses wherein the machine learning algorithm is trained on previously labeled volumetric imaging data to detect regions of diagnostic relevance, wherein said machine learning algorithm distinguishes between benign and malignant tissue features ([0189] FIG. 17 is a flow chart of actions/operations that can occur to reclassify patches 25 based on user-input and can optionally retrain a machine learning model for the classification according to embodiments of the present invention.). However, in a similar field of endeavor of class-aware adversarial pulmonary nodule synthesis, Yang explicitly teaches, in better detail, and wherein the machine learning algorithm is trained on previously labeled volumetric imaging data to detect regions of diagnostic relevance ([0023] FIG. 1 shows an exemplary computed tomography (CT) medical image 100 of a patient showing pulmonary nodule 102. Pulmonary nodule 102 in image 100 is a malignant pulmonary nodule.
Image 100 may be part of a training dataset for training a machine learning network for performing an image analysis task such as, e.g., classifying a nodule as benign or malignant.), wherein said machine learning algorithm distinguishes between benign and malignant tissue features ([0038] At step 404, the target nodule is classified using a trained machine learning network trained based on a synthesized medical image patch. The target nodule may be classified as one of malignant or benign. In one embodiment, the trained machine learning network is a deep 3D CNN pre-trained for natural video classification, however any suitable machine learning network may be employed. In one embodiment, the synthesized medical image patch is generated according to the steps of method 200 of FIG. 2.).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine Molin and Hall’s disclosure of spatial location of cell patches, with Yang’s teaching of benign and malignant classification, by using a synthesized medical image patch that includes the unmasked portion of the initial medical image patch and a synthesized nodule replacing the masked portion of the initial medical image patch in order to reduce processing time (abstract and [0003]).

Regarding claim 20, Molin does not disclose but Hall teaches wherein the consolidated view of the set of patches is displayed in a first area of the user interface, the method further comprising (fig. 10, ref t1, t2): displaying in one or more second areas of the user interface at least one of: a perspective view of the tissue sample, or a two-dimensional view of the tissue sample at a predefined depth (fig. 10, ref s1, s2 (close up view of t1, t2)).
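Yang's contribution, as cited, is a classifier trained on labeled imaging data to separate benign from malignant features. As a stand-in for the deep 3D CNN the reference describes, the sketch below uses a toy nearest-centroid classifier on two-dimensional feature vectors; the features, labels, and values are invented for illustration and come from none of the cited references:

```python
def train_centroids(samples):
    """Average the feature vectors of each labeled class (a tiny stand-in
    for training the deep 3D CNN classifier Yang describes)."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lbl: [v / counts[lbl] for v in acc] for lbl, acc in sums.items()}

def classify(centroids, features):
    """Assign the label whose class centroid is closest (squared Euclidean)."""
    return min(
        centroids,
        key=lambda lbl: sum((a - b) ** 2 for a, b in zip(centroids[lbl], features)),
    )

# Toy labeled "patches": (intensity, volume) feature pairs.
training = [
    ([0.2, 0.1], "benign"), ([0.3, 0.2], "benign"),
    ([0.8, 0.9], "malignant"), ([0.7, 0.8], "malignant"),
]
model = train_centroids(training)
label = classify(model, [0.75, 0.85])
# → "malignant"
```

The point is only the shape of the pipeline (labeled data in, benign/malignant decision out), not the specific model.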
[Image: media_image7.png, 299×473, greyscale] It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine Molin’s disclosure of tissue image patching, with Hall’s teaching of spatial location of cell patches, in order to automatically generate physical or digital markings on the macroscopic images ([0006]).

Claim(s) 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Molin (US 20180089496 A1), in view of Hall (US 20150279026 A1), in view of Yang (US 20200402215 A1), in view of Avanaki (US 20200359887 A1), and further in view of Agus (US 20200388028 A1).

Regarding claim 16, Molin, Hall, Yang and Avanaki do not disclose or teach wherein the consolidated view of the subset of patches includes information associated with each patch from the subset of patches, the information including at least one of: a ductal carcinoma in situ (DCIS) score, a confidence value associated with the DCIS score, spatial location information. In a similar field of endeavor of diagnosing, theragnosing, and classifying types of cancer, Agus teaches wherein the consolidated view of the subset of patches includes information associated with each patch from the subset of patches, the information including at least one of: a ductal carcinoma in situ (DCIS) score ([0073] The network's performance on the DCIS dataset was intriguing. While the network was trained on IDC images, the high accuracy on DCIS images may be explained by several factors. Biologically, it has been noted that there are morphometric similarities between DCIS and IDC [17], thus patterns learned on IDC may apply to DCIS. Another explanation is that the co-occurrence of DCIS and IDC in some of the IDC training images allowed the network to learn patterns for DCIS and IDC simultaneously. An alternative explanation is that the higher accuracy on the DCIS dataset was due to the method of dataset preparation.
Images in the DCIS dataset were carefully chosen by pathologists and contain little stroma relative to epithelial tissue. On the other hand, images in the IDC datasets were obtained from a commercial tissue microarray supplier. These cores are large, diverse (containing a mixture of stromal and epithelial tissue), and noisy (exhibiting varying degrees of staining artifact). Thus, biologic or region-selection factors may explain the accuracy on the DCIS dataset.), a confidence value associated with the DCIS score, spatial location information.

It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine Molin, Hall, Yang and Avanaki’s disclosure of benign and malignant classification using OCT b-scan images, with Agus’s teaching of a ductal carcinoma in situ diagnosis, in order to extract latent information from basic imaging techniques to predict molecular level information that captures the underlying biology of the cancer ([0005]).

Response to Arguments

Applicant’s arguments, see page 3, filed 03/11/2026, with respect to the rejection(s) of claim(s) 1, 14, 18 under 35 U.S.C. 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Avanaki (US 20200359887 A1).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. US 20230389799 A1 (pertinent to claim 1): Claim 9. The method of claim 8, wherein the cross-sectional intravascular image is an optical coherence tomography image, an optical frequency domain imaging image, or an intravascular ultrasound image.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AHMED A NASHER whose telephone number is (571)272-1885. The examiner can normally be reached Mon - Fri 0800 - 1700.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Moyer, can be reached at (571) 272-9523. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AHMED A NASHER/
Examiner, Art Unit 2675

/ANDREW M MOYER/
Supervisory Patent Examiner, Art Unit 2675

Prosecution Timeline

Feb 01, 2022
Application Filed
Jul 27, 2024
Non-Final Rejection — §103
Dec 06, 2024
Response Filed
Feb 18, 2025
Final Rejection — §103
Jun 17, 2025
Response after Non-Final Action
Jul 22, 2025
Request for Continued Examination
Jul 23, 2025
Response after Non-Final Action
Jul 31, 2025
Non-Final Rejection — §103
Dec 03, 2025
Response Filed
Jan 10, 2026
Final Rejection — §103
Jan 29, 2026
Interview Requested
Feb 19, 2026
Applicant Interview (Telephonic)
Feb 25, 2026
Examiner Interview Summary
Mar 12, 2026
Request for Continued Examination
Mar 16, 2026
Response after Non-Final Action
Mar 21, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12601840
TUNING PARAMETER DETERMINATION METHOD FOR TRACKING AN OBJECT, A GROUP DENSITY-BASED CLUSTERING METHOD, AN OBJECT TRACKING METHOD, AND AN OBJECT TRACKING APPARATUS USING A LIDAR SENSOR
2y 5m to grant · Granted Apr 14, 2026
Patent 12586329
MODELING METHOD, DEVICE, AND SYSTEM FOR THREE-DIMENSIONAL HEAD MODEL, AND STORAGE MEDIUM
2y 5m to grant · Granted Mar 24, 2026
Patent 12582373
GENERATING SYNTHETIC ELECTRON DENSITY IMAGES FROM MAGNETIC RESONANCE IMAGES
2y 5m to grant · Granted Mar 24, 2026
Patent 12567255
FEW-SHOT VIDEO CLASSIFICATION
2y 5m to grant · Granted Mar 03, 2026
Patent 12561965
NEURAL NETWORK CACHING FOR VIDEO
2y 5m to grant · Granted Feb 24, 2026
Based on this examiner's 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
81%
Grant Probability
99%
With Interview (+34.4%)
2y 9m
Median Time to Grant
High
PTA Risk
Based on 99 resolved cases by this examiner. Grant probability derived from career allow rate.
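The 81% grant probability shown above is simply the examiner's career allow rate, 80 granted out of 99 resolved cases, rounded to the nearest percent:

```python
granted, resolved = 80, 99          # career figures quoted above
allow_rate = granted / resolved     # ≈ 0.808
print(f"Grant probability: {allow_rate:.0%}")  # prints "Grant probability: 81%"
```

(The interview-adjusted 99% figure is a separate, interview-conditioned estimate and is not derived by adding the +34.4% lift to this rate.)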
