Prosecution Insights
Last updated: April 19, 2026
Application No. 18/535,589

METHOD FOR CLOSING INCOMPLETE AREA OF INTEREST MARKINGS AND FOR FACILITATING TISSUE TREATMENT

Status: Non-Final Office Action (§103)
Filed: Dec 11, 2023
Examiner: CODRINGTON, SHANE WRENSFORD
Art Unit: 2667
Tech Center: 2600 (Communications)
Assignee: Tecan Trading AG
OA Round: 1 (Non-Final)
Grant Probability: 100% (Favorable)
OA Rounds: 1-2
To Grant: 2y 9m
With Interview: 0%

Examiner Intelligence

Career Allowance Rate: 100%, above average (1 granted / 1 resolved; +38.0% vs TC avg)
Interview Lift: -100.0% (with vs. without interview, across resolved cases with interview)
Typical Timeline: 2y 9m average prosecution; 14 applications currently pending
Career History: 15 total applications across all art units

Statute-Specific Performance

§101: 5.3% (-34.7% vs TC avg)
§103: 60.5% (+20.5% vs TC avg)
§102: 23.7% (-16.3% vs TC avg)
§112: 7.9% (-32.1% vs TC avg)

Tech Center averages are estimates. Based on career data from 1 resolved case.

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 12/11/2023 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner. The IDS submitted on 02/01/2024 is likewise in compliance with 37 CFR 1.97 and is being considered by the examiner.

Specification

The disclosure is objected to because of the following informality: label 320 in Figure 3a is not described in the specification. Appropriate correction is required.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 19-22, 24, 25, 28, 33, and 35 are rejected under 35 U.S.C. 103 as being unpatentable over Kono et al. (Kono hereinafter, US 20120076420 A1) in view of Fuchs et al. (Fuchs hereinafter, US 11682186 B2).

As per claim 19, Kono teaches detecting in the image endpoints of the one or more contours (paragraph [0047]: "interpolation-line generating unit 18 performs end-point detection processing to detect end points of the contour edges"; also shown in Figure 7, step c1); applying a score to pairs of detected endpoints indicating a likelihood that the pair of endpoints correspond to the same contour (paragraph [0080]: "the endpoint selecting unit 182 performs endpoint pair selection processing (Fig 7 step c3)…when a sum of gradient costs at respective control points on an interpolation line that is generated for the end-point pair selected as above is greater than a threshold, it is determined that the processing at Step c9 is not performed, and therefore, the processing returns to Step c3. In this case, an end-point pair is re-selected…". For each candidate interpolation line an evaluation value is computed, and the connection with the best evaluation value (gradient cost) determines the corresponding endpoint pair. The gradient cost is a score used to determine endpoint pairs and their selection/reselection; this score also appears in Kono's claims 5 and 13.); selecting, using the score, a pair of endpoints corresponding to the same partial contour (paragraph [0080]: "when a sum of gradient costs at respective control points on an interpolation line that is generated for the end-point pair selected as above is greater than a threshold, it is determined that the processing at Step c9 is not performed, and therefore, the processing returns to Step c3. In this case, an end-point pair is re-selected by replacing the connection-destination end point for the end point of the contour edge (the connection-base end point) with an end point of another contour edge." Paragraphs [0115]-[0122] describe the selection pipeline in detail with regard to a second embodiment. Moreover, paragraph [0082] states that "in actual processing, for example, a distance from the connection-base end point or a gradient change direction at the connection-base end point is used as a parameter, and end points of contour edges other than the connection-base end point are selected one by one as the connection-destination end point on the basis of a value of the parameter every time the processing at Step c3 is performed." Distance and/or gradient change direction are incorporated into the gradient cost score.); and drawing in the image a segment connecting the two endpoints in the selected pair, obtaining a contour enclosing an area (Figure 7 label c5, Figure 9 label L4, Figure 10 label L4; paragraph [0062]: "…and a line segment connecting the connection-base end point P41 and the connection-destination end point P42 on the intersecting line as indicated by a chain line in FIGS. 9 and 10 are generated as an initial interpolation line L4.").

Kono does not teach obtaining an image of a tissue section together with area of interest markings comprising one or more contours, a contour in the image at least partially surrounding an area of the image, at least one of the one or more contours being a partial contour which does not fully enclose an area; applying a machine learned model to the image of the tissue section to obtain a tissue image comprising the tissue; determining, for the contour enclosing an area, an intersection between the enclosed area and the tissue in the tissue image; and removing the contour if an area of the intersection is below a threshold.
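The detect/score/select/draw sequence the rejection attributes to Kono can be sketched in a few lines. This is not Kono's algorithm: here plain Euclidean gap length stands in for Kono's gradient-cost sum, and `close_partial_contours` and `pair_score` are hypothetical helper names.

```python
import math
from itertools import combinations

def pair_score(p, q):
    # Illustrative score: lower is better. Kono's gradient cost would
    # additionally fold in local gradient information along the candidate
    # interpolation line; a bare Euclidean distance stands in for it here.
    return math.dist(p, q)

def close_partial_contours(endpoints):
    """Greedily pair detected endpoints, best-scoring pair first, and
    return the segments that would be drawn to close each gap."""
    remaining = set(range(len(endpoints)))
    segments = []
    while len(remaining) >= 2:
        i, j = min(combinations(sorted(remaining), 2),
                   key=lambda ij: pair_score(endpoints[ij[0]], endpoints[ij[1]]))
        segments.append((endpoints[i], endpoints[j]))
        remaining -= {i, j}
    return segments
```

For four endpoints belonging to two broken contours, the two nearest pairs are joined first, which mirrors the selection/reselection loop described above.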
Fuchs teaches obtaining an image of a tissue section together with area of interest markings comprising one or more contours, a contour in the image at least partially surrounding an area of the image, at least one of the one or more contours being a partial contour which does not fully enclose an area (Figure 1, Figure 2, Methods paragraph 32); applying a machine learned model to the image of the tissue section to obtain a tissue image comprising the tissue (paragraph 38: "All metrics were calculated using the Scikit-learn package in Python"; Summary: "…the computing system may provide the image identifying the second subset of pixels as the ROI to train a machine-learning model for at least one of image segmentation, image localization, or image classification."; Description paragraph 18: "The annotation mask from step (v) can then be used for machine learning and computer vision pipelines."; Description paragraph 82: "Upon receipt, the model trainer system 710 may train the model 760 to learn to perform image segmentation, image localization, or image classification. The model 760 may be a machine learning (ML) model"); and determining, for the contour enclosing an area, an intersection between the enclosed area and the tissue in the tissue image, and removing the contour if an area of the intersection is below a threshold (Figure 2 and Figure 4 pipeline: Fuchs multiplies the tissue mask by the annotation mask to obtain the content of only the annotated region. Description paragraph [0004]: "Step 7: A multiplication of the tissue mask (step 3) and the annotation mask (step 6) forms the final output mask." The overlap between the AOI region and the tissue mask is the final output mask and is the "intersection between the enclosed area and the tissue in the tissue image". Then, in steps 5 and 6, Fuchs filters out small noisy regions based on size. Paragraph [0009]: "A noise filter removes small regions based on size. The pen mask is then subtracted from the contour mask to obtain the content of the annotated region only." Paragraph [0040]: "To reduce noise in the filled contours, components smaller than 3,000 pixels are filtered. This threshold was chosen as it worked best on the data set by filtering small regions such as unrelated pixels, small contours, and text regions while letting tissue annotations pass.").

Accordingly, it would have been obvious to one of ordinary skill in the art at the time the invention was effectively filed to have modified Kono (detecting endpoints, scoring endpoint pairs, selecting endpoint pairs based on a score, and drawing a connecting segment so that each AOI contour is closed around a region of interest) with Fuchs (directed to processing annotated regions on pathology tissue images by applying a machine learned tissue segmentation model, computing the intersection area between the annotation and the tissue region, and filtering/discarding annotations whose intersection with tissue is below a threshold) and arrived at completing incomplete AOI markings on tissue images by detecting endpoints, scoring endpoint pairs, selecting endpoint pairs, drawing a connecting segment to close the contour, applying a machine learned model to obtain a tissue image, determining the intersection between each closed contour and the tissue, and removing contours whose intersection area is below a threshold, thereby arriving at all the limitations of claim 19.

One of ordinary skill in the art would have been motivated to modify Kono with Fuchs because, once contours have been completed by identifying and connecting endpoints, it is desirable in the field of digital pathology to ensure that only images containing sufficient data of completed, accurate contours with viable tissue/contour intersections are carried forward for analysis, whether through machine learning or human observation and interpretation.
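The intersection-then-threshold step attributed to Fuchs reduces to a mask overlap check. A minimal sketch, representing masks as pixel-coordinate sets for clarity; the 3,000-pixel default mirrors the threshold Fuchs reports for his data set (it is data-set dependent), and `keep_contour` is a hypothetical name.

```python
def keep_contour(enclosed, tissue, min_area=3000):
    """Return True if the closed contour should survive filtering.

    `enclosed` is the set of (x, y) pixels inside the closed contour,
    `tissue` the set of pixels the segmentation model labeled as tissue.
    The contour is kept only when their intersection (Fuchs's mask
    multiplication) reaches `min_area` pixels.
    """
    return len(enclosed & tissue) >= min_area
```

A contour whose enclosed area barely touches tissue is discarded, which is the "removing the contour if an area of the intersection is below a threshold" limitation.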
The score applied to endpoints allows correct pairing of contours and avoids mismatch. This is desirable because it saves processing time and avoids generating masks for areas that are too small, irrelevant, contain no data, or have unintended contour/endpoint matching. A person of ordinary skill in the art would find it reasonable to apply this threshold-based filtration/removal of contours to Kono's workflow to achieve a more streamlined and augmented AOI completion that would be deemed clinically useful. The machine learning aspect is an obvious improvement, as models will be able to learn to differentiate intended from unintended AOI markings and make inferences across human-annotated tissue slide images, further streamlining the clinical process.

As per claim 20, Kono in view of Fuchs teaches all the claim elements previously stated in claim 19's 103 rejection; see claim 19's 103 rejection. Kono alone teaches the additional limitation of dependent claim 20. Kono further teaches determining for an endpoint a direction of the corresponding contour at the endpoint (paragraph [0116], in the "second embodiment" section of the description: endpoint pair selection uses "distance from the connection base endpoint or a gradient change direction at the connection base end point as parameters…are selected as the connection destination end points") and the score for a pair of endpoints depending on a difference between the directions of the endpoints in the pair (paragraph [0055]: "In the contour-candidate-edge detection processing, a gradient magnitude of each pixel is calculated. Then, a position (ridge) at which a change in a gradient is the greatest in each gradient range, in which a gradient direction is identical, is detected on the basis of the calculated gradient magnitude of each pixel, so that contour candidate edges are obtained."; Kono's claim 13 states "wherein the interpolation-line optimizing unit calculates the cost value on the basis of a direction of the contour edge at each pixel on each of the interpolation lines and on the basis of a direction in which a change in the gradients of the pixels on each of the interpolation lines becomes greatest." The "[gradient] cost value on the basis of a direction" is the score depending on the difference between directions. If the costs are satisfactory, the endpoints on the interpolation line are chosen as the official endpoints. This appears in Kono's specification at paragraph [0095]: "difference between the direction of the contour edge and the gradient change direction is calculated, and a gradient cost is calculated so that the value of the gradient cost decreases as the difference increases and the value of the gradient cost increases as the difference decreases. For example, an inverse of the difference is calculated and set as the value of the gradient cost." This directly shows that the score (gradient cost) depends on direction. The best of the gradient costs selects the proper interpolation line among the plurality of interpolation lines, and the endpoints on that line, out of the plurality of endpoints, are chosen.)

It would have been obvious to a person of ordinary skill in the art at the time the invention was effectively filed to add, to the modified Kono/Fuchs pipeline previously described in claim 19's 103 rejection, endpoint direction-based scoring when completing AOI pen contours, so that only plausible endpoint pairs with similar contour directions are connected, ensuring clean AOI regions.
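Kono's quoted paragraph [0095] describes a gradient cost that is the inverse of a direction difference. A minimal sketch of that relationship, assuming angles in radians; the angle-wrapping convention and the `eps` guard against division by zero are my additions, not Kono's.

```python
import math

def direction_score(theta_a, theta_b, eps=1e-6):
    """Score a candidate endpoint pair from the contour directions at its
    two endpoints. Per the inverse relationship quoted from Kono [0095],
    the score grows as the direction difference shrinks. atan2 wraps the
    difference into (-pi, pi] so that 0 and 2*pi compare as equal."""
    diff = abs(math.atan2(math.sin(theta_a - theta_b),
                          math.cos(theta_a - theta_b)))
    return 1.0 / (diff + eps)
```

Pairs whose contour directions nearly agree score far higher than misaligned pairs, which is the behavior the rejection relies on for claim 20.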
A person of ordinary skill in the art in the field of digital pathology and/or image processing would be motivated to use Kono's direction-aware scoring in addition to the modified pipeline design because it mitigates the risk of incorrectly joining endpoint pairs when relying solely on other factors, i.e., proximity. This leads to more natural, pathologist-intended AOI extraction. In the proposed modified Kono/Fuchs pipeline, if proximity were considered alone and several endpoints lay close to each other without corresponding to the true/preferred AOI contours, the wrong pair could easily be chosen, even more so without direction-based scoring. Determining the direction of the contour at each endpoint and scoring on it allows tracing methods that depend on local information, and are therefore susceptible to noise, to still detect the edges that need to be connected.

As per claim 21, Kono in view of Fuchs teaches all the claim elements previously stated in claim 19's 103 rejection; see claim 19's 103 rejection.
Kono alone teaches the additional limitation of claim 21, a distance term and a direction term ([0082]: "when both of the distance from the connection-base end point and the gradient change direction of the connection-base end point are used as two parameters, end points, which are located…are sequentially selected as the connection-destination end point from among the end points (C(x, y)=2) of the contour edge, in order from the nearest to the connection-base end point"; claim 6: "end-point-pair selecting unit selects the end points of the different contour edges as the connection destinations on the basis of at least one of a distance from the end point of the contour edge as the connection base and a direction of a gradient at the end point of the contour edge as the connection base.") comprising a score (paragraph [0080]: "In the end-point-pair selection processing…a sum of gradient costs at respective control points on an interpolation line that is generated for the end-point pair selected as above is greater than a threshold, it is determined that the processing at Step c9 is not performed, and therefore, the processing returns to Step c3. In this case, an end-point pair is re-selected." The aforementioned end-point-pair selecting unit bases the final endpoint connections on the gradient cost. Before this can happen, the distance and direction of the contours and their plausibly suitable endpoints must be incorporated; the gradient cost (score) is obtained after the distance and direction terms for each plausible endpoint are set.)

Accordingly, it would have been obvious to a person of ordinary skill in the art to frame the score using Kono's specific distance/direction parameters, so that endpoint pairs that are both near and aligned are favored. This produces more accurate closed pen annotations to be adopted before Fuchs's tissue intersection step.
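The distance term and direction term of claim 21 can be combined into a single cost. A sketch under stated assumptions: the weighted-sum form and the weights are illustrative choices of mine, not Kono's formula, and `combined_score` is a hypothetical name; lower is better.

```python
import math

def combined_score(p, q, theta_p, theta_q, w_dist=1.0, w_dir=1.0):
    """Cost for joining endpoints p and q (pixel coordinates) with
    contour directions theta_p, theta_q (radians). Sums a distance
    term and a wrapped direction-difference term; lower is better."""
    dist = math.dist(p, q)
    diff = abs(math.atan2(math.sin(theta_p - theta_q),
                          math.cos(theta_p - theta_q)))
    return w_dist * dist + w_dir * diff
```

With both terms present, a nearby, well-aligned pair outranks a distant, misaligned one, which is the favoring behavior the rejection describes.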
This would improve robustness and mitigate the possibility of misconnection where tissue is sparse or a manual annotation is rough (which Fuchs identifies as a possible source of quality loss in extraction), by using Kono's distance/direction parameters to address it. Kono's parameters would give Fuchs a consistent connection throughout the process, further augmenting the method.

As per claim 22, Kono in view of Fuchs teaches all the claim elements previously stated in claim 19's 103 rejection; see claim 19's 103 rejection. Kono does not teach using a machine learned model to directly output the contour/annotation image. Fuchs teaches using machine learning model 765 (Description, section B, "Systems and Methods for Identifying Marked Regions of Interests (RoIs) in Images", paragraph 54), trained on images of tissue sections, to derive segmentation masks of AOIs. Accordingly, it would have been obvious for a person of ordinary skill in the art to modify, by addition, Kono's method of contour extraction with the machine learning concept Fuchs discloses, outputting machine learned AOI images and using those images as the "image comprising one or more contours" to be further processed for endpoint completion per Kono. A person of ordinary skill in the art would be motivated to do this because Fuchs frames the ML-based outputs as a way to obtain accurate digital AOI masks from the tissue images for computational use, and Kono's endpoint completion method can operate on any contour representation. A machine learned contour image would be a straightforward improvement: it mitigates noise and variation, helps filter out images affected by pathologist error, lowers time consumption, and reduces redundancy, problems Fuchs expressly notes would arise without it.

As per claim 24, Kono in view of Fuchs teaches all the claim elements previously stated in claim 19's 103 rejection; see claim 19's 103 rejection.
Kono further teaches detecting junctions where a contour bifurcates and removing one or more endpoints to eliminate the bifurcation (Figure 17 label f6; paragraph [0081]: "According to the first modification, after the line-edge extracting unit 161 extracts line edges, the branched-edge removing unit 163a performs branched-edge removal processing (Step f6)." Figure 18 is a diagram explaining the branched-edge removal processing: in (a) of FIG. 18, a line edge EL7 extracted at Step b5 of FIG. 17 is illustrated; in the branched-edge removal processing, an end point of the extracted line edge is detected, and a branch point of the line edge is also detected. "Branched edge" is analogous to "bifurcation", as "branch point of the line edge" is analogous to a "junction".)

Accordingly, it would have been obvious to one of ordinary skill in the art to add this element to the modified Kono/Fuchs pipeline so that the system detects junctions where a contour bifurcates and then removes endpoints associated with those junctions, eliminating spurs and other noise arising from a contour. It is worth noting that Fuchs (by which Kono's method is modified) expressly filters out small contours, such as small strokes, which can create branches, bifurcations, and other unwanted artifacts in an AOI; this allows for a cleaner AOI. A person of ordinary skill in the art would be motivated to remove small contours (which can create branching and bifurcations) from the AOI described in the modified Kono/Fuchs workflow by adding Kono's already-disclosed bifurcation detection and elimination, so that only the true AOI boundary of the annotated tissue sample remains. This in turn improves the fidelity of the completed AOI relative to the pathologist's intent. It also yields more accurate datasets for the machine learning model to make correct inferences from, and avoids data that could bias learning.
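The junction-detection and spur-removal idea can be sketched as a small pruning routine over a one-pixel-wide contour. This is a generic sketch, not Kono's branched-edge removal: pixels form a 4-connected set for clarity, and `max_len` (the branch-length cutoff) and both function names are hypothetical.

```python
def neighbors(p, pixels):
    # 4-connected neighbours of p that lie on the contour.
    x, y = p
    return [q for q in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))
            if q in pixels]

def prune_short_branches(pixels, max_len=3):
    """Walk from each endpoint (1 neighbour) toward a junction
    (3+ neighbours) and delete the walked pixels if the branch is short,
    eliminating spurs while leaving the main contour intact."""
    pixels = set(pixels)
    for start in [p for p in pixels if len(neighbors(p, pixels)) == 1]:
        if start not in pixels:
            continue  # already consumed by an earlier pruning pass
        branch, prev, cur = [start], None, start
        while len(branch) <= max_len:
            nbrs = [n for n in neighbors(cur, pixels) if n != prev]
            if len(nbrs) != 1:   # reached a junction or a dead end
                break
            prev, cur = cur, nbrs[0]
            branch.append(cur)
        # Prune only short branches that terminate at a junction,
        # keeping the junction pixel itself.
        if len(branch) <= max_len and len(neighbors(branch[-1], pixels)) >= 3:
            pixels -= set(branch[:-1])
    return pixels
```

On a horizontal line with a two-pixel spur, the spur is removed and the line survives untouched.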
As per claim 25, Kono in view of Fuchs teaches all the claim elements previously stated in claim 19's 103 rejection; see claim 19's 103 rejection. Kono further teaches, in Figure 18 and its description in paragraph [0104], that an endpoint is detected as a branch point; if the segment is shorter than the threshold, that branch portion is removed. The contour is effectively traversed to a branch point (junction) and then traversed to an endpoint, where the shortest segment (closest determined endpoint) from the junction is deleted. Accordingly, it would have been obvious to one of ordinary skill in the art at the time the invention was effectively filed to arrive at the concept of, after detecting junctions, traversing along each branch to its endpoint and then removing from consideration the endpoint closest to the junction, eliminating small, clinically irrelevant branches or nubs while retaining the longer branch endpoints for improved/true AOI completion. This would be a straightforward and streamlined application of standard skeleton graph traversal techniques.

As per claim 28, Kono in view of Fuchs teaches all the claim elements previously stated in claim 19's 103 rejection; see claim 19's 103 rejection. Claim 28 is rejected under the same claim limitation rejections described for claim 19, specifically with regard to claim 19's limitation "drawing in the image a segment connecting the two endpoints in the selected pair obtaining a contour enclosing an area". There is no delineating difference between this specific limitation in claim 19 and the limitations in dependent claim 28.

As per claim 33, Kono in view of Fuchs teaches all the claim elements previously stated in claim 19's 103 rejection; see claim 19's rejection. Both Kono (Figures 24 and 25) and Fuchs (Figure 10) teach computer implementation (i.e., a "system…comprising: one or more processors; and one or more storage devices" that executes claim 19's method/limitations). A person of ordinary skill in the art would naturally use a computer, which has these functions inherently, to perform the method claimed.

As per claim 35, Kono in view of Fuchs teaches all the claim elements previously stated in claim 19's 103 rejection; see claim 19's 103 rejection. A non-transitory computer-readable medium executing instructions for claim 19's method via a processor is discussed by both Kono (Figure 24 label 460) and Fuchs (Description, section C, "Computer network environment", paragraph 101).

Claim 23 is rejected under 35 U.S.C. 103 as being unpatentable over Kono et al. in view of Fuchs et al. in further view of Qureshi et al. (Qureshi hereinafter, US 7627157). Kono in view of Fuchs teaches all the claim elements previously stated in claim 19's 103 rejection; see claim 19's 103 rejection. Neither Kono nor Fuchs discloses skeletonizing an image before detecting endpoints. Qureshi teaches skeletonizing images before detecting endpoints (Figure 10 and Summary paragraph [0025]: "generating a skeleton of the segmented image of the FU, and determining from the skeleton at least one end point of the FU"; to clarify, "FU" stands for "follicular unit"). Accordingly, it would have been obvious to one of ordinary skill in the art at the time the invention was effectively filed to modify the combined method of Kono and Fuchs by skeletonizing the AOI contour image before detecting endpoints, as shown by Qureshi. A person of ordinary skill in the art would have recognized, and would have been motivated by, the fact that this modification both simplifies and stabilizes endpoint detection, since each contour branch becomes a single-pixel path with well-defined terminuses.
It also yields more reliable endpoints for the scoring concept and connection logic described in the Kono/Fuchs method, thereby improving the quality of the completed AOI contours used later for tissue intersection and size-threshold filtering.

Claim 26 is rejected under 35 U.S.C. 103 as being unpatentable over Kono et al. in view of Fuchs et al. in further view of the Department of Artificial Intelligence at the University of Edinburgh's Hypermedia Image Processing Reference (hereinafter Edinburgh's HIPR). Kono in view of Fuchs teaches all the claim elements previously stated in claim 19's 103 rejection; see claim 19's 103 rejection. Neither Kono nor Fuchs teaches using a hit-or-miss kernel on an image to detect endpoints or junctions. Edinburgh's HIPR teaches using a hit-or-miss kernel to find both endpoints and junctions (Figure 4, under "Guidelines for use": "Some applications of the hit-and-miss transform. 1 is used to locate isolated points in a binary image. 2 is used to locate the end points on a binary skeleton…. 3a, 3b, and 3c are the kernels used to locate the triple points (junctions) on a skeleton."). Accordingly, it would have been obvious at the time the invention was effectively filed to modify the combined Kono and Fuchs method by using a hit-or-miss transform with a set of kernels for Kono's endpoint/junction detection, arriving at the subject matter of claim 26. A person of ordinary skill in the art would see the hit-or-miss transform as a straightforward addition to the combined pipeline. It gives the practitioner another way, besides skeletonizing, to find endpoints and junctions, yielding varied data with which to examine accuracy and a larger selection of image mapping variants to use in the learning model, increasing its efficiency.

Claim 27 is rejected under 35 U.S.C. 103 as being unpatentable over Kono et al. in view of Fuchs et al. in further view of M. Shpitalni et al. (hereinafter Shpitalni, "Classification of Sketch Strokes and Corner Detection using Conic Sections and Adaptive Clustering"). Neither Kono nor Fuchs teaches endpoints closer than a threshold being connected and removed before a pair of endpoints is selected using the score. Shpitalni teaches that endpoints closer than a threshold are connected (Entity Linking and Endpoint Clustering, paragraph 2: "In essence, the method is based on computing tolerance zones around each endpoint in the drawing, where the size of the zone corresponds to the uncertainty in the endpoint position. When the size of the gap between two endpoints is less than the expected error in placement of both the endpoints, it is likely that the endpoints were meant to coincide. Based on this reasoning, endpoint pairs are clustered when each member of the pair falls within the uncertainty zone of the other member… when two endpoints belonging to different groups are clustered, their associated groups are united. This procedure results in clusters of raw vertices. Each cluster will finally be represented by one vertex whose coordinates are at the average of centers." Essentially, points that are too close together fall into a tolerance zone (a radius correlated to a distance-linking threshold and/or a tolerance threshold) and are clustered together (connected). They are then converted into one endpoint, effectively removing the two previous endpoints. This can be seen visually in Figure 3.) Accordingly, a person of ordinary skill in the art would have further modified the Kono/Fuchs workflow to incorporate Shpitalni's tolerance-zone endpoint clustering. A person of ordinary skill in the art would see this as advantageous because endpoints closer than the threshold can be dropped from the dataset before scoring.
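The endpoint/junction kernels HIPR describes can be approximated on a skeletonized contour by counting 8-neighbours: a true hit-or-miss transform matches explicit 3x3 foreground/background patterns, whereas this sketch uses the neighbour count those endpoint and triple-point kernels effectively encode. The function name is hypothetical.

```python
def classify_skeleton_points(pixels):
    """On a 1-pixel-wide (skeletonized) contour given as a set of (x, y)
    pixels, classify each pixel by its 8-neighbour count:
    1 neighbour -> endpoint, 3 or more -> junction (triple point)."""
    pixels = set(pixels)

    def degree(p):
        x, y = p
        return sum((x + dx, y + dy) in pixels
                   for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                   if (dx, dy) != (0, 0))

    endpoints = {p for p in pixels if degree(p) == 1}
    junctions = {p for p in pixels if degree(p) >= 3}
    return endpoints, junctions
```

A straight skeleton segment yields two endpoints and no junctions; a Y-shaped skeleton yields three endpoints and one junction, matching the triple-point kernels described in HIPR.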
By deleting the obvious non-candidates (those that are too close), a person of ordinary skill in the art would reserve scoring-based selection for the more ambiguous, non-trivially separated endpoint pairs in the tissue slide's AOI. This in turn would reduce time and conserve computational resources.

Claim 29 is rejected under 35 U.S.C. 103 as being unpatentable over Kono et al. in view of Fuchs et al. in further view of Thiagarajan et al. (hereinafter Thiagarajan, "Explanation and Use of Uncertainty Quantified by Bayesian Neural Network Classifiers for Breast Histopathology Images"). Neither Kono nor Fuchs teaches determining a confidence value for the drawn-in image and displaying the drawn-in image to a user for confirmation if a comparison between the confidence value and a threshold value shows low confidence. Thiagarajan teaches determining a confidence value (Section 2.3.3, Uncertainty Quantification: "The variance of the predictive distribution can be calculated by Eq. 8 which provide us with the confidence of the network in making predictions for a given image."; Section 3.5: "The uncertainty values associated with an individual image denote the confidence with which the network predicts the class label for that image") and displaying the drawn-in image to a user for confirmation if a comparison between the confidence value and a threshold value shows low confidence (Section 3.5, Use of uncertainty to improve performance: "the Bayesian approach provides an avenue to identify which images should be referred back to a human expert." and "if the threshold value is 0.6, the low uncertainty subset contains 77% of the test data and its accuracy is 94.6% which is about 6% improvement in accuracy over the entire test data set. The remaining 23% of the test data, which has uncertainty higher than this threshold (0.6) may be referred to a human expert for a more accurate prediction").

As for displaying to the user, Figures 8 and 9 show images displayed on a computer. These images show their uncertainties (confidence values), and any falling below the threshold will be "referred to a human expert for a more accurate prediction". Since those images were previously processed and learned via a computerized system with a display, it is safe to conclude that when a human needs to confirm, or make a more accurate decision, the image will be displayed to them in some way. A person of ordinary skill in the art would be motivated to further modify the Kono/Fuchs pipeline with Thiagarajan because, when the Kono/Fuchs pipeline attempts to create masks, noisy and incorrect contour completion will lead to mislabeling of regions and increase false negatives and false positives downstream. The practitioner would realize that determining a confidence value for each AOI and comparing it to a threshold routes only the plausibly incorrect images for review. The human intervention prevents the system from blindly using these plausibly mismatched images, or from automatically discarding an image that might have been useful. This added failsafe improves the reliability of the AOI and reduces the risk of passing erroneous annotations into downstream computations.

Claim 30 is rejected under 35 U.S.C. 103 as being unpatentable over Kono et al. in view of Fuchs et al. in further view of Farahani et al. (hereinafter Farahani, "Three-dimensional Imaging and Scanning: Current and Future Applications for Pathology"). Kono in view of Fuchs teaches all the claim elements previously stated in claim 19's 103 rejection; see claim 19's rejection.
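The refer-back-to-expert gating quoted from Thiagarajan reduces to a threshold comparison per image. A minimal sketch: the 0.6 default mirrors the threshold in the quoted example (it is data-set dependent), and `triage` is a hypothetical name.

```python
def triage(image_ids, uncertainties, threshold=0.6):
    """Split predictions into auto-accepted and refer-to-expert lists by
    comparing each image's uncertainty to a threshold: images whose
    uncertainty exceeds it are routed to a human for confirmation."""
    auto, review = [], []
    for img, u in zip(image_ids, uncertainties):
        (review if u > threshold else auto).append(img)
    return auto, review
```

In the combined pipeline, the `review` list would be the drawn-in images displayed to the user for confirmation.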
Kono nor Fuchs teach 3D printing design for a 3D printed mask from the closed contours, for application on the tissue, the mask comprising barriers that define a cavity surrounding an area of interest on the tissue section, the cavity being open at a side facing the area of interest on the tissue section. Farahani teaches 3D printing design for a 3D printed mask (Introduction “Within the context of pathology, volumetric display, and mesh reconstruction techniques are particularly alluring for examination of clinical tissue specimens. 3D imaging could also enhance the study of disease processes… emphasis on techniques that enable 3D histopathologic reconstruction, such as serial 2D scanning, 3D scanning, and whole slide imaging (WSI)… applications of current and novel 3D imaging methods within the context of pathology are also addressed.” Whole slide imaging section and Figure 1 “These serial digital images are then run through commercially available or custom software, to generate 3D models [Figure 1]… Examples of WSI-compatible 3D reconstruction software include Voloom (microDimensions, Munich, Germany) and Image-Pro Premier 3D (Media Cybernetics, Rockville, MD, USA)…reconstruction software involves the following steps: Registration, segmentation, interpolation, and volumetric rendering” Laser scanning section, “Data collected from the scanner is then used to construct a 3D mesh, which can be printed using various additive manufacturing (3D printing) methods” Table 3, Table 4 regarding what can be printed and file formatting respectively, Practical Applications section, “Research and clinical pathology both use 3D reconstruction of whole slide images. 
Recent clinical examples include classification of lung adenocarcinomas, diagnosis of colorectal pathologies… used for several reasons, including the modeling of intricate anatomical structures, planning of complex surgical procedures… investigations into the use of 3D printing in anatomic pathology have driven the use of 3D scanners for gross surgical specimen capture”).

Farahani teaches that whole slide images can be scanned (if not already in a database), segmented (mask), and converted into a 3D design in order to print various 3D structures for various non-limiting procedures in the medical field. Accordingly, a person of ordinary skill in the art would have continued to alter the Kono/Fuchs pipeline to further generate a 3D design for a 3D printed mask from the resulting closed contours, as recited in claim 30. Kono and Fuchs already close AOI outlines on a tissue section image, restricted to tissue regions, that precisely localize where an intervention should occur. Farahani describes, in the context of pathology, that whole slide imaging and three-dimensional reconstruction software are used not only to visualize tissue in 3D but also, in non-limiting examples, to derive 3D printed masks and guides for surgical procedures. The specific geometry of the 3D printed structure, such as “barriers that define a cavity surrounding an area of interest on the tissue section” and “the cavity being open at a side facing the area of interest on the tissue section”, reflects design choices that the 3D printing software can handle. A skilled person in the art would treat the completed AOI contours from Kono/Fuchs as input boundaries for reconstruction and would generate a 3D mask whose barriers and cavities follow those AOI boundaries.
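The geometric step described above (treating a completed AOI contour as the boundary for barrier walls that surround the area of interest, leaving the cavity open on the tissue-facing side) can be sketched under heavy simplification. This is purely illustrative and comes from no cited reference; the centroid-based offset and all names (`barrier_walls`, `wall_thickness`, `wall_height`) are hypothetical stand-ins for a real polygon-offsetting and mesh-export pipeline.

```python
# Illustrative sketch only: deriving simple barrier-wall geometry for a
# 3D printed mask from a closed AOI contour. The centroid-scaling
# "offset" is a deliberate simplification; a real CAD pipeline would use
# proper polygon offsetting and mesh export (e.g. to STL).

def barrier_walls(contour, wall_thickness=1.0, wall_height=5.0):
    """Return (inner_loop, outer_loop, height) describing a wall that
    follows the closed contour; the enclosed cavity stays open on the
    side facing the tissue section."""
    n = len(contour)
    cx = sum(x for x, _ in contour) / n
    cy = sum(y for _, y in contour) / n
    outer = []
    for x, y in contour:
        dx, dy = x - cx, y - cy
        d = (dx * dx + dy * dy) ** 0.5 or 1.0  # avoid division by zero
        # push each vertex outward from the centroid by the wall thickness
        outer.append((x + dx / d * wall_thickness,
                      y + dy / d * wall_thickness))
    return contour, outer, wall_height

inner, outer, height = barrier_walls([(0, 0), (10, 0), (10, 10), (0, 10)])
```

In a real workflow the two loops and the wall height would be extruded into a watertight mesh and exported for additive manufacturing, which is the kind of step Farahani attributes to commercial reconstruction software.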
A person of ordinary skill in the art would realize that if they already have a high-quality completed AOI contour on a WSI (Kono/Fuchs), and the field is already using WSI-derived 3D reconstructions to create 3D printed masks, then the AOI contours should be used as design boundaries for a 3D printed mask that can be physically aligned to the same tissue section. This would give the practitioner one-to-one spatial correspondence (since the mask is derived directly from the same AOI contours used in the previous pipeline), repeatable and more precisely localized treatment sampling, reduced error, and tight-knit integration between digital and physical workflows. The flow from a completed AOI contour into a 3D reconstruction and printing pipeline to make a tissue mask is a natural progression.

Allowable Subject Matter

Claims 31, 32 and 34 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHANE WRENSFORD CODRINGTON, whose telephone number is (571) 272-8130. The examiner can normally be reached 8:00am-5:00pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Matthew Bella, can be reached at (571) 272-7778. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center.
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /SHANE WRENSFORD CODRINGTON/ Examiner, Art Unit 2667 /TOM Y LU/ Primary Examiner, Art Unit 2667

Prosecution Timeline

Dec 11, 2023
Application Filed
Dec 12, 2025
Non-Final Rejection — §103 (current)


Prosecution Projections

1-2
Expected OA Rounds
100%
Grant Probability
0%
With Interview (-100.0%)
2y 9m
Median Time to Grant
Low
PTA Risk
Based on 1 resolved case by this examiner. Grant probability is derived from the examiner's career allow rate.
