Prosecution Insights
Last updated: April 19, 2026
Application No. 18/711,053

MACHINE LEARNING TECHNIQUES FOR TERTIARY LYMPHOID STRUCTURE (TLS) DETECTION

Non-Final OA — §103, §DP
Filed
May 16, 2024
Examiner
KAUR, JASPREET
Art Unit
2662
Tech Center
2600 — Communications
Assignee
BostonGene Corporation
OA Round
1 (Non-Final)
Grant Probability: 81% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 8m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 81% — above average (13 granted / 16 resolved; +19.3% vs TC avg)
Interview Lift: +30.0% on resolved cases with interview (strong)
Typical Timeline: 2y 8m average prosecution; 31 applications currently pending
Career History: 47 total applications across all art units

Statute-Specific Performance

§101: 17.2% (-22.8% vs TC avg)
§103: 53.2% (+13.2% vs TC avg)
§102: 7.4% (-32.6% vs TC avg)
§112: 15.3% (-24.7% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 16 resolved cases

Office Action

§103 §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Acknowledgement is made of Applicant's claim that this application is a National Stage application of PCT Application No. PCT/US2023/013050, filed on February 14, 2023, and of the claim of priority to US provisional application 63/310072, filed on February 14, 2022.

Information Disclosure Statement

The information disclosure statements ("IDS") filed on 05/16/2024 and 11/27/2024 (containing co-pending applications) have been reviewed and the listed references have been considered. The NPL reference in the 11/27/2024 IDS, Loshchilov et al., "Decoupled weight decay regularization," is not considered because the copy provided does not correspond to the version cited in the IDS. The IDS has been placed on the record, but Loshchilov et al. has not been considered.

Drawings

The 31-page drawings have been considered and placed on record in the file.

Status of Claims

Claims 1-2, 4, 10-16, 18-20, 28-29, and 32-34 are pending. Claims 3, 5-9, 17, 21-27, and 30-31 are cancelled.

Claim Objections

Claims 4, 13-15, and 20 are objected to because of the following informalities:

Claim 4 recites "wherein the image is…"; this should read "wherein the image of the tissue is…"
Claims 13-15 recite "…having an input couple to the output of…"; this should read "…having an input couple to an output of…"
Claim 20 recites "at least one feature selected from the group consisting of…"; this should read "at least one feature selections from a group consisting of…"

Appropriate corrections are required.

Claim Interpretation

Claim 4 has been given the following interpretation under the broadest reasonable interpretation in light of the specification. Claim 4 recites an arbitrary value of pixels per channel.
The specification does not explain the significance of the value "10,000 by 10,000 pixel values per channel." However, the specification does state: "For example, the image 106 may have at least 100,000x100,000 pixel values per channel, 75,000x75,000 pixel values per channel, 50,000x50,000 pixel values per channel, 25,000x25,000 pixel values per channel, 10,000x10,000 pixel values per channel, 5,000x5,000 pixel values per channel or any other suitable number of pixels per channel. The dimensions of image 106 may be within any suitable range such as, for example, 50,000-500,000 x 50,000-500,000 pixel values per channel, 25,000-1 million x 25,000-1 million pixel values per channel, 5,000-2 million x 5,000-2 million pixel values per channel, or any other suitable range within these ranges" (page 13, lines 3-12). These values are illustrated by way of example, and one of ordinary skill in the art may consider any number of pixels per channel. For completeness and compact prosecution, the Examiner has interpreted claim 4 as an RGB/colored image containing one or more pixels in each channel.

Claim 10 has been given the following interpretation under the broadest reasonable interpretation in light of the specification. Claim 10 recites an arbitrary number of parameters for a neural network. The specification does not explain the significance of the values "at least 10 million, at least 25 million, at least 50 million, or at least 100 million parameters."
However, the specification does state: "In some embodiments, the neural network 300 may include at least 1 million parameters, at least 5 million parameters, at least 10 million parameters, at least 15 million parameters, at least 20 million parameters, at least 25 million parameters, at least 30 million parameters, at least 50 million parameters, at least 75 million parameters, at least 100 million parameters, at least 150 million parameters, between 5 and 100 million parameters, between 25 and 75 million parameters, between 30 and 50 million parameters or any other range within these ranges" (page 31, lines 24-30). These values are illustrated by way of example, and one of ordinary skill in the art may consider any number of parameters for a trained neural network. For completeness and compact prosecution, the Examiner has interpreted claim 10 as a neural network trained with a variety of parameters.

Under MPEP 2143.03, "All words in a claim must be considered in judging the patentability of that claim against the prior art." In re Wilson, 424 F.2d 1382, 1385, 165 USPQ 494, 496 (CCPA 1970). As a general matter, the grammar and ordinary meaning of terms as understood by one having ordinary skill in the art used in a claim will dictate whether, and to what extent, the language limits the claim scope. Language that suggests or makes a feature or step optional but does not require that feature or step does not limit the scope of a claim under the broadest reasonable claim interpretation. In addition, when a claim requires selection of an element from a list of alternatives, the prior art teaches the element if one of the alternatives is taught by the prior art. See, e.g., Fresenius USA, Inc. v. Baxter Int'l, Inc., 582 F.3d 1288, 1298, 92 USPQ2d 1163, 1171 (Fed. Cir. 2009).
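Since the claim 10 interpretation turns on raw parameter counts, a back-of-the-envelope sketch may help. The layer widths below are hypothetical, chosen only to illustrate how ordinary convolutional and dense layers accumulate parameters toward the recited "at least 10 million" range; they are not from the application or any cited reference:

```python
def conv_params(in_ch, out_ch, k):
    """Parameters in a 2-D convolution: one k x k kernel per
    (input channel, output channel) pair, plus one bias per output channel."""
    return in_ch * out_ch * k * k + out_ch

def dense_params(in_f, out_f):
    """Parameters in a fully connected layer: weight matrix plus biases."""
    return in_f * out_f + out_f

# Hypothetical layer sizes for illustration only.
total = (
    conv_params(3, 64, 3)          # RGB input -> 64 feature maps
    + conv_params(64, 128, 3)
    + conv_params(128, 256, 3)
    + dense_params(256 * 8 * 8, 512)
    + dense_params(512, 2)         # e.g., TLS / non-TLS logits
)
print(total)  # 8,760,962 parameters — already approaching 10 million
```

Note that a single modest dense layer dominates the count, which is why segmentation networks of the kind discussed here routinely reach tens of millions of parameters.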
Claim 20 recites "identifying at least one feature selected from the group consisting of" and then lists: "a number of TLSs in at least the portion of the image"; "the number of TLSs in at least the portion of the image normalized by area of at least the portion of the image"; "a total area of TLSs in at least the portion of the image"; "the total area of TLSs in at least the portion of the image normalized by the area of at least the portion of the image"; "median area of TLSs in at least the portion of the image"; and "the median area of TLSs in at least the portion of the image normalized by the area of at least the portion of the image." Since "at least one feature selected from the group consisting of" is disjunctive, any one of the elements found in the prior art is sufficient to reject the claim. While citations have been provided for completeness and compact prosecution, only one element is required. On balance, the disjunctive interpretation appears to enjoy the most specification support, and for that reason the disjunctive interpretation (one of A, B, or C) is adopted for the purposes of this Office Action. Applicant's comments and/or amendments relating to this issue are invited to clarify the claim language and the prosecution history.

Claim 28 is similarly interpreted as one of A, B, or C, as the disjunctive interpretation enjoys the most specification support; therefore any one of the elements found in the prior art is sufficient to reject the claim.

Claim 32 has been given the following interpretation under the broadest reasonable interpretation in light of the specification. Claim 32 recites an arbitrary percentage of an image. The specification does not explain the significance of the values "at least 75%, at least 80%, at least 90%, at least 95%, at least 99%, or 100%."
However, the specification does state: "The portion of the image covered by the set of overlapping sub-images may include any suitable portion such as, for example, at least 5% of the image, at least 10%, at least 25%, at least 50%, at least 60%, at least 75%, at least 80%, at least 90%, at least 95%, at least 98%, 100%, between 5% and 100%, between 25% and 80%, or any other suitable portion, as aspects of the technology are not limited in this respect" (page 21, lines 5-9). These percentages are illustrated by way of example, and one of ordinary skill in the art may consider any part of an image as a portion of the image. For completeness and compact prosecution, the Examiner has interpreted claim 32 as any section of an image, in part or in whole.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory obviousness-type double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. Effective January 1, 1994, a registered attorney or agent of record may sign a terminal disclaimer. A terminal disclaimer signed by the assignee must fully comply with 37 CFR 3.73(b). The USPTO Internet website contains terminal disclaimer forms which may be used; please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-2, 4, 10-16, 18-20, 28-29, and 32-34 are provisionally rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over claims of co-pending Application No. 18/918,401. Although the claims at issue are not identical, they are not patentably distinct from each other because the claims of the instant application are obvious variants of the corresponding claims in the co-pending application, in view of Li (WO 2021/236547 A1), in further view of Bernard et al. (US 2008/0037853 A1). This is a provisional nonstatutory obviousness-type double patenting rejection because the patentably indistinct claims have not in fact been patented.
For example, the following compares claim 1 of the instant application (18/711,053) to claim 1 of co-pending U.S. Application No. 18/918,401. The co-pending U.S. Application 18/918,401 discloses a method of identifying a tertiary lymphoid structure (TLS) using pixel-wise analysis of an image, identifying boundaries of the TLS identified using the mask, and identifying features of the TLS using the boundary and image. However, the co-pending application does not disclose "obtaining a set of overlapping sub-images of the image of tissue" or "determining a pixel-level mask for at least a portion of the image of the tissue covered by at least some of the sub-images in the set of overlapping sub-images".

However, Li teaches "determining a pixel-level mask (Li paragraph [0037]: "The digital pathology image processing system can generate a mask, or other pattern storage data structure, from recognized patterns") for at least a portion of the image of the tissue covered by at least some of the sub-images in the set of overlapping sub-images". It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to combine the method of identifying a tertiary lymphoid structure (TLS) using pixel-wise analysis of an image, identifying boundaries of the TLS identified using the mask, and identifying features of the TLS using the boundary and image of claim 1 of U.S. Application 18/918,401 to include image masking to identify a biological object as taught by Li. The suggestion/motivation for doing so would have been "However, this image-level approach can strip detail from the image, which may impede detecting details of a depicted circumstance and/or environment.
This simplification can be particularly impactful in the digital pathology context, as the current or potential future activity of particular types of cells can heavily depend on a microenvironment. Therefore, it would be advantageous to develop techniques to process digital pathology images to generate an output reflective of a spatial characterization of depicted biological objects" as noted by the Li disclosure in paragraphs 4 and 5.

The combination of the co-pending U.S. Application 18/918,401 and Li does not disclose "obtaining a set of overlapping sub-images of the image of tissue". However, Bernard teaches "obtaining a set of overlapping sub-images of the image of tissue (Bernard paragraph [0073]: "The control logic unit applies a sliding-window filtering to each region of pixels in order to compute the set of mean values of the grey levels in an immediate environment of each region. This type of sliding-window filtering is described with reference to FIG. 8")". It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to combine the method of identifying a tertiary lymphoid structure (TLS) using pixel-wise analysis of an image, identifying boundaries of the TLS identified using the mask, and identifying features of the TLS using the boundary and image of claim 1 of U.S. Application 18/918,401 and Li to include overlapping sub-images as taught by Bernard. The suggestion/motivation for doing so would have been "This algorithm for computation of the background intensity is implemented by means of sliding-window filtering. This type of sliding-window filtering computes all the mean values of grey levels in an immediate environment of the region of elements. The algorithm determines a background intensity to be assigned to the region of elements from the set of mean values of grey levels computed in this immediate environment. This determining is done as a function of criteria enabling an improved estimation of the contrast of the regions of elements. Preferably, the background intensity is estimated as the minimum mean value of grey levels, measured in the immediate environment of the region of elements. The use of these sliding windows to compute the background intensity facilitates the algorithm while at the same time reducing the computation time in an embodiment of a method of the invention" as noted by the Bernard disclosure in paragraphs 24 and 24.

Therefore, it would have been obvious to combine the disclosure of U.S. Application 18/918,401 and Li with the Bernard disclosure to obtain the invention as specified in instant application claim 1, as there is a reasonable expectation of success and/or because doing so merely combines prior art elements according to known methods to yield predictable results. Claims 2, 4, 10-16, 18-20, 28-29, and 32-34 are similarly rejected under nonstatutory obviousness-type double patenting as being unpatentable over claims of co-pending U.S. Application 18/918,401 in view of Li and in further view of Bernard.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-2, 4, 10, 18, 20, 28-29, and 32-34 are rejected under 35 U.S.C. 103 as being unpatentable over Li (WO 2021/236547 A1 — from IDS dated 05/16/2024) in view of Bernard et al. (US 2008/0037853 A1).
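For orientation before the claim-by-claim mapping, the pipeline recited in claim 1 — obtaining overlapping sub-images, running each through a trained model to get per-pixel TLS probability masks, and merging those into one pixel-level mask (cf. the element-wise weighting later recited in claim 18) — can be sketched in outline. The tile size, stride, uniform weighting, and the stand-in `model` callable are illustrative assumptions, not the applicant's disclosed implementation or Li's system:

```python
import numpy as np

def tile_image(img, tile=256, stride=128):
    """Yield overlapping sub-images with their top-left offsets;
    a stride smaller than the tile size produces the overlap."""
    h, w = img.shape[:2]
    for y in range(0, max(h - tile, 0) + 1, stride):
        for x in range(0, max(w - tile, 0) + 1, stride):
            yield (y, x), img[y:y + tile, x:x + tile]

def merge_masks(shape, offsets, masks, weights):
    """Combine per-tile probability masks into one pixel-level mask,
    weighting element-wise and normalizing where tiles overlap."""
    acc = np.zeros(shape)
    norm = np.zeros(shape)
    for (y, x), m, w in zip(offsets, masks, weights):
        th, tw = m.shape
        acc[y:y + th, x:x + tw] += m * w
        norm[y:y + th, x:x + tw] += w
    return np.divide(acc, norm, out=np.zeros_like(acc), where=norm > 0)

# Stand-in for the trained segmentation model: any callable mapping a
# sub-image to per-pixel TLS probabilities would slot in here.
model = lambda sub: np.full(sub.shape[:2], 0.5)

img = np.zeros((512, 512, 3))   # hypothetical 512 x 512 RGB tissue image
offsets, masks = [], []
for off, sub in tile_image(img):
    offsets.append(off)
    masks.append(model(sub))
weights = [np.ones(m.shape) for m in masks]  # uniform; could taper near edges
mask = merge_masks(img.shape[:2], offsets, masks, weights)
```

Boundary and feature extraction (the remaining claim 1 steps) would then operate on the thresholded `mask`; the weighting matrices here are uniform, but any element-wise scheme fits the claim 18 language.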
Regarding claim 1, Li teaches "A method for using a trained neural network model to identify at least one tertiary lymphoid structure (TLS) (Li paragraph [0027]: "processing of digital pathology images can be performed to estimate whether a given image includes depictions of a particular type or class of biological object") in an image of tissue obtained from a subject having, at risk of having, or suspected of having cancer (Li paragraph [0006]: "a computer-implemented method is provided that includes a digital pathology image processing system accessing a digital pathology image that depicts a section of a biological sample from a subject"), the method comprising: using at least one computer hardware processor (Li paragraph [0010]: "present disclosure include a system including one or more data processors") to perform: obtaining a set of overlapping sub-images of the image of the tissue (Bernard paragraph [0073]: "The control logic unit applies a sliding-window filtering to each region of pixels in order to compute the set of mean values of the grey levels in an immediate environment of each region. This type of sliding-window filtering is described with reference to FIG. 8"); processing the set of overlapping sub-images using the trained neural network model to obtain a respective set of pixel-level sub-image masks (Li paragraph [0037]: "The digital pathology image processing system can generate a mask, or other pattern storage data structure, from recognized patterns"), each of the set of pixel-level sub-image masks indicating, for each particular pixel of multiple individual pixels in a respective particular sub-image, a respective probability that the particular pixel is part of a tertiary lymphoid structure (Li paragraph [0036]: "A machine-learning model or rule can be used to generate a result corresponding, for example, to a diagnosis, prognosis, treatment evaluation, treatment selection, treatment eligibility (e.g., eligibility to be accepted or recommended for a clinical trial or a particular arm of a clinical trial), and/or prediction of a genetic mutation, gene alteration, biomarker expression levels (including, but not limited to genes or proteins), etc., using one or more metrics, which each correspond to a metric type of one or more metric types"); determining a pixel-level mask (Li paragraph [0037]: "The digital pathology image processing system can generate a mask, or other pattern storage data structure, from recognized patterns") for at least a portion of the image of the tissue covered by at least some of the sub-images in the set of overlapping sub-images (Li paragraph [0037]: "A digital pathology image processing system can further identify and learn to recognize patterns of locations and relationships of detected biological object depictions based in part on one or more spatial-distribution metrics. For example, the digital pathology image processing system can detect patterns of locations and relationships of detected biological object depictions in a digital pathology image of a first sample. The digital pathology image processing system can generate a mask, or other pattern storage data structure, from recognized patterns"); identifying boundaries of at least one TLS in at least the portion of the image using the pixel-level mask (Li paragraph [0074]: "Biological object detector sub-system 145 can use static rules and/or a trained model to detect and characterize biological objects. Rules-based biological object detection can include detecting one or more edges, identifying a subset of edges that are sufficiently connected and closed in shape, and/or detecting one or more high-intensity regions or pixels. A portion of a digital pathology image can be determined to depict a biological object if, for example, an area of a region within a closed edge is within a predefined range and/or if a high intensity region has a size within a predefined range"); and identifying one or more features of the at least one TLS using the identified boundaries (Li paragraph [0047]: "the first type of biological object can include a first class of biological object defined, for example, by feature characteristics of a first type (e.g., size, shape, color, expected behavior, texture, of the biological object or a component or compartment of the biological object) and the second type of biological object can include a second class of biological object defined, for example by feature characteristics of a second type or feature characteristics of a variation of the first type") and at least the portion of the image (Li paragraph [0048]: "generating the one or more spatial-distribution metrics of the first type can include identifying, for each first biological object depiction of the one or more first biological object depiction, a first point location within the one or more digital pathology images")".

However, Li is not relied on to teach "obtaining a set of overlapping sub-images of the image of tissue". Bernard teaches "obtaining a set of overlapping sub-images of the image of tissue (Bernard paragraph [0073]: "The control logic unit applies a sliding-window filtering to each region of pixels in order to compute the set of mean values of the grey levels in an immediate environment of each region. This type of sliding-window filtering is described with reference to FIG. 8")". It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to combine the method of identifying a tertiary lymphoid structure (TLS) using pixel-wise analysis of an image as taught by Li to include analysis using a sliding window as taught by Bernard. The suggestion/motivation for doing so would have been "This algorithm for computation of the background intensity is implemented by means of sliding-window filtering. This type of sliding-window filtering computes all the mean values of grey levels in an immediate environment of the region of elements. The algorithm determines a background intensity to be assigned to the region of elements from the set of mean values of grey levels computed in this immediate environment. This determining is done as a function of criteria enabling an improved estimation of the contrast of the regions of elements. Preferably, the background intensity is estimated as the minimum mean value of grey levels, measured in the immediate environment of the region of elements. The use of these sliding windows to compute the background intensity facilitates the algorithm while at the same time reducing the computation time in an embodiment of a method of the invention" as noted by the Bernard disclosure in paragraphs 24 and 24.
Therefore, it would have been obvious to combine the disclosure of Li with the Bernard disclosure to obtain the invention as specified in claim 1, as there is a reasonable expectation of success and/or because doing so merely combines prior art elements according to known methods to yield predictable results.

Regarding claim 2, the combination of Li and Bernard teaches "The method of claim 1, wherein the image of the tissue (Li paragraph [0173]: "Tissue samples were collected at baseline. For each subject in each treatment arm, digital pathology (e.g., H&E pathology) images can be captured of baseline tissue samples") is a whole slide image (WSI) (Li paragraph [0173]: "Regions relating to one or more depictions of biological objects on the digital pathology images (also referred to as whole slide images or "WSI") were annotated")."

Regarding claim 4, the combination of Li and Bernard teaches "The method of claim 1 or any other preceding claim, wherein the image is a three-channel image (Li paragraph [0065]: "When multiple stains are used, the stains can be selected to have different color profiles, such that a first region of an image corresponding to a first section portion that absorbed a large amount of a first stain can be distinguished from a second region of the image (or a different image) corresponding to a second section portion that absorbed a large amount of a second stain") comprising at least 10,000 by 10,000 pixel values per channel (Li paragraphs [0056] and [00108]: "The term "biological object depiction," as referred to herein, can refer to a particular portion of an image (e.g., one or more pixels, a defined regions of the image, etc.) that is or has been identified as corresponding to a particular type of biological object" and "detected biological object depiction can correspond to a 5x5 square of pixels within the image under analysis")."

Regarding claim 10, the combination of Li and Bernard teaches "The method of claim 1, wherein the trained neural network model comprises at least 10 million, at least 25 million, at least 50 million, or at least 100 million parameters (Li paragraphs [0102] and [0103]: "Learned parameters (e.g., corresponding to one or more weights, thresholds, coefficients, etc.) can be stored in a ML model parameter data store 298" and "In particular embodiments, part or all of one or more sub-systems are trained using part or all of a same set of training data used to train a ML model (to thereby learn ML model parameters stored in ML model parameter data store 298)")."

Regarding claim 18, the combination of Li and Bernard teaches "The method of claim 1, wherein determining the pixel-level mask for at least the portion of the image covered by at least some of the sub-images comprises: determining weighting matrices (Li paragraph [0094]: "for each lattice region, an intensity metric can indicate and/or can be based on a quantity of biological object depictions of each of one or more types having point locations (e.g., for at least a threshold portion of the biological object depictions) within the region") for the at least some of the set of pixel-level sub-image masks (Li paragraph [0037]: "The digital pathology image processing system can generate a mask, or other pattern storage data structure, from recognized patterns"); and determining the pixel-level mask as a weighted combination of the pixel-level sub-image masks weighted, element-wise, by the respective weighting matrices (Li paragraph [0094]: "the intensity metrics can be normalized and/or weighted based on a total number of biological objects (e.g., of a given type) detected within the digital pathology image and/or for the sample; counts of biological objects of the given type detected in other samples; and/or a scale of the digital pathology image")."

Regarding claim 20, the combination of Li and Bernard teaches "The method of claim 1, wherein identifying the one or more features of the at least one TLS (Li paragraph [0047]: "the first type of biological object can include a first class of biological object defined, for example, by feature characteristics of a first type (e.g., size, shape, color, expected behavior, texture, of the biological object or a component or compartment of the biological object) and the second type of biological object can include a second class of biological object defined, for example by feature characteristics of a second type or feature characteristics of a variation of the first type") comprises identifying at least one feature selected from the group consisting of: a number of TLSs in at least the portion of the image, the number of TLSs in at least the portion of the image normalized by area of at least the portion of the image, a total area of TLSs in at least the portion of the image, the total area of TLSs in at least the portion of the image normalized by the area of at least the portion of the image, median area of TLSs in at least the portion of the image, the median area of TLSs in at least the portion of the image normalized by the area of at least the portion of the image (Li paragraph [0094]: "the intensity metrics can be normalized and/or weighted based on a total number of biological objects (e.g., of a given type) detected within the digital pathology image and/or for the sample; counts of biological objects of the given type detected in other samples; and/or a scale of the digital pathology image")."

Regarding claim 28, the combination of Li and Bernard teaches "The method of claim 1, wherein the cancer (Li paragraph [0055]: "the medical condition can be a type of cancer and/or the particular treatment can be an immune-checkpoint-blockade treatment") is lung adenocarcinoma, breast cancer, cervical squamous cell carcinoma, lung squamous cell carcinoma, head & neck squamous cell carcinoma, gastric adenocarcinoma, colorectal adenocarcinoma (Li paragraph [00161]: "regression machine-learning model can be trained predict, based on a digital pathology image of a biopsy section from a subject diagnosed with colorectal cancer"), liver adenocarcinoma, pancreatic adenocarcinoma, or melanoma (Li paragraph [0040]: "Patterns detected from digital pathology images can be associated with a context that includes, for example, the type of sample which the digital pathology image depicts (e.g., lung biopsy, liver tissue sample, blood sample, formalin fixed paraffin embedded specimen, frozen specimen, cell preparations obtained from surgical exhaeresis, biopsy procedures, including but not limited to core needle biopsy fine needle aspirate, etc., from various organs, tumors, and/or metastasis sites, etc.)")."

Regarding claim 29, the combination of Li and Bernard teaches "The method of claim 1 or any other preceding claim, further comprising: determining, based on the one or more features of the at least one TLS (Li paragraph [0027]: "processing of digital pathology images can be performed to estimate whether a given image includes depictions of a particular type or class of biological object"), to administer an immunotherapy to the subject (Li paragraph [00112]: "immunotherapy and/or checkpoint immune therapy can be identified as a treatment recommendation when spatial-distribution metrics indicate that lymphocytes are close to and/or co-localized with tumor cells"); and administering the immunotherapy to the subject (Li paragraph [0114]: "As a result of the recommendation, a treatment of the subject can be initiated, changed or halted.
For example, a recommended treatment can be initiated, and/or an approved treatment of a particular disease can be initiated in response to diagnosing the subject with the particular disease").” Regarding claim 32, the combination of Li and Bernard teaches “The method of claim 1, wherein at least the portion of the image includes at least 75%, at least 80%, at least 90%, at least 95%, at least 99%, or 100% of pixels of the image (Li paragraph [0056] and paragraph [0065] "The term "biological object depiction," as referred to herein, can refer to a particular portion of an image (e.g., one or more pixels, a defined region of the image, etc.) that is or has been identified as corresponding to a particular type of biological object" and "The image scanner 125 can capture the digital image at multiple levels of magnification (e.g., using a 10x objective, 20x objective, 40x objective, etc.). Manipulation of the image can be used to capture a selected portion of the sample at the desired range of magnifications").” Claim 33 recites a computer readable medium including computer executable instructions corresponding to the steps of the method recited in claim 1. Therefore, the recited instructions of the computer readable medium of claim 33 are mapped to the proposed combination in the same manner as the corresponding steps of the method claim 1. Additionally, the rationale and motivation to combine Li and Bernard presented in the rejection of claim 1 apply to this claim.
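The feature set recited in claim 20 (TLS count, total area, and median area, each optionally normalized by the area of the analyzed image portion) reduces to simple summary statistics over the detected regions. A minimal sketch, assuming a hypothetical `tls_features` helper and per-micron units chosen only for illustration:

```python
from statistics import median

def tls_features(tls_areas_um2, image_area_um2):
    """Summary features over detected TLS regions (illustrative helper).

    tls_areas_um2  -- list of per-TLS areas, e.g. in square microns
    image_area_um2 -- area of the analyzed image portion, same units
    """
    n = len(tls_areas_um2)
    total = sum(tls_areas_um2)
    med = median(tls_areas_um2) if n else 0.0
    return {
        "count": n,
        "count_per_area": n / image_area_um2,          # count normalized by area
        "total_area": total,
        "total_area_fraction": total / image_area_um2,  # total area normalized
        "median_area": med,
        "median_area_fraction": med / image_area_um2,   # median area normalized
    }

feats = tls_features([100.0, 300.0, 200.0], image_area_um2=10_000.0)
print(feats["count"], feats["total_area_fraction"], feats["median_area"])
```

The normalization by image-portion area is what makes features comparable across slides scanned at different sizes, which is the role the examiner assigns to Li's paragraph [0094].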
Finally, the combination of Li and Bernard teaches “At least one non-transitory computer readable storage medium storing processor executable instructions that, when executed by at least one processor, cause the at least one processor to perform the method (Li paragraph [0008] "system is provided that includes one or more data processors and a non-transitory computer readable storage medium containing instructions which, when executed on the one or more data processors, cause the one or more data processors to perform part or all of one or more methods disclosed herein") for using a trained neural network model to identify at least one tertiary lymphoid structure (TLS) in an image of tissue obtained from a subject having, at risk of having, or suspected of having cancer (Li paragraph [0056] "Biological object depictions can be identified using a machine-learning algorithm, one or more static rules, and/or computer-vision techniques")”. Claim 34 recites a system with elements corresponding to the steps of the method recited in claim 1. Therefore, the recited elements of this claim are mapped to the proposed combination in the same manner as the corresponding steps of method claim 1. Additionally, the rationale and motivation to combine the Li and Bernard references, presented in the rejection of claim 1, apply to this claim. Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Li and Bernard in view of Chen et al. ("Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation"). Regarding claim 11, the combination of Li and Bernard teaches the method of claim 1. However, the combination of Li and Bernard is not relied on to teach “an encoder sub-model, a decoder sub-model, and an auxiliary classifier sub-model”.
Chen teaches “an encoder sub-model, a decoder sub-model (Chen Figure 2 and page 4 paragraph 4 "Typically, the encoder-decoder networks contain (1) an encoder module that gradually reduces the feature maps and captures higher semantic information, and (2) a decoder module that gradually recovers the spatial information. Building on top of this idea, we propose to use DeepLabv3 [23] as the encoder module and add a simple yet effective decoder module to obtain sharper segmentations"), and an auxiliary classifier sub-model (Chen page 5 paragraph 3 "For the task of image classification, the spatial resolution of the final feature maps is usually 32 times smaller than the input image resolution and thus output stride = 32").” [Chen, Figure 2] It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to combine the method of identifying tertiary lymphoid structure (TLS) using pixel-wise analysis using sliding window as taught by Li and Bernard to include an encoder, decoder, and classification model as taught by Chen. The suggestion/motivation for doing so would have been “encoder-decoder models [21,22] lend themselves to faster computation (since no features are dilated) in the encoder path and gradually recover sharp object boundaries in the decoder path. Attempting to combine the advantages from both methods, we propose to enrich the encoder module in the encoder-decoder networks by incorporating the multi-scale contextual information" as noted by Chen disclosure in page 2 paragraph 1. Therefore, it would have been obvious to combine the disclosure of Li and Bernard with the Chen disclosure to obtain the invention as specified in claim 11 as there is a reasonable expectation of success and/or because doing so merely combines prior art elements according to known methods to yield predictable results.
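The atrous (dilated) convolution named in Chen's title, and used at several rates in parallel by its ASPP module, enlarges a kernel's receptive field by inserting zeros between its taps without adding parameters. A minimal NumPy sketch of the single-channel case; the `atrous_conv2d` helper is illustrative, not code from any cited reference:

```python
import numpy as np

def atrous_conv2d(x, kernel, rate):
    """'Valid' 2-D convolution of x with a dilation (atrous) rate.

    A rate of 1 is an ordinary convolution; rate > 1 spreads the
    kernel taps apart, growing the receptive field for free.
    """
    kh, kw = kernel.shape
    eff_h = kh + (kh - 1) * (rate - 1)  # effective (dilated) kernel height
    eff_w = kw + (kw - 1) * (rate - 1)
    H, W = x.shape
    out = np.zeros((H - eff_h + 1, W - eff_w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # strided slice picks exactly kh x kw samples from the window
            patch = x[i:i + eff_h:rate, j:j + eff_w:rate]
            out[i, j] = np.sum(patch * kernel)
    return out

x = np.arange(36, dtype=float).reshape(6, 6)
k = np.ones((3, 3))
print(atrous_conv2d(x, k, rate=1).shape)  # (4, 4)
print(atrous_conv2d(x, k, rate=2).shape)  # (2, 2)
```

The shrinking output at higher rates shows the larger effective kernel; real implementations pad the input so resolution is preserved.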
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Li, Bernard, and Chen in view of Cheng et al. ("HigherHRNet: Scale-Aware Representation Learning for Bottom-Up Human Pose Estimation"). Regarding claim 12, the combination of Li, Bernard, and Chen teaches the method of claim 11. However, the combination of Li, Bernard, and Chen is not relied on to teach “wherein the encoder sub-model comprises: a plurality of resolution-separation neural network portions and a plurality of resolution-fusion neural network portions”. Cheng teaches “The method of claim 11, wherein the encoder sub-model comprises: a plurality of resolution-separation neural network portions (Cheng Figure 2 and page 3 right hand column paragraph 2 "In every following stage, a new branch is added to current branches in parallel with 1/2 of the lowest resolution in current branches. As the network has more stages, it will have more parallel branches with different resolutions and resolutions from previous stages are all preserved in later stages") and a plurality of resolution-fusion neural network portions (Cheng page 3 left hand column paragraph 3 "High-Resolution Network (HRNet) [38, 40] is proposed as an efficient way to keep a high resolution pass throughout the network. HRNet [38, 40] consists of multiple branches with different resolutions. Lower resolution branches capture contextual information and higher resolution branches preserve spatial information.
With multi-scale fusions between branches, HRNet [38, 40] can generate high resolution feature maps with rich semantic").” [Cheng, Figure 2] It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to combine the method of identifying tertiary lymphoid structure (TLS) using pixel-wise analysis using sliding window and encoder decoder neural network as taught by Li, Bernard, and Chen to include separation of resolution and fusion as taught by Cheng. The suggestion/motivation for doing so would have been “HigherHRNet generates high-resolution heatmaps by a new high-resolution feature pyramid module. Unlike the traditional feature pyramid that starts from 1/32 resolution and uses bilinear upsampling with lateral connection to gradually increases feature map resolution to 1/4, high resolution feature pyramid directly starts from 1/4 resolution which is the highest resolution feature in the backbone and generates even higher-resolution feature maps with deconvolution" as noted by Cheng disclosure in page 2 left hand column paragraph 3. Therefore, it would have been obvious to combine the disclosure of Li, Bernard, and Chen with the Cheng disclosure to obtain the invention as specified in claim 12 as there is a reasonable expectation of success and/or because doing so merely combines prior art elements according to known methods to yield predictable results. Claims 13-16 are rejected under 35 U.S.C. 103 as being unpatentable over Li, Bernard, Chen, and Cheng in view of Han et al. (US 2020/0272841 A1).
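The resolution-separation and resolution-fusion portions the examiner maps to Cheng's HRNet discussion (claims 12-14) amount to keeping parallel feature maps at different resolutions and periodically exchanging information between them. A minimal sketch with plain resampling standing in for HRNet's learned transition convolutions (a simplification for illustration, not Cheng's implementation):

```python
import numpy as np

def fuse_branches(hi, lo):
    """One fusion step between a high- and a low-resolution branch.

    hi -- high-resolution map (2H x 2W); lo -- low-resolution map (H x W).
    Each branch receives a resampled copy of the other, so spatial
    detail (hi) and context (lo) are mixed while both resolutions are
    preserved for the next stage.
    """
    lo_up = lo.repeat(2, axis=0).repeat(2, axis=1)  # nearest upsample x2
    hi_down = hi[::2, ::2]                          # strided downsample x2
    return hi + lo_up, lo + hi_down

hi = np.ones((4, 4))
lo = np.full((2, 2), 10.0)
new_hi, new_lo = fuse_branches(hi, lo)
print(new_hi.shape, new_lo.shape)  # (4, 4) (2, 2)
```

Both output resolutions match their inputs, which is the property that lets later stages add further parallel branches at half the lowest current resolution.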
Regarding claim 13, the combination of Li, Bernard, Chen, and Cheng teaches “The method of claim 11, wherein the encoder sub-model comprises: a first resolution-separation neural network portion having an input coupled to the output of the bottleneck neural network portion (Cheng Figure 2 and page 3 right hand column paragraph 2 "In every following stage, a new branch is added to current branches in parallel with 1/2 of the lowest resolution in current branches. As the network has more stages, it will have more parallel branches with different resolutions and resolutions from previous stages are all preserved in later stages"); a first resolution-fusion neural network portion having an input coupled to the output of the first resolution-separation neural network portion (Cheng page 3 left hand column paragraph 3 "High-Resolution Network (HRNet) [38, 40] is proposed as an efficient way to keep a high resolution pass throughout the network. HRNet [38, 40] consists of multiple branches with different resolutions. Lower resolution branches capture contextual information and higher resolution branches preserve spatial information. With multi-scale fusions between branches, HRNet [38, 40] can generate high resolution feature maps with rich semantic"); a second resolution-separation neural network portion having an input coupled to the output of the first resolution-fusion neural network portion (Cheng Figure 2 and page 3 right hand column paragraph 2 "In every following stage, a new branch is added to current branches in parallel with 1/2 of the lowest resolution in current branches.
As the network has more stages, it will have more parallel branches with different resolutions and resolutions from previous stages are all preserved in later stages"); and a second resolution-fusion neural network portion having an input coupled to the output of the second resolution-separation neural network portion (Cheng page 3 left hand column paragraph 3 "High-Resolution Network (HRNet) [38, 40] is proposed as an efficient way to keep a high resolution pass throughout the network. HRNet [38, 40] consists of multiple branches with different resolutions. Lower resolution branches capture contextual information and higher resolution branches preserve spatial information. With multi-scale fusions between branches, HRNet [38, 40] can generate high resolution feature maps with rich semantic").“ However, the combination of Li, Bernard, Chen, and Cheng is not relied on to teach “an adapter neural network portion; a bottleneck neural network portion having an input coupled to the output of the adapter neural network portion”. Han teaches “an adapter neural network portion (Han paragraph [0135] "the Bottleneck structure 900 may include a plurality of sequentially connected layers including a convolutional layer 910, a batch normalization layer 920, an activation layer 930, a convolutional layer 940, a batch normalization layer 950, an activation layer 960, a convolutional layer 970, a batch normalization layer 980, and an activation layer 990"); a bottleneck neural network portion having an input coupled to the output of the adapter neural network portion (Han paragraph [0135] "the Bottleneck structure 900 may include a plurality of sequentially connected layers including a convolutional layer 910, a batch normalization layer 920, an activation layer 930, a convolutional layer 940, a batch normalization layer 950, an activation layer 960, a convolutional layer 970, a batch normalization layer 980, and an activation layer 990")“. 
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to combine the method of identifying tertiary lymphoid structure (TLS) using pixel-wise analysis of an image, identifying boundaries of the TLS identified using the mask, and identifying features of the TLS using the boundary and sub-images as taught by Li, Bernard, Chen, and Cheng to include a bottleneck module as taught by Han. The suggestion/motivation for doing so would have been “The ROI segmentation model with a Bottleneck structure may have fewer model parameters and/or need less storage space than the original CNN model, thereby saving operation resources and improving system efficiency" as noted by Han disclosure in paragraph 54. Therefore, it would have been obvious to combine the disclosure of Li, Bernard, Chen, and Cheng with the Han disclosure to obtain the invention as specified in claim 13 as there is a reasonable expectation of success and/or because doing so merely combines prior art elements according to known methods to yield predictable results. Regarding claim 14, the combination of Li, Bernard, Chen, Cheng, and Han teaches “The method of claim 13, wherein the encoder sub-model further comprises: a third resolution-separation neural network portion having an input coupled to the output of the second resolution-fusion neural network portion (Cheng Figure 2 and page 3 right hand column paragraph 2 "In every following stage, a new branch is added to current branches in parallel with 1/2 of the lowest resolution in current branches.
As the network has more stages, it will have more parallel branches with different resolutions and resolutions from previous stages are all preserved in later stages"); and a third resolution-fusion neural network portion having an input coupled to the output of the third resolution-separation neural network portion (Cheng page 3 left hand column paragraph 3 "High-Resolution Network (HRNet) [38, 40] is proposed as an efficient way to keep a high resolution pass throughout the network. HRNet [38, 40] consists of multiple branches with different resolutions. Lower resolution branches capture contextual information and higher resolution branches preserve spatial information. With multi-scale fusions between branches, HRNet [38, 40] can generate high resolution feature maps with rich semantic").” The proposed combination as well as the motivation for combining Li, Bernard, Chen, Cheng, and Han references presented in the rejection of claim 13, applies to claim 14. Finally, the method recited in claim 14 is met by Li, Bernard, Chen, Cheng, and Han. Regarding claim 15, the combination of Li, Bernard, Chen, Cheng, and Han teaches “The method of claim 11, wherein the decoder sub-model further comprises: an atrous spatial pyramid pooling (ASPP) neural network portion (Chen page 5 paragraph 3 "DeepLabv3 augments the Atrous Spatial Pyramid Pooling module, which probes convolutional features at multiple scales by applying atrous convolution with different rates, with the image-level features [52].
We use the last feature map before logits in the original DeepLabv3 as the encoder output in our proposed encoder-decoder structure"); an upsampling layer having an input coupled to the output of the ASPP neural network portion (Chen Figure 2 and page 6 paragraph 1 "The encoder features are first bilinearly upsampled by a factor of 4 and then concatenated with the corresponding low-level features [73] from the network backbone that have the same spatial resolution (e.g., Conv2 before striding in ResNet-101 [25])"); a projection neural network portion (Han paragraph [0135] "the Bottleneck structure 900 may include a plurality of sequentially connected layers including a convolutional layer 910, a batch normalization layer 920, an activation layer 930, a convolutional layer 940, a batch normalization layer 950, an activation layer 960, a convolutional layer 970, a batch normalization layer 980, and an activation layer 990"); a classification neural network portion having an input coupled to the output of the upsampling layer and the projection neural network portion, wherein the classification neural network portion is configured to output a pixel-level mask, indicating, for each particular pixel of multiple individual pixels in an image being processed by the trained neural network model, a respective probability that the particular pixel is part of a tertiary lymphoid structure (Chen Figure 2 and page 6 paragraph 2 "The encoder features are first bilinearly upsampled by a factor of 4 and then concatenated with the corresponding low-level features [73] from the network backbone that have the same spatial resolution (e.g., Conv2 before striding in ResNet-101 [25]).
We apply another 1 × 1 convolution on the low-level features to reduce the number of channels, since the corresponding low-level features usually contain a large number of channels (e.g., 256 or 512) which may outweigh the importance of the rich encoder features (only 256 channels in our model) and make the training harder. After the concatenation, we apply a few 3 × 3 convolutions to refine the features followed by another simple bilinear upsampling by a factor of 4").” The proposed combination as well as the motivation for combining Li, Bernard, Chen, Cheng, and Han references presented in the rejection of claim 13, applies to claim 15. Finally, the method recited in claim 15 is met by Li, Bernard, Chen, Cheng, and Han. Regarding claim 16, the combination of Li, Bernard, Chen, Cheng, and Han teaches “The method of claim 14, wherein the decoder sub-model further comprises: an atrous spatial pyramid pooling (ASPP) neural network portion having an input coupled to an output of the third resolution-fusion neural network portion (Chen page 5 paragraph 3 "DeepLabv3 augments the Atrous Spatial Pyramid Pooling module, which probes convolutional features at multiple scales by applying atrous convolution with different rates, with the image-level features [52].
We use the last feature map before logits in the original DeepLabv3 as the encoder output in our proposed encoder-decoder structure"); an upsampling layer having an input coupled to the output of the ASPP neural network portion (Chen Figure 2 and page 6 paragraph 1 "The encoder features are first bilinearly upsampled by a factor of 4 and then concatenated with the corresponding low-level features [73] from the network backbone that have the same spatial resolution (e.g., Conv2 before striding in ResNet-101 [25])"); a projection neural network portion having an input coupled to an output of the bottleneck neural network portion (Han paragraph [0135] "the Bottleneck structure 900 may include a plurality of sequentially connected layers including a convolutional layer 910, a batch normalization layer 920, an activation layer 930, a convolutional layer 940, a batch normalization layer 950, an activation layer 960, a convolutional layer 970, a batch normalization layer 980, and an activation layer 990"); and a classification neural network portion having an input coupled to the output of the upsampling layer and the projection neural network portion, wherein the classification neural network portion is configured to output a pixel-level mask, indicating, for each particular pixel of multiple individual pixels in an image being processed by the trained neural network model, a respective probability that the particular pixel is part of a tertiary lymphoid structure (Chen Figure 2 and page 6 paragraph 2 "The encoder features are first bilinearly upsampled by a factor of 4 and then concatenated with the corresponding low-level features [73] from the network backbone that have the same spatial resolution (e.g., Conv2 before striding in ResNet-101 [25]).
We apply another 1 × 1 convolution on the low-level features to reduce the number of channels, since the corresponding low-level features usually contain a large number of channels (e.g., 256 or 512) which may outweigh the importance of the rich encoder features (only 256 channels in our model) and make the training harder. After the concatenation, we apply a few 3 × 3 convolutions to refine the features followed by another simple bilinear upsampling by a factor of 4").” The proposed combination as well as the motivation for combining Li, Bernard, Chen, Cheng, and Han references presented in the rejection of claim 13, applies to claim 16. Finally, the method recited in claim 16 is met by Li, Bernard, Chen, Cheng, and Han. Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Li and Bernard in view of Suzuki et al. ("Topological Structural Analysis of Digitized Binary Images by Border Following"). Regarding claim 19, the combination of Li and Bernard teaches “The method of claim 1, wherein identifying the boundaries of the at least one TLS in at least the portion of the image comprises: generating a binary version (Li paragraph [0094] "a binary metric can include a determination whether a lattice region is associated with a number of biological object depictions satisfying a threshold value (e.g., whether there were at least five tumor cells assigned to the region)") of the pixel-level mask (Li paragraph [0081] "the subject-level label generator sub-system 155 can retrieve masks according to one or more rules or using a trained model. For example, a rule can indicate that a particular mask or subset of masks are to be retrieved and compared to a digital pathology image in response to a determination of one or more types of one or more biological objects depicted in the digital pathology image.
As another example, a rule can indicate that a particular mask or subset of masks are to be retrieved and compared to a digital pathology image in response to a determination of a spatial-distribution metric satisfying, or failing to satisfy, a threshold value or occupying, or failing to occupy, a threshold range"); and identifying contours of the at least one TLS by applying a border-following algorithm (Suzuki page 4 paragraph 4 "a border following algorithm for topological structural analysis. This extracts the surroundness relation among the borders of a binary picture") to the binary version of the pixel-level mask (Li paragraph [0108] "The location metadata can include (for example) a set of coordinates corresponding to a point within the image, coordinates corresponding to an edge or border of the biological object depiction and/or coordinates corresponding to an area of the depicted object").“ However, the combination of Li and Bernard is not relied on to teach “identifying contours of the at least one TLS by applying a border-following algorithm”. Suzuki teaches “identifying contours of the at least one TLS by applying a border-following algorithm (Suzuki page 4 paragraph 4 "a border following algorithm for topological structural analysis. This extracts the surroundness relation among the borders of a binary picture").” It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to combine the method of identifying tertiary lymphoid structure (TLS) using pixel-wise analysis using sliding window as taught by Li and Bernard to include a border following algorithm as taught by Suzuki. The suggestion/motivation for doing so would have been “These algorithms can be effectively used in component counting, shrinking, and topological structural analysis of binary image, when a sequential digital computer is used" as noted by Suzuki disclosure in the abstract. 
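Suzuki's border following, the algorithm behind OpenCV's findContours, traces closed contours over a binary image and records which border surrounds which. A pure-Python sketch of only the first step, identifying border pixels of a binarized TLS mask (illustrative only; Suzuki's full method additionally orders these pixels into contours and recovers their hierarchy):

```python
def boundary_pixels(mask):
    """Pixels of a binary mask that lie on a region border.

    A foreground pixel is a border pixel if any 4-neighbor is
    background or out of bounds; interior pixels are skipped.
    """
    H, W = len(mask), len(mask[0])
    border = []
    for r in range(H):
        for c in range(W):
            if not mask[r][c]:
                continue
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                nr, nc = r + dr, c + dc
                if not (0 <= nr < H and 0 <= nc < W) or not mask[nr][nc]:
                    border.append((r, c))
                    break
    return border

# 5x5 mask with a filled 3x3 region: the 8 outer pixels of the square
# are border pixels; the center pixel is interior.
mask = [[0] * 5 for _ in range(5)]
for r in range(1, 4):
    for c in range(1, 4):
        mask[r][c] = 1
print(len(boundary_pixels(mask)))  # 8
```

In the claimed pipeline, the binary mask would come from thresholding the classifier's per-pixel TLS probabilities before the contours are traced.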
Therefore, it would have been obvious to combine the disclosure of Li and Bernard with the Suzuki disclosure to obtain the invention as specified in claim 19 as there is a reasonable expectation of success and/or because doing so merely combines prior art elements according to known methods to yield predictable results. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to JASPREET KAUR whose telephone number is (571)272-5534. The examiner can normally be reached Monday - Friday 7:30 am - 4:00 PST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Amandeep Saini can be reached at (571)272-3382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /JASPREET KAUR/Examiner, Art Unit 2662 /AMANDEEP SAINI/Supervisory Patent Examiner, Art Unit 2662

Prosecution Timeline

May 16, 2024
Application Filed
Mar 18, 2026
Non-Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596301
RETICLE INSPECTION AND PURGING METHOD AND TOOL
2y 5m to grant Granted Apr 07, 2026
Patent 12555199
IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM, WITH SYNTHESIS OF TWO INFERENCE RESULTS ABOUT AN IDENTICAL FRAME AND WITH INITIALIZING OF RECURRENT INFORMATION
2y 5m to grant Granted Feb 17, 2026
Patent 12513319
END-TO-END INSTANCE-SEPARABLE SEMANTIC-IMAGE JOINT CODEC SYSTEM AND METHOD
2y 5m to grant Granted Dec 30, 2025
Patent 12427606
SYSTEMS AND METHODS FOR NON-DESTRUCTIVELY TESTING STATOR WELD QUALITY AND EPOXY THICKNESS
2y 5m to grant Granted Sep 30, 2025
Patent 12421641
LAUNDRY TREATMENT APPLIANCE AND METHOD OF USING THE SAME ACCORDING TO MATCHED LAUNDRY LOADS
2y 5m to grant Granted Sep 23, 2025
Based on the 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
81%
Grant Probability
99%
With Interview (+30.0%)
2y 8m
Median Time to Grant
Low
PTA Risk
Based on 16 resolved cases by this examiner. Grant probability derived from career allow rate.
