Prosecution Insights
Last updated: April 19, 2026
Application No. 18/774,574

METHODS AND APPARATUS FOR ADAPTIVE SLIDE IMAGING USING A SELECTED SCANNING PROFILE

Final Rejection §103
Filed: Jul 16, 2024
Examiner: RUSH, ERIC
Art Unit: 2677
Tech Center: 2600 — Communications
Assignee: Pramana Inc.
OA Round: 4 (Final)
Grant Probability: 61% (Moderate)
OA Rounds: 5-6
To Grant: 3y 5m
Grant Probability With Interview: 97%

Examiner Intelligence

Career Allow Rate: 61% of resolved cases (383 granted / 628 resolved; -1.0% vs TC avg)
Interview Lift: +36.2% among resolved cases with interview (strong)
Avg Prosecution: 3y 5m typical timeline; 32 applications currently pending
Total Applications: 660 across all art units
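The interview-lift figure above is presumably the difference in allowance rates between the with-interview and without-interview cohorts of resolved cases. A minimal sketch of that arithmetic (the cohort counts below are invented for illustration and are not the examiner's actual figures):

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Allowance rate as a fraction of resolved cases."""
    return granted / resolved

# Hypothetical split of 628 resolved cases into interview cohorts.
with_interview = allow_rate(granted=160, resolved=200)     # 0.80
without_interview = allow_rate(granted=223, resolved=428)  # ~0.52

# Lift in percentage points: with-interview rate minus without-interview rate.
lift = with_interview - without_interview
print(f"Interview lift: {lift * 100:+.1f} pts")
```

With real cohort counts in place of the hypothetical ones, the same subtraction would reproduce the +36.2% figure reported above.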

Statute-Specific Performance

§101: 10.8% (-29.2% vs TC avg)
§103: 40.0% (+0.0% vs TC avg)
§102: 12.7% (-27.3% vs TC avg)
§112: 27.7% (-12.3% vs TC avg)

Comparison baseline is the Tech Center average estimate. Based on career data from 628 resolved cases.
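These per-statute rates presumably measure, for each statute, the allowance rate among resolved cases that received a rejection under that statute, compared against a Tech Center baseline. A sketch of that aggregation (the case records and TC averages below are hypothetical):

```python
from collections import defaultdict

# Hypothetical resolved-case records: (statutes cited in rejections, granted?).
cases = [
    ({"103"}, True),
    ({"101", "103"}, False),
    ({"102"}, False),
    ({"103", "112"}, True),
]

# Assumed Tech Center average allowance estimate per statute.
tc_avg = {"101": 0.40, "102": 0.40, "103": 0.40, "112": 0.40}

def statute_rates(cases):
    """Allowance rate among cases that saw a rejection under each statute."""
    granted = defaultdict(int)
    total = defaultdict(int)
    for statutes, allowed in cases:
        for s in statutes:
            total[s] += 1
            granted[s] += allowed  # bool counts as 0/1
    return {s: granted[s] / total[s] for s in total}

rates = statute_rates(cases)
deltas = {s: rates[s] - tc_avg[s] for s in rates}
```

A case rejected under multiple statutes contributes to each statute's denominator, which is why the per-statute rates need not sum to the career allow rate.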

Office Action (§103)

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

This action is responsive to the amendments and remarks received 29 November 2025. Claims 1 - 20 are currently pending.

Claim Objections

Claim 1 is objected to because of the following informalities: Lines 16 - 18 of claim 1 recite, in part, “as a function of the classification category of the slide; and image, using the optical system” which appears to contain a grammatical error and/or a minor informality. The Examiner suggests amending the claim to --as a function of the classification category of the slide; [[and]] image, using the optical system-- in order to improve the clarity and precision of the claim. Appropriate correction is required.

Claim 11 is objected to because of the following informalities: Lines 15 - 16 of claim 11 recite, in part, “classification category of the slide; and imaging, using the optical system” which appears to contain a grammatical error and/or a minor informality. The Examiner suggests amending the claim to --classification category of the slide; [[and]] imaging, using the optical system-- in order to improve the clarity and precision of the claim. Appropriate correction is required.

Response to Arguments

Applicant's arguments filed 29 November 2025 have been fully considered but they are not persuasive. On pages 10 - 12 of the remarks the Applicant’s Representative argues that amended claims 1 and 11 are patentably distinguishable over the previously cited prior art, D’Costa et al., Putman et al.
and Torre-Bueno, at least because the Office has not demonstrated that the previously cited prior art discloses “configure, using a set of application programming interfaces (APIs), download of one or more software containers for inline compute, wherein the scanning profile is associated with the one or more software containers; configure, using the APIs, an algorithm pipeline for processing the macro image of the slide; and configure, using the APIs, an algorithm pipeline for processing a high magnification image of the slide”. The Applicant’s Representative argues that D’Costa et al. cannot teach the aforementioned disputed claim limitation(s) at least because the previous Office Action acknowledged that D’Costa et al. do not expressly teach “capturing and processing a macro image of the slide” and fail to disclose explicitly “imaging the slide using a pipeline comprising a custom 4x magnification pipeline, wherein the custom 4x magnification pipeline comprises a tumor classification module”. In addition, the Applicant’s Representative argues that Putman et al. cannot teach the aforementioned disputed claim limitation(s) at least because the previous Office Action acknowledged that Putman et al. “fail to disclose expressly imaging the slide using a pipeline comprising a custom 4x magnification pipeline and a custom 40x magnification pipeline” and “fail to disclose explicitly wherein the custom 4x magnification pipeline comprises a tumor classification module.” Therefore, the Applicant’s Representative argues that amended claims 1 and 11 are patentably distinguishable over the previously cited prior art at least because the Office has not demonstrated that the previously cited prior art discloses the aforementioned disputed claim limitation(s). The Examiner respectfully disagrees. 
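For orientation, the disputed limitation describes an API-driven setup flow: download the software containers tied to a scanning profile, then configure separate algorithm pipelines for the macro image and the high-magnification image. A minimal sketch of such a flow, assuming entirely hypothetical class and method names (nothing below is taken from the application or the cited references):

```python
from dataclasses import dataclass, field

@dataclass
class ScanningProfile:
    category: str  # slide classification category
    containers: list = field(default_factory=list)  # associated software containers

class ScannerAPI:
    """Hypothetical API surface mirroring the three 'configure' steps."""

    def download_containers(self, profile: ScanningProfile) -> list:
        # Fetch the inline-compute containers associated with the profile.
        return [f"pulled:{c}" for c in profile.containers]

    def configure_pipeline(self, image_kind: str, modules: list) -> dict:
        # Assemble an algorithm pipeline for one image stream.
        return {"image": image_kind, "modules": list(modules)}

api = ScannerAPI()
profile = ScanningProfile("H&E-breast", containers=["tumor-classifier:1.2"])
pulled = api.download_containers(profile)
macro = api.configure_pipeline("macro", ["tissue-detect", "label-ocr"])
high_mag = api.configure_pipeline("40x", ["tumor-classifier"])
```

The dispute below turns on whether the combined references disclose this container-download and dual-pipeline configuration, not on any particular implementation of it.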
Initially, the Examiner asserts that Applicant's arguments fail to comply with 37 CFR 1.111(b) because they amount to a general allegation that the claims define a patentable invention without specifically pointing out how the language of the claims patentably distinguishes them from the references. Additionally, in response to applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986). Furthermore, the Examiner asserts that at least D’Costa et al. disclose “configure, using a set of application programming interfaces (APIs), download of one or more software containers for inline compute, wherein the scanning profile is associated with the one or more software containers”, configure, using the APIs, an algorithm pipeline for processing the image of the slide, and “configure, using the APIs, an algorithm pipeline for processing a high magnification image of the slide”, see at least figures 1A - 2B, 2E and 4A - 7, page 1 paragraph 0007, page 3 paragraph 0017 and 0019 - 0020, page 5 paragraph 0048 - page 6 paragraph 0052, page 6 paragraphs 0054 - 0056, page 7 paragraph 0063, page 7 paragraph 0065 - page 8 paragraph 0073, page 9 paragraphs 0076 - 0078, page 10 paragraphs 0083 and 0085, page 11 paragraph 0096, page 12 paragraphs 0101 - 0102, page 13 paragraphs 0104, 0108 - 0109 and 0111, page 14 paragraphs 0116 - 0117 and page 15 paragraphs 0125 and 0127 of D'Costa et al. 
wherein they disclose that “the term ‘user interface’ refers to a program enabling a user, for example histologists, cytologists, pathologists, etc., to input commands and data and receive results” [0048], that “processing subsystem 102 can retrieve and execute instructions stored in storage subsystem 104” [0052], that “data and other information can be stored in one or more remote locations, e.g., cloud storage, and synchronized with other the components of the scanning system 100” [0054], that “storage subsystem 104 can store one or more software programs to be executed by processing subsystem 102, such as a scanning interface application 120 or an image viewer application… ‘software’ can also include firmware or embedded applications or any other type of instructions readable and executable by processing subsystem 102. Software can be implemented as a single program or a collection of separate programs or program modules that interact as desired. In some embodiments, programs and/or data can be stored in non-volatile storage and copied in whole or in part to volatile working memory during program execution. From storage subsystem 104, processing subsystem 102 can retrieve program instructions to execute and data to process in order to execute various operations including operations described below. Examples of software include, but are not limited to, scanning interface software (e.g. to select settings for scanning and to enable the acquisition of scanned image data) and image viewer software (e.g. to view and/or analyze scanned image data)” [0055], that in “some embodiments, at least one user configurable scanning setting may be automatically changed based on the automatically recognized slide label information. Indeed, barcode information could be used to establish any configurable scanner setting, including any of those described herein. 
For example, barcode info can indicate a certain type of slide (e.g., type of cancer and type of stain) and the scanning device can recognize that type of slide and set scanning settings associated with that specific type of slide. Scanning settings can include the number of layers in the Z-stack, AOI shape, AOI size, magnification, etc.” [0073], that in “some embodiments, signals are transmitted from interface module 126 to the scanning device 110 to initiate one or more scanning operations. In some embodiments, the interface module 126 sends signals to the scanning device 110 to automatically initiate the scanning of slides at one or more slides positions along with a set of predetermined scanning settings (such as a set of scanning settings stored in storage subsystem 104)” [0096] and that in “some embodiments, the signals corresponding to the selection of one or more user configurable scanning parameters or the to the adjustment of an area of interest or a focus point are then sent to the scanning device 110 such that image data may be rescanned according to the selected scanning settings (step 706)” [0117]. The Examiner asserts that, as shown herein above and in the cited portions, D’Costa et al. 
disclose that software programs can be stored in one or more remote locations, that software programs can be retrieved and copied to volatile working memory in order to execute various operations of their described invention, that scanning settings can be retrieved from remote storage and utilized to perform scanning of a microscope slide, that the scanning settings can include one or more of a focus method, an area of interest (AOI) detection method, a magnification level, a label anonymization feature, etc., that the scanning settings may be automatically selected based on a recognized type of slide, that magnification levels include 20x and 40x magnification, and that a user may select and/or alter scanning settings that are utilized to perform scanning of the slide. In addition, the Examiner notes that, in view of the instant disclosure, the broadest reasonable interpretation of an “application programming interface” (API) is taken to comprise any “protocol that is used by two or more applications to communicate with each other”, see at least pages 37 - 38 paragraph 0087 of the instant specification. The Examiner notes that D’Costa et al. fail to disclose expressly capturing and processing a macro image of the slide, i.e., D’Costa et al. fail to disclose expressly a macro image. However, analogous art Putman et al. 
disclose capturing and processing a macro image of the slide as well as “configure, using a set of application programming interfaces (APIs), download of one or more software containers for inline compute, wherein the scanning profile is associated with the one or more software containers; configure, using the APIs, an algorithm pipeline for processing the macro image of the slide; and configure, using the APIs, an algorithm pipeline for processing a high magnification image of the slide”, see at least figures 2, 11 and 15A - 16, page 1 paragraphs 0005 and 0013, page 3 paragraph 0059 - page 4 paragraph 0061, page 5 paragraphs 0071 - 0072, page 6 paragraph 0079 - page 7 paragraph 0083, page 7 paragraphs 0085 - 0086, page 8 paragraph 0094, page 10 paragraphs 0120 - 0121, page 11 paragraphs 0123 - 0126 and 0129, page 12 paragraph 0134 - page 13 paragraph 0141, page 13 paragraphs 0143 - 0145, page 14 paragraphs 0152, 0157 and 0159 - 0160 and page 15 paragraphs 0162 - 0163 and 0165 of Putman et al. wherein they disclose that “control system 70 includes a controller and controller interface, and can control any settings of macro inspection system 100 (e.g., intensity of lights, color of lights, turning on and off one or more lights, pivoting or other movement of one or more lights (e.g., changing a light's angle), movement of light ring assembly 80 (e.g., in a z direction), movement of imaging platform 44; movement of specimen stage 50 or 150 (in x, y, θ, and/or z directions), movement of lens 34 (in x, y, θ, and/or z directions), movement of imaging translation platform 44, recording of image data by imaging assembly 33, rotation or movement of imaging assembly 33, processing of illumination data, processing of image data)... 
In some embodiments, individual components within macro inspection system 100 can include their own software, firmware, and/or hardware to control the individual components and communicate with other components in macro inspection system 100” [0079], that “communication between the control system (e.g., the controller and controller interface) and the components of macro inspection system 100 can use any suitable communication technologies” [0080], that “control system 70 can activate and adjust the intensity, color and/or pitch of lights L1 to Ln, and/or the distance between the specimen stage and lens 34 according to a stored illumination profile that is selected for the specimen. The illumination profile can be selected manually or automatically based on a computer algorithm that assesses different attributes of the specimen (e.g., as determined by one or more physical and/or mechanical properties of a specimen) and/or different goals of the examination and finds a suitable illumination profile” [0129], that “an illumination profile can be automatically selected based on the specimen (or feature) classification and/or a particular stage in the manufacturing or examining process. The specimen/feature classification can be used to query an illumination profile database that contains one or more illumination profiles associated with specimen and/or specimen feature types. By referencing the specimen classification determined in step 1514, a matching illumination profile can be automatically identified and retrieved. 
As discussed above, the illumination profile can contain a variety of settings data that describe configurations of macro inspection system 100 that can be used to achieve the optimal illumination landscape for the specimen or feature being observed” [0136], that although “computer analysis system 75 is illustrated as a localized computing system in which various components are coupled via a bus 1605, it is understood that various components and functional computational units (modules) can be implemented as separate physical or virtual systems. For example, one or more components and/or modules can be implemented in physically separate and remote devices, such as, using virtual processes (e.g., virtual machines or containers) instantiated in a cloud environment” [0138], that “image processing system can be configured to classify specific specimen features, determine other physical and/or mechanical specimen properties (e.g., specimen reflectivity, specimen dimensions). Classifications of specimen types, and specimen features/properties can be stored as part of an illumination profile. As such, various illumination profiles stored in illumination profile database 1636 can contain settings and parameters used to generate an optimal illumination landscape that can be referenced and matched to a sample based on sample type and or specific features or characteristics” [0145], that once “the desired illumination profile has been selected, e.g., from illumination profile database 1636, the illumination profile data can be transmitted to control system 70. 
Control system 70 can use this information in connection with process 1400 to apply an illumination profile to illuminate a specimen being examined” [0152], that “it is appreciated that throughout the description, discussions utilizing terms such as ‘determining,’ ‘providing,’ ‘identifying,’ ‘comparing’ or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system memories or registers or other such information storage, transmission or display devices. Certain aspects of the present disclosure include process steps and instructions described herein in the form of an algorithm. It should be noted that the process steps and instructions of the present disclosure could be embodied in software, firmware or hardware, and when embodied in software, could be downloaded to reside on and be operated from different platforms used by real time network operating systems” [0162] and that “it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products” [0165]. The Examiner asserts that, as shown herein above and in the cited portions, Putman et al. disclose capturing macro images of specimens, such as biological slides, for examination/inspection, that software programs for carrying out process steps of their invention can be downloaded over a network, that an illumination profile of settings for capturing an image(s) of a specimen for examination can be retrieved from a remote database based on classification of the specimen, that the illumination profile of settings can be transmitted to a control system of their macro inspection system for use in image capture and examination of the specimen and that some portions of their process steps can be performed simultaneously or in parallel. 
In addition, the Examiner notes that, in view of the instant disclosure, the broadest reasonable interpretation of an “application programming interface” (API) is taken to comprise any “protocol that is used by two or more applications to communicate with each other”, see at least pages 37 - 38 paragraph 0087 of the instant specification. Therefore, the Examiner asserts that D’Costa et al. in view of Putman et al. disclose the aforementioned disputed claim limitation(s).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

Claims 1, 3, 6, 8, 10, 11, 13, 16, 18 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over D’Costa et al. U.S. Publication No. 2020/0400930 A1 in view of Putman et al. U.S. Publication No. 2021/0042884 A1.

- With regards to claims 1 and 11, D’Costa et al. disclose an apparatus and method for adaptive slide imaging using a selected scanning profile, (D’Costa et al., Abstract, Figs. 1A - 2B, 2E, 4A - 5B, 6 & 7, Pg. 1 ¶ 0007, Pg. 3 ¶ 0017, Pg. 5 ¶ 0050 - Pg. 6 ¶ 0055, Pg. 8 ¶ 0073, Pg. 9 ¶ 0077, Pg. 10 ¶ 0082, Pg. 13 ¶ 0106 - 0110, Pg. 14 ¶ 0114) the apparatus comprising: a scanner configured to capture an image of a slide, (D’Costa et al., Abstract, Figs. 1B - 2C, 6 & 7, Pg. 1 ¶ 0006, Pg. 2 ¶ 0012, Pg. 3 ¶ 0017, Pg. 5 ¶ 0050, Pg. 7 ¶ 0062 - 0064, Pg. 9 ¶ 0077 - 0078, Pg. 11 ¶ 0092, Pg. 12 ¶ 0101, Pg. 13 ¶ 0105) wherein the scanner comprises: a stage configured to hold the slide; (D’Costa et al., Figs. 2B, 3A, 3B, 6 & 7, Pg.
2 ¶ 0012, Pg. 10 ¶ 0086 - 0089, Pg. 11 ¶ 0091, Pg. 12 ¶ 0100, Pg. 14 ¶ 0117) an optical sensor configured to convert an image into one or more electrical signals; (D’Costa et al., Figs. 1B, 6 & 7, Pg. 5 ¶ 0044 - 0045, Pg. 7 ¶ 0062 - 0064) and an optical system configured to form the image of the slide on the optical sensor, (D’Costa et al., Figs. 1B, 6 & 7, Pg. 5 ¶ 0044 - 0045, Pg. 7 ¶ 0062 - 0064) wherein the stage is configured to move the slide relative to the optical system; (D’Costa et al., Pg. 7 ¶ 0064) at least a processor; (D’Costa et al., Figs. 1A & 1B, Pg. 2 ¶ 0012, Pg. 5 ¶ 0051 - Pg. 6 ¶ 0055, Pg. 7 ¶ 0067) and a memory, wherein the memory contains instructions configuring the at least a processor (D’Costa et al., Figs. 1A & 1B, Pg. 2 ¶ 0012 and 0014, Pg. 6 ¶ 0052 - 0055, Pg. 7 ¶ 0067, Pg. 16 ¶ 0138) to: receive the image of the slide from the scanner; (D’Costa et al., Abstract, Figs. 1B - 2B & 7, Pg. 5 ¶ 0044 - 0045, Pg. 7 ¶ 0060 - 0064, Pg. 8 ¶ 0070, Pg. 9 ¶ 0077, Pg. 11 ¶ 0092, Pg. 12 ¶ 0099 and 0102, Pg. 14 ¶ 0116 - 0117) extract metadata from the image of the slide; (D’Costa et al., Pg. 2 ¶ 0009 and 0013, Pg. 8 ¶ 0073 - 0074, Pg. 10 ¶ 0085, Pg. 11 ¶ 0092, Pg. 15 ¶ 0127, Pg. 16 ¶ 0137) determine a classification category of the slide as a function of the metadata; (D’Costa et al., Pg. 8 ¶ 0073 - 0074, Pg. 11 ¶ 0092, Pg. 13 ¶ 0108, Pg. 15 ¶ 0127, Pg. 16 ¶ 0137) retrieve a scanning profile as a function of the classification category of the slide; (D’Costa et al., Pg. 8 ¶ 0073 - 0074, Pg. 11 ¶ 0092, Pg. 13 ¶ 0108, Pg. 15 ¶ 0127, Pg. 16 ¶ 0137) and image, using the optical system and optical sensor of the scanner, the slide as a function of the scanning profile; (D’Costa et al., Abstract, Figs. 1B - 2B, 2E, 6 & 7, Pg. 3 ¶ 0017, Pg. 7 ¶ 0060 - 0064, Pg. 8 ¶ 0072 - 0074, Pg. 9 ¶ 0077 - 0078, Pg. 10 ¶ 0082 - 0083, Pg. 11 ¶ 0092, Pg. 13 ¶ 0108, Pg. 14 ¶ 0114 and 0116 - 0117, Pg. 15 ¶ 0127, Pg. 
16 ¶ 0137) configure, using a set of application programming interfaces (APIs), download of one or more software containers for inline compute, (D’Costa et al., Figs. 1A - 2B, 2E & 4A - 7, Pg. 1 ¶ 0007, Pg. 3 ¶ 0017 and 0019 - 0020, Pg. 5 ¶ 0048 - Pg. 6 ¶ 0052, Pg. 6 ¶ 0054 - 0056, Pg. 7 ¶ 0063, Pg. 7 ¶ 0065 - Pg. 8 ¶ 0073, Pg. 9 ¶ 0076 - 0078, Pg. 13 ¶ 0104 and 0108 - 0109, Pg. 14 ¶ 0116 - 0117, Pg. 15 ¶ 0125 and 0127) wherein the scanning profile is associated with the one or more software containers; (D’Costa et al., Figs. 1A - 2B, 2E & 4A - 7, Pg. 3 ¶ 0017, Pg. 5 ¶ 0048 - Pg. 6 ¶ 0052, Pg. 6 ¶ 0054 - 0056, Pg. 8 ¶ 0070 - 0073, Pg. 9 ¶ 0076 - 0078, Pg. 13 ¶ 0104 and 0108 - 0109, Pg. 14 ¶ 0116 - 0117, Pg. 15 ¶ 0125 and 0127) configure, using the APIs, an algorithm pipeline for processing the image of the slide; (D’Costa et al., Figs. 2E, 5B & 5E - 7, Pg. 1 ¶ 0006 - 0007, Pg. 3 ¶ 0017, Pg. 6 ¶ 0054 - 0056, Pg. 7 ¶ 0063, Pg. 8 ¶ 0068 - 0073, Pg. 9 ¶ 0077 - 0080, Pg. 10 ¶ 0082 - 0084, Pg. 11 ¶ 0096, Pg. 12 ¶ 0099 - 0101, Pg. 13 ¶ 0106 - 0109, Pg. 14 ¶ 0114 - 0118, Pg. 15 ¶ 0125 and 0127) and configure, using the APIs, an algorithm pipeline for processing a high magnification image of the slide. (D’Costa et al., Figs. 2E, 5B & 5E - 7, Pg. 1 ¶ 0006 - 0007, Pg. 3 ¶ 0017, Pg. 6 ¶ 0054 - 0056, Pg. 7 ¶ 0063, Pg. 8 ¶ 0068 - 0073, Pg. 9 ¶ 0077 - 0080, Pg. 10 ¶ 0082 - 0084, Pg. 11 ¶ 0096, Pg. 12 ¶ 0099 - 0101, Pg. 13 ¶ 0106 - 0109, Pg. 14 ¶ 0114 - 0118, Pg. 15 ¶ 0125 and 0127) D’Costa et al. fail to disclose expressly capturing and processing a macro image of the slide, i.e., D’Costa et al. fail to disclose expressly a macro image. Pertaining to analogous art, Putman et al. disclose an apparatus and method for adaptive slide imaging using a selected scanning profile, (Putman et al., Figs. 12, 14 & 15A - 17, Pg. 1 ¶ 0005 - 0006 and 0013, Pg. 3 ¶ 0059 - 0060, Pg. 6 ¶ 0079 and 0082, Pg. 8 ¶ 0094, Pg. 10 ¶ 0120 - 0121, Pg. 11 ¶ 0129 - 0130, Pg. 12 ¶ 0132 - 0136, Pg. 
14 ¶ 0151 - 0152) the apparatus comprising: a scanner configured to capture a macro image of a slide, (Putman et al., Pg. 1 ¶ 0005 - 0006, Pg. 3 ¶ 0059 - 0060, Pg. 5 ¶ 0071 - 0072, Pg. 6 ¶ 0079 and 0082, Pg. 8 ¶ 0094 - 0097, Pg. 9 ¶ 0099, Pg. 11 ¶ 0128 - Pg. 12 ¶ 0133 [“Specimens as understood by a person of ordinary skill in the art refer to an article of examination (e.g., a semiconductor wafer or a biological slide)”]) wherein the scanner comprises: a stage configured to hold the slide; (Putman et al., Pg. 1 ¶ 0005 - 0006, Pg. 2 ¶ 0016, Pg. 4 ¶ 0063, Pg. 5 ¶ 0069) wherein the stage is configured to move the slide relative to the optical system; (Putman et al., Abstract, Pg. 1 ¶ 0006, 0008 and 0010, Pg. 2 ¶ 0025, Pg. 5 ¶ 0069 - 0072, Pg. 6 ¶ 0079, Pg. 8 ¶ 0094, Pg. 9 ¶ 0099) at least a processor; (Putman et al., Fig. 16, Pg. 1 ¶ 0006, Pg. 6 ¶ 0079, Pg. 7 ¶ 0083 - 0086, Pg. 12 ¶ 0138 - Pg. 13 ¶ 0140, Pg. 13 ¶ 0142, Pg. 14 ¶ 0158 - 0160) and a memory, wherein the memory contains instructions configuring the at least a processor (Putman et al., Fig. 16, Pg. 1 ¶ 0006, Pg. 6 ¶ 0079, Pg. 7 ¶ 0083 - 0086, Pg. 12 ¶ 0138 - Pg. 13 ¶ 0140, Pg. 13 ¶ 0142, Pg. 14 ¶ 0158 - 0160) to: receive the macro image of the slide from the scanner; (Putman et al., Figs. 14 - 15B, Pg. 1 ¶ 0005 - 0006, Pg. 5 ¶ 0071 - 0072, Pg. 8 ¶ 0094 and 0097, Pg. 11 ¶ 0127 - Pg. 12 ¶ 0136, Pg. 13 ¶ 0143) determine a classification category of the slide; (Putman et al., Fig. 15B, Pg. 1 ¶ 0005 - 0006 and 0013, Pg. 2 ¶ 0021, Pg. 12 ¶ 0131 - 0136, Pg. 13 ¶ 0143 - 0147) retrieve a scanning profile as a function of the classification category of the slide; (Putman et al., Fig. 15B, Pg. 1 ¶ 0005 - 0006 and 0013, Pg. 2 ¶ 0021, Pg. 11 ¶ 0128 - Pg. 12 ¶ 0136, Pg. 13 ¶ 0143 - 0145) and image, using the optical system and optical sensor of the scanner, the slide as a function of the scanning profile; (Putman et al., Figs. 15A & 15B, Pg. 1 ¶ 0005 - 0006 and 0013, Pg. 5 ¶ 0071 - 0072, Pg. 6 ¶ 0079 and 0082, Pg. 
8 ¶ 0097, Pg. 11 ¶ 0127 - Pg. 12 ¶ 0136, Pg. 13 ¶ 0143 - 0145) configure, using a set of application programming interfaces (APIs), download of one or more software containers for inline compute, (Putman et al., Figs. 2, 11 & 15A - 16, Pg. 1 ¶ 0005 and 0013, Pg. 3 ¶ 0059 - Pg. 4 ¶ 0061, Pg. 6 ¶ 0079 - Pg. 7 ¶ 0083, Pg. 7 ¶ 0085 - 0086, Pg. 8 ¶ 0094, Pg. 10 ¶ 0121, Pg. 11 ¶ 0124 - 0126, Pg. 12 ¶ 0133 - Pg. 13 ¶ 0141, Pg. 13 ¶ 0143 - 0145, Pg. 14 ¶ 0152, 0157 and 0159 - 0160, Pg. 15 ¶ 0162 - 0163 and 0165) wherein the scanning profile is associated with the one or more software containers; (Putman et al., Figs. 15A & 15B, Pg. 1 ¶ 0013, Pg. 6 ¶ 0079 - 0082, Pg. 8 ¶ 0094, Pg. 10 ¶ 0120, Pg. 11 ¶ 0124 - 0125 and 0129, Pg. 12 ¶ 0133 - Pg. 13 ¶ 0140, Pg. 13 ¶ 0143 - 0145, Pg. 14 ¶ 0152 and 0159 - 0160, Pg. 15 ¶ 0162 - 0163 and 0165) configure, using the APIs, an algorithm pipeline for processing the macro image of the slide; (Putman et al., Figs. 2, 11 & 15A - 16, Pg. 1 ¶ 0005 and 0013, Pg. 5 ¶ 0071 - 0072, Pg. 6 ¶ 0079 - 0082, Pg. 8 ¶ 0094, Pg. 11 ¶ 0123 - 0126 and 0129, Pg. 12 ¶ 0131 - 0136, Pg. 13 ¶ 0140 and 0143 - 0147, Pg. 14 ¶ 0152) configure, using the APIs, an algorithm pipeline for processing a high magnification image of the slide. (Putman et al., Figs. 2, 11 & 15A - 16, Pg. 1 ¶ 0005 and 0013, Pg. 5 ¶ 0071 - 0072, Pg. 6 ¶ 0079 - 0082, Pg. 8 ¶ 0094, Pg. 11 ¶ 0123 - 0126 and 0129, Pg. 12 ¶ 0131 - 0136, Pg. 13 ¶ 0140 and 0143 - 0147, Pg. 14 ¶ 0152) D’Costa et al. and Putman et al. are combinable because they are both directed towards image processing systems that adaptively select imaging settings for imaging a biological sample disposed on a slide based on a classification of the slide. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of D’Costa et al. with the teachings of Putman et al. 
This modification would have been prompted in order to enhance the base device of D’Costa et al. with the well-known and applicable technique Putman et al. applied to a comparable device. Capturing and processing a macro image of the slide, as taught by Putman et al., would enhance the base device of D’Costa et al. by enabling an entirety of the slide to be imaged in a single scanning step thereby increasing the overall operational speed of the base device of D’Costa et al. and improving its ability to efficiently and reliably capture high-quality image data of slides. Furthermore, this modification would have been prompted by the teachings and suggestions of D’Costa et al. that a low resolution image of the slide may be initially captured and processed, see at least page 3 paragraph 0017, page 9 paragraphs 0077 and 0081, page 11 paragraph 0092, page 12 paragraph 0101 and page 13 paragraph 0105 of D’Costa et al. This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that a macro image of the slide would be captured and processed so as to enable an entirety of a slide to be imaged in a single step and thus improving the ability of the base device of D’Costa et al. to efficiently and reliably capture high-quality image data of slides. Therefore, it would have been obvious to combine D’Costa et al. with Putman et al. to obtain the invention as specified in claims 1 and 11. - With regards to claims 3 and 13, D’Costa et al. in view of Putman et al. disclose the apparatus and method of claims 1 and 11, respectively, wherein determining the classification category of the slide as a function of the metadata comprises: identifying one or more fiducials on the macro image using an image processing algorithm; (D’Costa et al., Pg. 8 ¶ 0070 and 0073 - 0074, Pg. 10 ¶ 0082 - 0083, Pg. 11 ¶ 0092, Pg. 13 ¶ 0107 - 0108, Pg. 15 ¶ 0127, Pg. 
16 ¶ 0137) and determining the classification category of the slide as a function of the one or more fiducials. (D’Costa et al., Pg. 8 ¶ 0073 - 0074, Pg. 11 ¶ 0092, Pg. 15 ¶ 0127, Pg. 16 ¶ 0137) - With regards to claims 6 and 16, D’Costa et al. in view of Putman et al. disclose the apparatus and method of claims 1 and 11, respectively, wherein: extracting the metadata from the macro image of the slide comprises extracting textual data from a label of the slide using optical character recognition; (D’Costa et al., Pg. 8 ¶ 0069 - 0070 and 0072 - 0074, Pg. 11 ¶ 0092, Pg. 15 ¶ 0127, Pg. 16 ¶ 0137) and determining the classification category of the slide as a function of the metadata comprises determining the classification category of the slide as a function of the textual data. (D’Costa et al., Pg. 8 ¶ 0069 - 0070 and 0072 - 0074, Pg. 11 ¶ 0092, Pg. 15 ¶ 0127, Pg. 16 ¶ 0137) - With regards to claims 8 and 18, D’Costa et al. in view of Putman et al. disclose the apparatus and method of claims 1 and 11, respectively, wherein the memory contains instructions further configuring the at least a processor to: configure, using the APIs, the scanner; (D’Costa et al., Figs. 1A - 2B, 2E & 4A - 7, Pg. 1 ¶ 0007, Pg. 3 ¶ 0017 and 0019 - 0020, Pg. 5 ¶ 0048 - Pg. 6 ¶ 0052, Pg. 6 ¶ 0054 - 0056, Pg. 7 ¶ 0063, Pg. 7 ¶ 0065 - Pg. 8 ¶ 0073, Pg. 9 ¶ 0076 - 0078, Pg. 10 ¶ 0082 - 0084, Pg. 11 ¶ 0096 - Pg. 12 ¶ 0097, Pg. 13 ¶ 0104 and 0108 - 0109, Pg. 14 ¶ 0116 - 0117, Pg. 15 ¶ 0125 and 0127) image the slide at a magnification using the optical system and optical sensor of the scanner; (D’Costa et al., Abstract, Figs. 1B - 2C, 6 & 7, Pg. 1 ¶ 0006, Pg. 2 ¶ 0012, Pg. 3 ¶ 0017, Pg. 5 ¶ 0050, Pg. 7 ¶ 0062 - 0064, Pg. 9 ¶ 0077 - 0078, Pg. 11 ¶ 0092, Pg. 12 ¶ 0101, Pg. 13 ¶ 0105) and process the macro image of the slide using the algorithm pipeline. (D’Costa et al., Figs. 4A - 5E, Pg. 1 ¶ 0007, Pg. 6 ¶ 0055 - 0056, Pg. 7 ¶ 0060, Pg. 9 ¶ 0080, Pg. 10 ¶ 0082 - 0083, Pg. 11 ¶ 0092, Pg. 12 ¶ 0102, Pg. 
13 ¶ 0106 - 0108, Pg. 13 ¶ 0111 - Pg. 14 ¶ 0114, Pg. 15 ¶ 0120) D’Costa et al. fail to disclose explicitly image at a macro magnification. Pertaining to analogous art, Putman et al. disclose wherein the memory contains instructions further configuring the at least a processor to: configure, using the APIs, the scanner; (Putman et al., Figs. 2, 11 & 15A - 16, Pg. 1 ¶ 0005 and 0013, Pg. 3 ¶ 0059 - Pg. 4 ¶ 0061, Pg. 6 ¶ 0079 - Pg. 7 ¶ 0083, Pg. 7 ¶ 0085 - 0086, Pg. 8 ¶ 0094, Pg. 10 ¶ 0121, Pg. 11 ¶ 0124 - 0126, Pg. 12 ¶ 0133 - Pg. 13 ¶ 0141, Pg. 13 ¶ 0143 - 0145, Pg. 14 ¶ 0152, 0157 and 0159 - 0160, Pg. 15 ¶ 0162 - 0163 and 0165) image the slide at a macro magnification using the optical system and optical sensor of the scanner; (Putman et al., Pg. 1 ¶ 0005 - 0006, Pg. 3 ¶ 0059 - 0060, Pg. 5 ¶ 0071 - 0072, Pg. 6 ¶ 0079 and 0082 [“Specimens as understood by a person of ordinary skill in the art refer to an article of examination (e.g., a semiconductor wafer or a biological slide)” and “Lens 34 can have different magnification powers, and/or be configured to operate with brightfield, darkfield or oblique illumination, polarized light, cross-polarized light, differential interference contrast (DIC), phase contrast and/or any other suitable form of illumination. The type of lens used for macro inspection system 100 can be based on desired characteristics, for example, field of view, numerical aperture, among others. In some embodiments, lens 34 can be a macro lens that can be used to view a specimen within a single field of view. Note, the term field of view as understood by a person of ordinary skill in the art refers to an area of examination that is captured at once by an image sensor”]) and process the macro image of the slide using the algorithm pipeline. (Putman et al., Pg. 1 ¶ 0005 and 0013, Pg. 5 ¶ 0071 - 0072, Pg. 6 ¶ 0079 - Pg. 7 ¶ 0083, Pg. 8 ¶ 0094, Pg. 11 ¶ 0123 - 0126 and 0129, Pg. 12 ¶ 0131 - 0136, Pg. 
13 ¶ 0145 - 0147) - With regards to claims 10 and 20, D’Costa et al. in view of Putman et al. disclose the apparatus and method of claims 1 and 11, respectively, wherein: the scanning profile comprises: a magnification parameter; (D’Costa et al., Figs. 4A - 5E, Pg. 1 ¶ 0007, Pg. 8 ¶ 0070 and 0073, Pg. 9 ¶ 0080, Pg. 10 ¶ 0082, Pg. 13 ¶ 0107 - 0110, Pg. 14 ¶ 0118, Pg. 15 ¶ 0120, Pg. 16 ¶ 0135) and a z-stack layer parameter; (D’Costa et al., Figs. 4A - 5E, Pg. 1 ¶ 0007, Pg. 8 ¶ 0070 and 0073, Pg. 9 ¶ 0080, Pg. 10 ¶ 0082, Pg. 13 ¶ 0107 - 0110, Pg. 14 ¶ 0118, Pg. 15 ¶ 0120, Pg. 16 ¶ 0135) and the memory contains instructions further configuring the at least a processor to image the slide using the scanner as a function of the magnification parameter and the z-stack layer parameter. (D’Costa et al., Abstract, Figs. 1B - 2B, 2E, 6 & 7, Pg. 3 ¶ 0017, Pg. 7 ¶ 0060 - 0064, Pg. 8 ¶ 0072 - 0074, Pg. 9 ¶ 0077 - 0078, Pg. 10 ¶ 0082 - 0083, Pg. 11 ¶ 0092, Pg. 13 ¶ 0108, Pg. 14 ¶ 0114 and 0116 - 0117, Pg. 15 ¶ 0127, Pg. 16 ¶ 0137) Claims 2, 9, 12 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over D’Costa et al. U.S. Publication No. 2020/0400930 A1 in view of Putman et al. U.S. Publication No. 2021/0042884 A1 as applied to claims 1 and 11 above, and further in view of O’Dea U.S. Publication No. 2005/0131856 A1. - With regards to claims 2 and 12, D’Costa et al. in view of Putman et al. disclose the apparatus and method of claims 1 and 11, respectively, wherein retrieving the scanning profile as a function of the classification category of the slide comprises: selecting the scanning profile from a plurality of scanning profiles as a function of the classification category. (D’Costa et al., Figs. 2A, 2B, 2E, 6 & 7, Pg. 3 ¶ 0017, Pg. 8 ¶ 0070 and 0073 - 0074, Pg. 9 ¶ 0077, Pg. 10 ¶ 0082 - 0083, Pg. 13 ¶ 0106 - 0108, Pg. 14 ¶ 0114 - 0117, Pg. 15 ¶ 0127, Pg. 16 ¶ 0137) D’Costa et al. 
fail to disclose explicitly selecting the scanning profile as a function of a plurality of selection weights corresponding the plurality of scanning profiles; incrementing a utilization datum of the selected scanning profile; and updating a selection weight of the selected scanning profile. Pertaining to analogous art, Putman et al. disclose wherein retrieving the scanning profile as a function of the classification category of the slide comprises: selecting the scanning profile from a plurality of scanning profiles as a function of the classification category. (Putman et al., Fig. 15B, Pg. 12 ¶ 0134 - 0137) Putman et al. fail to disclose explicitly selecting the scanning profile as a function of a plurality of selection weights corresponding the plurality of scanning profiles; incrementing a utilization datum of the selected scanning profile; and updating a selection weight of the selected scanning profile. Pertaining to analogous art, O’Dea discloses selecting the scanning profile from a plurality of scanning profiles as a function of a plurality of selection weights corresponding the plurality of scanning profiles; (O’Dea, Abstract, Fig. 7, Pg. 1 ¶ 0005 - 0007 and 0013, Pg. 1 ¶ 0015 - Pg. 2 ¶ 0016, Pg. 2 ¶ 0031, Pg. 3 ¶ 0040 - 0042, Pg. 4 ¶ 0045, 0048 and 0051 - 0052, Pg. 5 ¶ 0054 - 0055 [“The settings configured by the user are recorded at the user interface 140. Counters for the settings are incremented based on usage and/or time, for example. When the user returns to use the ultrasound system again, the settings may be loaded based on previous usage patterns. The system ‘remembers’ the settings/functions/options that the user commonly uses when operating the system 100”]) incrementing a utilization datum of the selected scanning profile; (O’Dea, Fig. 7, Pg. 4 ¶ 0045 and 0048, Pg. 4 ¶ 0051 - Pg. 5 ¶ 0055) and updating a selection weight of the selected scanning profile. (O’Dea, Fig. 7, Pg. 4 ¶ 0045 and 0048, Pg. 4 ¶ 0051 - Pg. 5 ¶ 0055) D’Costa et al. 
in view of Putman et al. and O’Dea are combinable because they are all directed towards image processing systems that adaptively select imaging settings for capturing an image of a biological specimen. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of D’Costa et al. in view of Putman et al. with the teachings of O’Dea. This modification would have been prompted in order to enhance the combined base device of D’Costa et al. in view of Putman et al. with the well-known and applicable technique O’Dea applied to a similar device. Selecting the scanning profile as a function of a plurality of weights associated with the plurality of scanning profiles, incrementing a utilization datum of the selected scanning profile, and updating a selection weight of the selected scanning profile, as taught by O’Dea, would enhance the combined base device by improving its ability to quickly and reliably identify the scanning profile that is likely to be most suitable for capturing an image of the specimen that facilitates accurate analysis and examination of the specimen since the scanning profile that is most frequently utilized for imaging a specimen type of the specimen would be tracked, retrieved and utilized as the scanning profile for image capture of the specimen type of the specimen. This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that the scanning profile that is most frequently utilized for imaging a specimen type of the specimen would be selected, tracked and utilized as the scanning profile for image capture of the specimen so as to improve the ability of the combined base device to quickly and reliably identify the scanning profile that is likely to be the most suitable for capturing an image of the specimen that facilitates accurate analysis and examination of the specimen. 
Therefore, it would have been obvious to combine D’Costa et al. in view of Putman et al. with O’Dea to obtain the invention as specified in claims 2 and 12. - With regards to claims 9 and 19, D’Costa et al. in view of Putman et al. disclose the apparatus and method of claims 1 and 11, respectively, wherein retrieving the scanning profile as a function of the classification category of the slide comprises: retrieving a plurality of scanning profiles; (D’Costa et al., Figs. 2A, 2B, 2E, 6 & 7, Pg. 3 ¶ 0017, Pg. 8 ¶ 0070 - 0073, Pg. 9 ¶ 0077, Pg. 10 ¶ 0082 - 0084, Pg. 11 ¶ 0096, Pg. 13 ¶ 0106 - 0108 and 0112, Pg. 14 ¶ 0114 - 0117, Pg. 15 ¶ 0120, 0125 and 0127, Pg. 16 ¶ 0129 and 0137) and selecting the scanning profile from the plurality of scanning profiles. (D’Costa et al., Figs. 2A, 2B, 2E, 6 & 7, Pg. 3 ¶ 0017, Pg. 8 ¶ 0070 - 0073, Pg. 9 ¶ 0077, Pg. 10 ¶ 0082 - 0084, Pg. 11 ¶ 0096, Pg. 13 ¶ 0106 - 0108 and 0112, Pg. 14 ¶ 0114 - 0117, Pg. 15 ¶ 0120, 0125 and 0127, Pg. 16 ¶ 0129 and 0137) D’Costa et al. fail to disclose explicitly retrieving a plurality of scanning profiles from a profile look up table; and selecting the scanning profile as a function of a plurality of weights associated with the plurality of scanning profiles. Pertaining to analogous art, Putman et al. disclose wherein retrieving the scanning profile as a function of the classification category of the slide comprises: retrieving a plurality of scanning profiles from a profile look up table. (Putman et al., Pg. 10 ¶ 0120 - 0121, Pg. 11 ¶ 0124 and 0129 - 0130, Pg. 12 ¶ 0133 and 0136, Pg. 13 ¶ 0140 and 0143 - 0147, Pg. 14 ¶ 0151 - 0152) Putman et al. fail to disclose explicitly selecting the scanning profile as a function of a plurality of weights associated with the plurality of scanning profiles. Pertaining to analogous art, O’Dea discloses selecting the scanning profile from the plurality of scanning profiles as a function of a plurality of weights associated with the plurality of scanning profiles. 
(O’Dea, Abstract, Fig. 7, Pg. 1 ¶ 0005 - 0007 and 0013, Pg. 1 ¶ 0015 - Pg. 2 ¶ 0016, Pg. 2 ¶ 0031, Pg. 3 ¶ 0040 - 0042, Pg. 4 ¶ 0045, 0048 and 0051 - 0052, Pg. 5 ¶ 0054 - 0055 [“The settings configured by the user are recorded at the user interface 140. Counters for the settings are incremented based on usage and/or time, for example. When the user returns to use the ultrasound system again, the settings may be loaded based on previous usage patterns. The system ‘remembers’ the settings/functions/options that the user commonly uses when operating the system 100”]) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of D’Costa et al. in view of Putman et al. with additional teachings of Putman et al. This modification would have been prompted in order to enhance the combined base device of D’Costa et al. in view of Putman et al. with the well-known and applicable technique Putman et al. applied to a comparable device. Retrieving a plurality of scanning profiles from a profile look up table, as taught by Putman et al., would enhance the combined base device by facilitating its ability to store scanning profiles for subsequent access and use and by allowing for scanning profiles suitable for imaging and analysis of various different specimens to be quickly and easily located, accessed and utilized since the plurality of scanning profiles would be stored in association with data indicating their suitability for imaging and examination of various different specimen types. This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that a plurality of scanning profiles would be stored in a profile look up table so as to facilitate the quick and efficient retrieval of a suitable scanning profile for imaging various slides. In addition, D’Costa et al. in view of Putman et al. 
and O’Dea are combinable because they are all directed towards image processing systems that adaptively select imaging settings for capturing an image of a biological specimen. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of D’Costa et al. in view of Putman et al. with the teachings of O’Dea. This modification would have been prompted in order to enhance the combined base device of D’Costa et al. in view of Putman et al. with the well-known and applicable technique O’Dea applied to a similar device. Selecting the scanning profile as a function of a plurality of weights associated with the plurality of scanning profiles, as taught by O’Dea, would enhance the combined base device by improving its ability to quickly and reliably identify the scanning profile that is most likely to be suitable for capturing an image of the specimen that facilitates accurate analysis and examination of the specimen since the scanning profile that is most frequently utilized for imaging a specimen type of the specimen would be retrieved and utilized as the scanning profile for image capture. This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that the scanning profile that is most frequently utilized for imaging a specimen type of the specimen would be selected and utilized as the scanning profile for image capture so as to improve the ability of the combined base device to quickly and reliably identify the scanning profile that is likely to be the most suitable for capturing an image of the specimen that facilitates accurate analysis and examination of the specimen. Therefore, it would have been obvious to combine D’Costa et al. in view of Putman et al. with additional teachings of Putman et al. and O’Dea to obtain the invention as specified in claims 9 and 19. 
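The mechanism at issue in claims 2, 9, 12 and 19 (selecting among scanning profiles by selection weight, then incrementing a utilization datum and updating the weight of the chosen profile) can be sketched in a few lines. Everything below is hypothetical: the class name, field names, and the additive weight update are assumptions made for illustration, not drawn from the application or the cited references.

```python
from dataclasses import dataclass

@dataclass
class ScanningProfile:
    """Hypothetical scanning profile carrying the fields the claims recite."""
    name: str
    magnification: float
    z_stack_layers: int
    selection_weight: float = 1.0
    utilization_count: int = 0  # the "utilization datum"

def select_profile(profiles, weight_step=0.1):
    """Select the highest-weighted profile, increment its utilization
    datum, and update (here: additively bump) its selection weight."""
    chosen = max(profiles, key=lambda p: p.selection_weight)
    chosen.utilization_count += 1
    chosen.selection_weight += weight_step
    return chosen
```

Under this reading, a profile that is repeatedly chosen for a given slide category accumulates weight and is selected ever more reliably, which mirrors the usage counters the Office Action cites from O’Dea.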
Claims 4 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over D’Costa et al. U.S. Publication No. 2020/0400930 A1 in view of Putman et al. U.S. Publication No. 2021/0042884 A1 as applied to claims 3 and 13 above, and further in view of Gallagher-Gruber et al. U.S. Publication No. 2021/0090238 A1. - With regards to claims 4 and 14, D’Costa et al. in view of Putman et al. disclose the apparatus and method of claims 3 and 13, respectively, wherein imaging the slide as a function of the scanning profile comprises: detecting a region of interest of the slide using the macro image, (D’Costa et al., Pg. 4 ¶ 0042, Pg. 10 ¶ 0082 - 0083, Pg. 12 ¶ 0102, Pg. 13 ¶ 0106 - 0108, Pg. 14 ¶ 0113 - 0114) wherein the region of interest encompasses one or more fiducials bounding a pathology sample; (D’Costa et al., Figs. 5A - 5E, Pg. 4 ¶ 0042 - 0043, Pg. 10 ¶ 0082 - 0083, Pg. 12 ¶ 0102, Pg. 13 ¶ 0106 - 0108, Pg. 14 ¶ 0113 - 0114) and imaging a high-magnification image of the region of interest using the optical system and optical sensor of the scanner. (D’Costa et al., Figs. 2A, 2B, 2D, 2E & 5A - 5E, Pg. 1 ¶ 0006 - 0007, Pg. 7 ¶ 0062 - 0064, Pg. 8 ¶ 0070 and 0072 - 0073, Pg. 9 ¶ 0079 - 0081, Pg. 10 ¶ 0083 - 0084, Pg. 12 ¶ 0101 - 0102, Pg. 13 ¶ 0105 - 0109, Pg. 14 ¶ 0113 - 0114, Pg. 15 ¶ 0119 - 0120 and 0128) D’Costa et al. fail to disclose explicitly training a region machine-learning model using region training data, wherein the region training data comprises slide images correlated to labeled regions; and detecting, using a region machine-learning model, a region of interest of the slide. Pertaining to analogous art, Gallagher-Gruber et al. disclose training a region machine-learning model using region training data, wherein the region training data comprises slide images correlated to labeled regions; (Gallagher-Gruber et al., Figs. 12A - 12C, 14A & 14B, Pg. 3 ¶ 0027 - 0032, Pg. 15 ¶ 0171 - Pg. 16 ¶ 0175, Pg. 17 ¶ 0182 - Pg. 18 ¶ 0185, Pg. 19 ¶ 0196 - Pg. 
20 ¶ 0198) and detecting, using a region machine-learning model, a region of interest of the slide using the image, wherein the region of interest encompasses one or more fiducials bounding a sample. (Gallagher-Gruber et al., Figs. 8, 12A - 12C, 14A & 14B, Pg. 3 ¶ 0027 - 0032 and 0035, Pg. 6 ¶ 0117, Pg. 12 ¶ 0150 - 0152, Pg. 15 ¶ 0171 - Pg. 16 ¶ 0175, Pg. 17 ¶ 0182 - Pg. 18 ¶ 0185, Pg. 20 ¶ 0198) D’Costa et al. in view of Putman et al. and Gallagher-Gruber et al. are combinable because they are all directed towards image processing systems that adaptively select imaging settings for imaging a specimen sample. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of D’Costa et al. in view of Putman et al. with the teachings of Gallagher-Gruber et al. This modification would have been prompted in order to substitute the region of interest detection method of D’Costa et al. with the region of interest detection technique of Gallagher-Gruber et al. The region of interest detection technique of Gallagher-Gruber et al. could be substituted in place of the region of interest detection method of D’Costa et al. utilizing well-known techniques in the art and would likely yield predictable results, in that in the combination the trained machine-learning model of Gallagher-Gruber et al. would be utilized to detect the region of interest. This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that a trained machine-learning model would be utilized to detect the regions/areas of interest of the slide. Therefore, it would have been obvious to combine D’Costa et al. in view of Putman et al. with Gallagher-Gruber et al. to obtain the invention as specified in claims 4 and 14. Claims 5 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over D’Costa et al. U.S. Publication No. 
2020/0400930 A1 in view of Putman et al. U.S. Publication No. 2021/0042884 A1 as applied to claims 1 and 11 above, and further in view of Yip et al. U.S. Publication No. 2021/0166381 A1. - With regards to claims 5 and 15, D’Costa et al. in view of Putman et al. disclose the apparatus and method of claims 1 and 11, respectively, wherein: extracting the metadata from the macro image of the slide comprises determining a circularity of a pathology sample of the slide using an image processing algorithm. (D’Costa et al., Abstract, Pg. 4 ¶ 0042 - 0043, Pg. 5 ¶ 0046, Pg. 10 ¶ 0083, Pg. 12 ¶ 0102, Pg. 13 ¶ 0106 - 0108, Pg. 14 ¶ 0113 [“detecting AOIs that are round or circular in nature.”]) D’Costa et al. fail to disclose explicitly wherein: determining the classification category of the slide as a function of the metadata comprises determining the classification category of the slide as a function of the circularity of the pathology sample. Pertaining to analogous art, Yip et al. disclose determining a circularity of a pathology sample of the slide using an image processing algorithm; (Yip et al., Pg. 6 ¶ 0088 - 0090, Pg. 14 ¶ 0171 - 0172, Pg. 16 ¶ 0190 - 0191, Pg. 19 ¶ 0219, Pg. 24 ¶ 0262, Pg. 27 ¶ 0297, Pg. 30 ¶ 0345 - Pg. 31 ¶ 0351) and determining the classification category of the slide as a function of the metadata comprises determining the classification category of the slide as a function of the circularity of the pathology sample. (Yip et al., Pg. 2 ¶ 0014, Pg. 3 ¶ 0017, Pg. 5 ¶ 0085, Pg. 6 ¶ 0088 - 0090, Pg. 14 ¶ 0171 - 0172, Pg. 19 ¶ 0219, Pg. 24 ¶ 0262, Pg. 27 ¶ 0297, Pg. 30 ¶ 0345 - Pg. 31 ¶ 0351 [“architecture 1200 recognizes various pixel data patterns in the portion of the digital image that is located within or near each small square tile and assigns a tissue class label to each small square tile based on those detected pixel data patterns”, “The pixel data patterns that the algorithm detects may represent visually detectable features. 
Some examples of those visually detectable features may include color, texture, cell size, shape, and spatial organization” and “The shape of individual cells, specifically how circular they are, can indicate what type of cell they are. Fibroblasts (stromal cells) are normally elongated and slim, while lymphocytes are very round. Tumor cells can be more irregularly shaped”]) D’Costa et al. in view of Putman et al. and Yip et al. are combinable because they are all directed towards image processing systems that capture and analyze images of biological specimens. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of D’Costa et al. in view of Putman et al. with the teachings of Yip et al. This modification would have been prompted in order to substitute the slide classification process of D’Costa et al. with the slide classification technique of Yip et al. The slide classification technique of Yip et al. could be substituted in place of the slide classification process of D’Costa et al. utilizing well-known techniques in the art and would likely yield predictable results, in that a classification category of the slide would be determined as a function of the circularity of the pathology sample. Furthermore, this modification would have been prompted by the teachings and suggestions of Putman et al. that computer vision and/or artificial intelligence techniques can be utilized to perform classification of specimens, see at least page 12 paragraph 0135 and page 13 paragraphs 0145 - 0149 of Putman et al. This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that a classification category of the slide would be determined as a function of the circularity of the pathology sample. Therefore, it would have been obvious to combine D’Costa et al. in view of Putman et al. with Yip et al. 
to obtain the invention as specified in claims 5 and 15. Claims 7 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over D’Costa et al. U.S. Publication No. 2020/0400930 A1 in view of Putman et al. U.S. Publication No. 2021/0042884 A1 as applied to claims 6 and 16 above, and further in view of Linhart et al. U.S. Publication No. 2023/0068571 A1. - With regards to claims 7 and 17, D’Costa et al. in view of Putman et al. disclose the apparatus and method of claims 6 and 16, respectively, wherein determining the classification category of the slide as a function of the textual data comprises: extracting one or more keywords from the textual data; (D’Costa et al., Figs. 1B & 5A - 5E, Pg. 8 ¶ 0069 - 0070 and 0073 - 0074, Pg. 10 ¶ 0085, Pg. 11 ¶ 0092, Pg. 13 ¶ 0111, Pg. 15 ¶ 0126 - 0127, Pg. 16 ¶ 0136 - 0137) and determining the classification category of the slide as a function of the one or more keywords. (D’Costa et al., Figs. 1B & 5A - 5E, Pg. 8 ¶ 0069 - 0070 and 0073 - 0074, Pg. 10 ¶ 0085, Pg. 11 ¶ 0092, Pg. 13 ¶ 0111, Pg. 15 ¶ 0126 - 0127, Pg. 16 ¶ 0136 - 0137) D’Costa et al. fail to disclose explicitly using a natural language processing algorithm. Pertaining to analogous art, Linhart et al. disclose extracting one or more keywords from the textual data using a natural language processing algorithm. (Linhart et al., Pg. 12 ¶ 0168 - 0169, Pg. 12 ¶ 0172 - Pg. 13 ¶ 0176 [“in a condition that the LIS information includes a textual data element describing a physician's report of a mammography scan, and the report includes indications of suspected lesions and/or areas of calcification, QC 140 may be adapted to extract words of interest (e.g., mammogram, calcification, lesion, BRCA (Breast Cancer gene), etc.) from the physician's report... Additionally, or alternatively, QC 140 may include or may employ at least one sub-module, such as a natural language processing (NLP) module 141, adapted to extract the words of interest.”]) D’Costa et al. 
in view of Putman et al. and Linhart et al. are combinable because they are all directed towards image processing systems that capture and analyze images of biological specimens. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of D’Costa et al. in view of Putman et al. with the teachings of Linhart et al. This modification would have been prompted in order to enhance the combined base device of D’Costa et al. in view of Putman et al. with the well-known and applicable technique Linhart et al. applied to a similar device. Utilizing a natural language processing algorithm to extract one or more keywords from textual data, as taught by Linhart et al., would enhance the combined base device by facilitating its ability to accurately, reliably and efficiently identify pertinent text of interest, keywords, from the textual data that relate to the classification category of the slide thereby helping ensure robust and dependable category classification of slides based on textual data extracted from slide labels by the combined base device. This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that a natural language processing algorithm would be utilized to extract the one or more keywords from the textual data so as to enable the combined base device to accurately, reliably and efficiently identify pertinent text of interest, keywords, from the textual data that relate to the classification category of the slide in order to help ensure robust and dependable category classification of slides based on textual data extracted from slide labels by the combined base device. Therefore, it would have been obvious to combine D’Costa et al. in view of Putman et al. with Linhart et al. to obtain the invention as specified in claims 7 and 17. 
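Claims 7 and 17 combine keyword extraction from OCR'd label text with category classification. A minimal sketch of that step follows; the categories and keyword table are invented for illustration (a production system would use a trained NLP model of the kind the Office Action attributes to Linhart et al. rather than a hand-built lookup).

```python
import re

# Hypothetical keyword table; these categories and terms do not come
# from the application or the cited references.
CATEGORY_KEYWORDS = {
    "cytology": {"pap", "cytology", "smear"},
    "hematology": {"blood", "cbc", "smear"},
    "histology": {"biopsy", "resection", "block"},
}

def classify_label_text(text: str):
    """Tokenize OCR'd label text and score each category by keyword
    overlap; return the best-scoring category, or None on no match."""
    tokens = set(re.findall(r"[a-z]+", text.lower()))
    scores = {cat: len(tokens & kws) for cat, kws in CATEGORY_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None
```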
Conclusion Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to ERIC RUSH whose telephone number is (571) 270-3017. The examiner can normally be reached 9am - 5pm Monday - Friday. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Bee can be reached at (571) 270 - 5183. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. 
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /ERIC RUSH/Primary Examiner, Art Unit 2677
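As background for the claims 5 and 15 dispute, the "circularity" of a segmented sample is conventionally measured by the isoperimetric ratio 4πA/P², which equals 1.0 for a perfect circle and falls toward 0 for elongated shapes such as the fibroblasts contrasted with lymphocytes in the passage quoted from Yip et al. The metric choice here is an assumption for illustration; neither cited passage is quoted as specifying a formula.

```python
import math

def circularity(area: float, perimeter: float) -> float:
    """Isoperimetric circularity, 4*pi*A / P**2: 1.0 for a perfect
    circle, lower for elongated or irregular shapes."""
    return 4.0 * math.pi * area / (perimeter ** 2)

# A unit circle scores 1.0; a unit square scores pi/4 (about 0.785).
```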

Prosecution Timeline

Jul 16, 2024: Application Filed
Oct 29, 2024: Non-Final Rejection (§103)
Dec 17, 2024: Examiner Interview Summary
Dec 17, 2024: Applicant Interview (Telephonic)
Jan 02, 2025: Response Filed
Feb 07, 2025: Final Rejection (§103)
Jul 14, 2025: Request for Continued Examination
Jul 16, 2025: Response after Non-Final Action
Sep 10, 2025: Non-Final Rejection (§103)
Oct 08, 2025: Interview Requested
Oct 22, 2025: Examiner Interview Summary
Nov 29, 2025: Response Filed
Mar 05, 2026: Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586229: COMPUTER IMPLEMENTED METHODS AND DEVICES FOR DETERMINING DIMENSIONS AND DISTANCES OF HEAD FEATURES (granted Mar 24, 2026; 2y 5m to grant)
Patent 12548292: METHOD AND SYSTEM FOR IDENTIFYING REFLECTIONS IN THERMAL IMAGES (granted Feb 10, 2026; 2y 5m to grant)
Patent 12548395: SYSTEMS, METHODS AND DEVICES FOR MONITORING BETTING ACTIVITIES (granted Feb 10, 2026; 2y 5m to grant)
Patent 12541856: MASKING OF OBJECTS IN AN IMAGE STREAM (granted Feb 03, 2026; 2y 5m to grant)
Patent 12518504: METHOD FOR CALIBRATING AN OBJECT RE-IDENTIFICATION SOLUTION IMPLEMENTING AN ARRAY OF A PLURALITY OF CAMERAS (granted Jan 06, 2026; 2y 5m to grant)
Based on the examiner's 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 61%
With Interview: 97% (+36.2%)
Median Time to Grant: 3y 5m
PTA Risk: High
Based on 628 resolved cases by this examiner. Grant probability derived from career allow rate.
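The projection figures are mutually consistent under a simple additive model, which is an assumption about how the tool combines them rather than anything the page states: 383 granted of 628 resolved gives the 61% base rate, and adding the +36.2% interview lift yields roughly the 97% with-interview figure.

```python
def grant_probability(base_rate: float, interview_lift: float = 0.0) -> float:
    """Assumed additive model: career allow rate plus interview lift,
    capped at 1.0."""
    return min(base_rate + interview_lift, 1.0)

base = 383 / 628                                 # ~0.61, shown as 61%
with_interview = grant_probability(base, 0.362)  # ~0.97, shown as 97%
```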
