Prosecution Insights
Last updated: April 19, 2026
Application No. 17/931,271

ADAPTIVE SENSING BASED ON DEPTH

Latest Action: Final Rejection (§103; nonstatutory double patenting)
Filed: Sep 12, 2022
Examiner: HAUSMANN, MICHELLE M
Art Unit: 2671
Tech Center: 2600 — Communications
Assignee: Scopio Labs Ltd.
OA Round: 4 (Final)
Grant Probability: 76% (Favorable); 98% with interview
Expected OA Rounds: 5-6
Expected Time to Grant: 3y 1m

Examiner Intelligence

Career Allow Rate: 76% (658 granted / 863 resolved), +14.2% vs TC avg (above average)
Interview Lift: +21.6% higher allowance in resolved cases with an interview than without
Typical Timeline: 3y 1m average prosecution; 23 applications currently pending
Career History: 886 total applications across all art units

Statute-Specific Performance

§101: 14.6% allowance (-25.4% vs TC avg)
§103: 61.2% allowance (+21.2% vs TC avg)
§102: 5.7% allowance (-34.3% vs TC avg)
§112: 10.1% allowance (-29.9% vs TC avg)

Tech Center averages are estimates. Based on career data from 863 resolved cases.
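As a quick check, the headline figures above can be re-derived from the raw counts; a minimal sketch (the Tech Center averages are not reported directly and are inferred here from the stated deltas, so treat them as estimates):

```python
def allow_rate_pct(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

career = allow_rate_pct(658, 863)   # ~76.2%, displayed as 76%
tc_avg = career - 14.2              # implied TC 2600 average, ~62.0%

# Statute-specific rows work the same way, e.g. the §103 row:
sec103_tc_avg = 61.2 - 21.2         # implied TC average for §103 outcomes, 40.0%
```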

Office Action

Final Rejection: §103 and nonstatutory double patenting
DETAILED ACTION

Response to Amendment

Claims 1-20 are pending. Claims 1-20 are amended directly or by dependency on an amended claim.

Response to Arguments

Applicant’s arguments (see pages 6-8, filed August 28, 2025) with respect to the 35 USC 103 rejections of claims 1-20, along with accompanying amendments received on the same date, have been considered but are moot because the new ground of rejection does not rely on the combination of references applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. It is noted that while the claim language is an improvement, several of the options in the list of 3D processes are still broad and easily found. The Examiner recommends removing some of those broader options from that list to overcome the cited references and improve conditions for allowance. The double patenting rejections are not being argued at this time pending the arrival at otherwise allowable subject matter.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission.
For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claim 1 (and, by dependency, claims 2-20) is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent No. 11482021 in view of Perz (IDS: US 20100265323 A1) and further in view of Tsujimoto (US 20130016885 A1). Although the claims at issue are not identical, they are not patentably distinct from each other because the current claims are a broader version of those allowed in U.S. Patent No. 11482021.

The new limitation of “an objective lens configured to collect light from a sample illuminated by the illumination assembly and generate one or more images of the sample along one or more focal planes on an image sensor array, each of the one or more images from one of the one or more focal planes” is taught by Perz: an objective lens configured to collect light from a sample illuminated by the illumination assembly (“Motors 124 can be conventional servomotors associated with the motion control of microscope 110, such as for rotating the appropriately powered lens within the optical path of microscope 110, for adjusting focus, or for controlling an automated microscope stage (not shown). Light source 126 can be any suitable light source for appropriately illuminating the FOV of microscope 110, such that the creation of a digital image of that FOV is possible. Turret 128 can be a conventional motor-driven microscope turret, upon which is mounted a set of lenses of varying power that may be rotated into the optical path of microscope 110,” [0034], The microscope slide 220 contains a specimen to be viewed, such as a sample 230.
Sample 230 is representative of any target specimen, such as a tissue sample resulting from a needle biopsy, [0037], Microscope imaging system 100 can take a digital image of the entire microscope slide 220 with a low-magnification microscope objective 270 that has a large depth of view (DOV), [0040]) and generate one or more images of the sample along one or more focal planes on an image sensor array, each of the one or more images from one of the one or more focal planes (capturing a low-magnification image, setting the initial focal plane, capturing the image, [0014], “At 330, an initial focal plane can be set. For the focus point moved to at 320, the distance between objective 220 and sample 230 is varied, relative to each other, in order to adjust the position of focal plane 290 to an initial specified location along Z axis 280. At 335, an image is captured”, [0048]). The new limitation of perform, in response to identifying the attribute, a plurality of three-dimensional (3D) processes, the plurality of 3D processes performed with the processor comprising at least two of the following: processing at least on the initial image set and determining a plurality of focal planes for capturing the one or more subsequent images based at least on the attribute, capturing a plurality of images at a respective plurality of focal planes is taught by Tsujimoto: (imaging a specimen including a structure in various focal positions using a microscope apparatus, image processing, generation unit selects a part of the original images having focal positions included within a smaller depth range than a thickness of the specimen from the plurality of original images obtained from the specimen, generates the first image using the selected original images, [0011], Here, processing is performed to extract regions having a red to pink color gamut, using the fact that the cell is stained red to pink by the eosin. 
In the analysis image according to this embodiment, image blur is reduced by the focus stacking, and therefore edge extraction and subsequent contour extraction can be performed with a high degree of precision, [0108]) (A direction and amount of movement, and so on of the stage 202 are determined based on position information and thickness information (distance information) on the specimen obtained by measurement by the pre-measurement unit 217 and a instruction from the user, [0036], “The pre-measurement unit 217 is a unit for performing pre-measurement as preparation for calculation of position information of the specimen on the slide 206, information on distance to a desired focal position, and a parameter for adjusting the amount of light attributable to the thickness of the specimen. Acquisition of information by the pre-measurement unit 217 before main measurement makes it possible to perform efficient imaging. Further, designation of positions in which to start and terminate imaging (a focal position range) and an imaging interval (an interval between focal positions; also referred to as a Z interval) when obtaining images having different focal positions is also performed on the basis of the information generated by the pre-measurement unit 217”, [0046], In Step S702, the main control system 218 designates a Z direction imaging range on the basis of the Z direction distance information and thickness information of the slide, obtained in the pre-measurement. More specifically, an imaging start position (the cover glass lower surface, for example), an imaging end position (the slide glass upper surface, for example), and the imaging interval (the Z interval) are preferably designated, [0068]) [attribute = thickness, cell stain color].

U.S. Patent No. 11482021, claim 1:
A microscope comprising: an illumination assembly; an image capture device configured to collect light from a sample illuminated by the illumination assembly; and a processor configured to execute instructions which cause the microscope to: capture, using the image capture device, an initial image set of the sample; identify, by the processor in response to the initial image set, an attribute of the sample by performing analysis on the initial image set independently from a user, wherein the attribute is indicative of the sample extending beyond a single focal plane; determine, in response to identifying the attribute, a three-dimensional (3D) process by determining, using the attribute, that the sample extends beyond the single focal plane and selecting the 3D process, from a plurality of 3D processes, capable of capturing the sample with more than one focal plane; and generate, using the determined 3D process, an output image set comprising the more than one focal plane.

Current Application, claim 1:
A microscope comprising: an illumination assembly; an image capture device comprising an objective lens configured to collect light from a sample illuminated by the illumination assembly and generate one or more images of the sample along one or more focal planes on an image sensor array, each of the one or more images from one of the one or more focal planes; and a processor configured to execute instructions which cause the microscope to: capture, using the image capture device, an initial image set of the sample comprising the one or more images along the one or more focal planes; identify, by the processor in response to the initial image set, an attribute of the sample related to a three dimensional (3D) structure of the sample, by performing analysis on the initial image set independently from a user; perform, in response to identifying the attribute, a plurality of three-dimensional (3D) processes; the plurality of 3D processes performed with the processor comprising at least two of the following, processing at least on the initial image set, capturing one or more subsequent images of the sample using one or more illumination conditions of the illumination assembly and one or more image capture settings for image capture, determining a plurality of focal planes for capturing the one or more subsequent images based at least on the attribute, capturing a plurality of images at a respective plurality of focal planes, performing a 3D reconstruction of the sample based at least on a subset of images captured by the image capture device, performing a 2.5D reconstruction of the sample based at least on a subset of images captured by the image capture device in order to generate 3D data from the sample, performing focus stacking for the sample based at least on a subset of images captured by the image capture device, capturing one or more subsequent images of the sample using a plurality of illumination conditions, wherein the plurality of illumination conditions 
comprise at least one of an illumination angle, an illumination wavelength, and an illumination pattern, or capturing a second plurality of images, wherein a number of images in the second plurality of images is greater than a number of images in the initial image set; and generate, using the determined 3D process, an output image set comprising more than one focal plane.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 2, 4-6, 8-10, 13, 14, and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Perz (IDS: US 20100265323 A1) in view of Brown et al. (US 20150279012 A1) and further in view of Tsujimoto (US 20130016885 A1).
Regarding claim 1, Perz discloses a microscope (abstract) comprising: an illumination assembly (Light source 126 can be any suitable light source for appropriately illuminating the FOV of microscope 110, [0034]); an image capture device comprising an objective lens configured to collect light from a sample illuminated by the illumination assembly (“Motors 124 can be conventional servomotors associated with the motion control of microscope 110, such as for rotating the appropriately powered lens within the optical path of microscope 110, for adjusting focus, or for controlling an automated microscope stage (not shown). Light source 126 can be any suitable light source for appropriately illuminating the FOV of microscope 110, such that the creation of a digital image of that FOV is possible. Turret 128 can be a conventional motor-driven microscope turret, upon which is mounted a set of lenses of varying power that may be rotated into the optical path of microscope 110.”, [0034], The microscope slide 220 contains a specimen to be viewed, such as a sample 230. Sample 230 is representative of any target specimen, such as a tissue sample resulting from a needle biopsy, [0037], Microscope imaging system 100 can take a digital image of the entire microscope slide 220 with a low-magnification microscope objective 270 that has a large depth of view (DOV), [0040]) and generate one or more images of the sample along one or more focal planes on an image sensor array, each of the one or more images from one of the one or more focal planes (capturing a low-magnification image, setting the initial focal plane, capturing the image, [0014], “At 330, an initial focal plane can be set. For the focus point moved to at 320, the distance between objective 220 and sample 230 is varied, relative to each other, in order to adjust the position of focal plane 290 to an initial specified location along Z axis 280. 
At 335, an image is captured”, [0048]); and a processor configured to execute instructions which cause the microscope to: capture, using the image capture device, an initial image set of the sample comprising the one or more images along the one or more focal planes (capturing a low-magnification image, setting the initial focal plane, capturing the image, [0014], “At 330, an initial focal plane can be set. For the focus point moved to at 320, the distance between objective 220 and sample 230 is varied, relative to each other, in order to adjust the position of focal plane 290 to an initial specified location along Z axis 280. At 335, an image is captured”, [0048]); identify, by the processor in response to the initial image set, an attribute of the sample related to a three dimensional (3D) structure of the sample, by performing analysis on the initial image set independently from a user (“Choosing focus points on a specimen can include performing a silhouette scan, determining the number of specimen pieces, eliminating locations near cover slip edges, determining the number of focus points to assign, collecting a list of candidate focus points, ranking the candidate list, selecting the first focus point, trimming the candidate list, creating a distance array, selecting the next focus point, determining if the points are collinear, jittering the focus points, determining if there are a sufficient number of focus points (e.g., at least four), and selecting any remaining focus points”, [0014], analyzing an image of at least a portion of a scan region to find an area in the image representing a sample, determining a nature of the sample at a selected focus point location that falls in the area in the image, [0019], nature of the sample, the stain or die used, or even the portion of the sample that falls within the microscope's field of view, [0021], focus points likely to be found in a given test of a biological specimen, type of tissue and its preparation, such as 
the nature of any stain(s) used, [0043], the type of specimen being analyzed, the type of slide, [0047]); perform, in response to identifying the attribute, a three-dimensional (3D) process (The method of performing the focusing operation can include setting operating parameters, capturing a low-magnification image, choosing focus point locations, moving to a selected focus point, analyzing the image to best determine a focus technique, determining the Z position search pattern, setting the initial focal plane, capturing the image, storing the image, calculating focus power, determining whether additional images at different focal planes are needed, determining whether to move to another focus point, selecting peak focus power, censoring focus points, and fitting a focal plane, [0014], selecting an automated focusing process for use at the selected focus point location, from among multiple automated focusing processes, based on the determined nature of the sample at the selected focus point location, [0019], “Depending on the nature of the sample, the stain or die used, or even the portion of the sample that falls within the microscope's field of view, different focus techniques may better determine the optimal focal plane(s). The present systems and techniques can determine which focus technique is best suited for each position on the X,Y plane on which the microscope is focused. Thus, different automated focusing processes (including commonly known focus techniques) can be used at different X,Y locations depending on the nature of the sample at each location, and multiple {X,Y,Z} coordinates obtained using the different automated focusing processes can be used to form a focal surface (e.g., a focal plane) to govern focusing at other X-Y locations on the sample.” [0021], At 325, an image can be analyzed to best determine the focus technique for a selected focus point.
Microscope imaging system 100 can evaluate the image that contains the selected focus point (or acquire and evaluate a higher power image of the focus point at a best guess focus) and determine which focus technique to use to best determine the optimal focal plane 290 for the selected focus point, [0042], The system 100 can select from among the specified focus techniques based on the nature of the sample 230 at various selected X-Y focus points, [0043], Microscope imaging system 100 can determine the initial position on Z axis 280 for focal plane 290, as well as subsequent positions along Z axis 280 for focal plane 290 above and below the initial position, from which to capture digital images. This determination can be based on a selected focus technique, the type of specimen being analyzed, the type of slide, other test related factors, or a combination of these, [0047]); and generate, using the determined 3D process, an output image set comprising more than one focal plane (outputting a signal to cause focusing of the microscope using the selected automated focusing process, [0019], capturing and storing image, see Fig. 3). Perz does not explicitly disclose an image sensor array.
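Perz's final step (fitting a focal plane from multiple {X,Y,Z} coordinates so the fitted surface can govern focusing at other X-Y locations) amounts to a surface fit; a minimal least-squares sketch, with all function names assumed rather than taken from the reference:

```python
import numpy as np

def fit_focal_plane(points: np.ndarray) -> tuple:
    """Fit a plane z = a*x + b*y + c by least squares to (n, 3) focus points."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    A = np.column_stack([x, y, np.ones_like(x)])  # design matrix for [a, b, c]
    (a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)
    return float(a), float(b), float(c)

def focal_z(a: float, b: float, c: float, x: float, y: float) -> float:
    """Predicted in-focus Z position at stage location (x, y)."""
    return a * x + b * y + c
```

With at least four non-collinear focus points (as the reference suggests collecting), the fit is overdetermined and noise in any single focus measurement is averaged out.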
Perz does not explicitly disclose perform, in response to identifying the attribute, a plurality of three-dimensional (3D) processes, the plurality of 3D processes performed with the processor comprising at least two of the following, processing at least on the initial image set, capturing one or more subsequent images of the sample using one or more illumination conditions of the illumination assembly and one or more image capture settings for image capture, determining a plurality of focal planes for capturing the one or more subsequent images based at least on the attribute, capturing a plurality of images at a respective plurality of focal planes, performing a 3D reconstruction of the sample based at least on a subset of images captured by the image capture device, performing a 2.5D reconstruction of the sample based at least on a subset of images captured by the image capture device in order to generate 3D data from the sample, performing focus stacking for the sample based at least on a subset of images captured by the image capture device, capturing one or more subsequent images of the sample using a plurality of illumination conditions, wherein the plurality of illumination conditions comprise at least one of an illumination angle, an illumination wavelength, and an illumination pattern, or capturing a second plurality of images, wherein a number of images in the second plurality of images is greater than a number of images in the initial image set.

Brown et al. teach an illumination assembly (may use ultrasonic waves or triangulation of infrared light, [0028], imaging device 10 contains a beam splitter (not shown) that captures light from opposite sides of the lens and diverts light to autofocus sensors located separately from image sensor 50.
This generates two separate images which are compared for light intensity, [0029]); an image capture device comprising an objective lens configured to collect light from a sample illuminated by the illumination assembly (image data includes data required to calculate depth of field such as aperture diameter and focal length of optical lens 20, [0030], initially, light passes through lens 20, [0046]) and generate one or more images of the sample along one or more focal planes on an image sensor array (In one embodiment, image sensor 50 may be a charge-coupled device (CCD) sensor. In another embodiment, image sensor 50 may be a complementary metal-oxide semiconductor (CMOS) sensor or another type of sensor. In yet another embodiment, image sensor 50 could be a specialized sensor for medical imaging, [0024], In one embodiment, light passes through optical lens 20 and reaches image sensor 50, which contains an array of pixel sensors that are evenly distributed across image sensor 50. A pixel sensor may be comprised of a semiconductor material that absorbs light photons and generates electronic signals. In one embodiment, image sensor 50 may also contain autofocus pixel sensors. The autofocus pixel sensors may be an array of pixel sensors that are arranged in various patterns. In another embodiment, the autofocus pixel sensors may be contained on a sensor that is separate from image sensor 50, [0025]), each of the one or more images from one of the one or more focal planes (multiple source images captured at different focus distances, [0003], “Depth data includes the distance between each focus point of an image and image capturing device 10 at the time that the image was taken,” [0034], FIG. 2B depicts a display of UI 212 when depth of field program 70 is operating on image capturing device 10. UI 212 displays the first image received from image capture program 60. The image shown in UI 212 is the same image shown in UI 200 of FIG. 2A. 
Depth of field program 70 determines that focus points 213 (as indicated by a first mask) are outside the focal plane and are therefore out of focus. Depth of field program 70 determines that focus points 213 each represent subject matter that is in the background of the captured image, [0038], Focus stacking program 80 determines that focus points 220 and focus points 222 (as indicated by a second mask) each represent subject matter in the focal plane and are therefore in focus, [0039]); and a processor configured to execute instructions which cause the microscope to: capture, using the image capture device, an initial image set of the sample comprising the one or more images along the one or more focal planes (multiple source images captured at different focus distances, [0003], Depth data includes the distance between each focus point of an image and image capturing device 10 at the time that the image was taken, [0034], FIG. 2B depicts a display of UI 212 when depth of field program 70 is operating on image capturing device 10. UI 212 displays the first image received from image capture program 60. The image shown in UI 212 is the same image shown in UI 200 of FIG. 2A. Depth of field program 70 determines that focus points 213 (as indicated by a first mask) are outside the focal plane and are therefore out of focus. 
Depth of field program 70 determines that focus points 213 each represent subject matter that is in the background of the captured image, [0038], Focus stacking program 80 determines that focus points 220 and focus points 222 (as indicated by a second mask) each represent subject matter in the focal plane and are therefore in focus, [0039]); identify, by the processor in response to the initial image set, an attribute of the sample related to a three dimensional (3D) structure of the sample, by performing analysis on the initial image set independently from a user (focus stacking program 80 receives image data, including a depth map, for the captured image from depth of field program 70. Focus stacking program 80 uses the depth map to determine areas of the captured image that are already in focus, [0033], “In another embodiment, focus stacking program 80 automatically determines the foremost part of subject 210. For example, focus stacking program 80 determines that focus points 224 represent the foremost part of subject 210 based on distance values determined from the received depth map”, [0039]); determine, in response to identifying the attribute, a three-dimensional (3D) process (Focus stacking program 80 causes the determined number of images necessary for focus stacking to be captured. Focus stacking program 80 receives image data, including a depth map, for each captured image. Focus stacking program 80 determines depth data for each captured image from each respective depth map. Depth data includes the distance between each focus point of an image and image capturing device 10 at the time that the image was taken. Focus stacking program 80 compares the depth data for each captured image to the depth data for the set of captured images. Focus stacking program 80 determines focus points that are in focus in each captured image. Focus stacking program 80 selects focus points that are in focus in each captured image. 
Focus stacking program 80 creates a final image that includes selected in-focus focus points of each captured image, [0034]) [3D process = number of required images for focus stacking]; and generate, using the determined 3D process, an output image set comprising more than one focal plane (Focus stacking program 80 creates a final image that includes selected in-focus focus points of each captured image, [0034] The final image is a combination of the first image displayed in UI 200 of FIG. 2A and the second image displayed in UI 230 of FIG. 2D. The final image includes subject 210. The final image includes focus points 220 and focus points 222 of FIG. 2C and focus points 235 of FIG. 2F. The subject matter of the final image is in focus. In one embodiment, if the user has not selected focus points that are in focus for every section of the final image, focus stacking program 80 selects focus points from the set of captured images that are as close to being in the depth of field as possible, [0044]). Perz and Brown et al. are in the same art of automatic focusing systems (Perz, abstract; Brown et al., abstract). The combination of Brown et al. with Perz enables use of a sensor array. It would have been obvious at the time of filing to one of ordinary skill in the art to combine the array of Brown et al. with the invention of Perz as this was known at the time of filing, the combination would have predictable results, and as Brown et al. indicate, “Focus stacking enables a user to blend in-focus regions of subject matter depicted in a series of images, taken at varying focus distances, to create a final composite image that includes the in-focus regions of multiple images in the series of images. Focus stacking can be a time consuming post-processing technique that requires the subject to be motionless while a series of images are captured. 
Traditionally, focus stacking requires the use of focus rails or incremental manual focus, making focus stacking a technically challenging process. Embodiments of the present invention automate focus stacking image capture and post-processing” ([0016]), indicating the combination will yield an in-focus image without requiring user intervention, which will result in the benefits of improved image quality and user ease.

Perz and Brown et al. do not explicitly disclose perform, in response to identifying the attribute, a plurality of three-dimensional (3D) processes, the plurality of 3D processes performed with the processor comprising at least two of the following, processing at least on the initial image set, capturing one or more subsequent images of the sample using one or more illumination conditions of the illumination assembly and one or more image capture settings for image capture, determining a plurality of focal planes for capturing the one or more subsequent images based at least on the attribute, capturing a plurality of images at a respective plurality of focal planes, performing a 3D reconstruction of the sample based at least on a subset of images captured by the image capture device, performing a 2.5D reconstruction of the sample based at least on a subset of images captured by the image capture device in order to generate 3D data from the sample, performing focus stacking for the sample based at least on a subset of images captured by the image capture device, capturing one or more subsequent images of the sample using a plurality of illumination conditions, wherein the plurality of illumination conditions comprise at least one of an illumination angle, an illumination wavelength, and an illumination pattern, or capturing a second plurality of images, wherein a number of images in the second plurality of images is greater than a number of images in the initial image set.
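Brown's paragraph [0034] describes using depth data to determine the number of images necessary for focus stacking; a minimal sketch of one way such a determination could work (the tiling model and all names here are assumptions, not Brown's implementation):

```python
import math

def images_needed(nearest_mm: float, farthest_mm: float, dof_mm: float) -> int:
    """Number of exposures whose per-image depth of field, stepped in
    dof_mm increments, tiles the depth span [nearest_mm, farthest_mm]."""
    span = farthest_mm - nearest_mm
    return max(1, math.ceil(span / dof_mm))

# e.g. subject matter spanning 100-160 mm with a 20 mm depth of field
# per exposure would need three captures at stepped focus distances.
```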
Tsujimoto teaches a microscope (abstract, [0011]) comprising: an illumination assembly ([0035]); an image capture device comprising an objective lens configured to collect light from a sample illuminated by the illumination assembly and generate one or more images of the sample along one or more focal planes on an image sensor array, each of the one or more images from one of the one or more focal planes (The image-formation optical system 207 is a lens group for forming an optical image of the specimen in the preparation 206 on an imaging sensor 208, [0037], images at different focal planes, FIG. 3 is a conceptual diagram of focus stacking. The focus stacking processing will be schematically described with reference to FIG. 3. Images 501 to 507 are seven-layer images which are obtained by imaging seven times an object including a plurality of structures at three-dimensionally different spatial positions while sequentially changing the focal position in the optical axis direction (Z direction), [0052]); and a processor configured to execute instructions which cause the microscope to: capture, using the image capture device, an initial image set of the sample comprising the one or more images along the one or more focal planes (“An image 517 is an image obtained by cutting out respective regions of the structures 510 to 516 which are in focus in the images 501 to 507 and merging these regions. By merging the focused regions of the plurality of images as described above, a focus-stacked image which is focused in the entirety of the image can be obtained. This processing for generating an image having a deep depth of field by the digital image processing is referred to also as focus stacking. Further, a method of selecting and merging regions that are in focus and have a high contrast, as shown in FIG. 3, is referred to as a select and merge method. 
In this embodiment, an example in which focus stacking is performed using this select and merge method will be described”, [0054]); identify, by the processor in response to the initial image set, an attribute of the sample related to a three dimensional (3D) structure of the sample, by performing analysis on the initial image set independently from a user (thickness of the specimen from the plurality of original images obtained from the specimen, [0011], program causing a computer to perform a method comprising the steps of: acquiring a plurality of original images acquired by imaging a specimen including a structure in various focal positions using a microscope apparatus; generating, on the basis of the plurality of original images, a first image on which blurring of an image of the structure has been reduced in comparison with the original images; and acquiring information relating to the structure included in the first image by applying image analysis processing to the first image, wherein, in the image generation step, a part of the original images having focal positions included within a smaller depth range than a thickness of the specimen is selected from the plurality of original images obtained from the specimen, and the first image is generated using the selected original images, [0014], A direction and amount of movement, and so on of the stage 202 are determined based on position information and thickness information (distance information) on the specimen obtained by measurement by the pre-measurement unit 217 and a instruction from the user, [0036], The pre-measurement unit 217 is a unit for performing pre-measurement as preparation for calculation of position information of the specimen on the slide 206, information on distance to a desired focal position, and a parameter for adjusting the amount of light attributable to the thickness of the specimen. 
The pre-measurement unit 217 learns the position of the specimen on an XY plane from the obtained images, [0046], a feature of this system is that an image having an appropriate depth of field (or contrast) is generated automatically in accordance with the application (image analysis by a computer or visual observation by a user), [0064], depth range may be designated by the user, but is preferably determined automatically by the image processing apparatus 102 in accordance with the size of the analysis subject, [0090]) [attribute = thickness, cell stain color] perform, in response to identifying the attribute, a plurality of three-dimensional (3D) processes, the plurality of 3D processes performed with the processor comprising at least two of the following: processing at least on the initial image set (imaging a specimen including a structure in various focal positions using a microscope apparatus, image processing, generation unit selects a part of the original images having focal positions included within a smaller depth range than a thickness of the specimen from the plurality of original images obtained from the specimen, generates the first image using the selected original images, [0011], Here, processing is performed to extract regions having a red to pink color gamut, using the fact that the cell is stained red to pink by the eosin. 
In the analysis image according to this embodiment, image blur is reduced by the focus stacking, and therefore edge extraction and subsequent contour extraction can be performed with a high degree of precision, [0108]) determining a plurality of focal planes for capturing the one or more subsequent images based at least on the attribute, capturing a plurality of images at a respective plurality of focal planes (A direction and amount of movement, and so on of the stage 202 are determined based on position information and thickness information (distance information) on the specimen obtained by measurement by the pre-measurement unit 217 and a instruction from the user, [0036], “The pre-measurement unit 217 is a unit for performing pre-measurement as preparation for calculation of position information of the specimen on the slide 206, information on distance to a desired focal position, and a parameter for adjusting the amount of light attributable to the thickness of the specimen. Acquisition of information by the pre-measurement unit 217 before main measurement makes it possible to perform efficient imaging. Further, designation of positions in which to start and terminate imaging (a focal position range) and an imaging interval (an interval between focal positions; also referred to as a Z interval) when obtaining images having different focal positions is also performed on the basis of the information generated by the pre-measurement unit 217”, [0046], In Step S702, the main control system 218 designates a Z direction imaging range on the basis of the Z direction distance information and thickness information of the slide, obtained in the pre-measurement. More specifically, an imaging start position (the cover glass lower surface, for example), an imaging end position (the slide glass upper surface, for example), and the imaging interval (the Z interval) are preferably designated, [0068]) [attribute = thickness, cell stain color]. Perz and Brown et al. 
and Tsujimoto are in the same art of focal planes (Perz, [0014]; Brown et al., [0039]; Tsujimoto, [0065]). The combination of Tsujimoto with Perz and Brown et al. enables performing, in response to identifying the attribute, a plurality of three-dimensional (3D) processes. It would have been obvious at the time of filing to one of ordinary skill in the art to combine the performing of Tsujimoto with the invention of Perz and Brown et al. as this was known at the time of filing, the combination would have predictable results, and as Tsujimoto indicates “According to the method disclosed in Japanese Patent Application Publication No. 2005-037902, an image that is in focus as a whole and includes little blur can be obtained. However, although this type of deep-focus image is useful for rough observation of the specimen as a whole, it is not suitable for detailed observation of a part of the specimen or comprehension of a three-dimensional structure and a three-dimensional distribution of tissues, cells, and so on. The reason for this is that when focus stacking is performed, depth direction information is lost, and therefore a user cannot determine front-rear relationships between respective structures (cells, nuclei, and so on) in the image simply by viewing the image. Further, when structures originally existing in different depth direction positions are overlapped on the image at an identical contrast, it is difficult to separate and identify the structures not merely through visual observation but even through image analysis using a computer. 
The present invention has been designed in view of these problems, and an object thereof is to provide a technique for preserving depth direction information relating to a specimen so that the specimen can be observed using a digital image, and generating an image suitable for image analysis processing using a computer” ([0009]-[0010]) thereby providing a good technique for observing different sizes of specimens when the inventions are combined. Regarding claim 2, Perz, Brown et al., and Tsujimoto disclose the microscope of claim 1. Perz and Brown et al. further indicate the 3D process comprises the capturing of the one or more subsequent images of the sample using one or more illumination conditions of the illumination assembly (Brown et al., “For example, the aperture may be a ring or other fixture that holds an optical element in place, or it may be a diaphragm placed in the optical path to limit the amount of light that passes through the lens. The aperture may be adjusted to control the amount of light entering image capturing device 10,” [0020], “In one embodiment, focus stacking program 80 may determine an aperture value for the additional image(s) to be captured. 
For example, focus stacking program 80 determines an aperture values for each of the additional images utilizing the f-number that was used to capture the first image”, [0061]) and the one or more image capture settings for the image capture device (Perz, The method of performing the focusing operation can include setting operating parameters, capturing a low-magnification image, choosing focus point locations, moving to a selected focus point, analyzing the image to best determine a focus technique, determining the Z position search pattern, setting the initial focal plane, capturing the image, storing the image, calculating focus power, determining whether additional images at different focal planes are needed, determining whether to move to another focus point, selecting peak focus power, censoring focus points, and fitting a focal plane, [0014], selecting an automated focusing process for use at the selected focus point location, from among multiple automated focusing processes, based on the determined nature of the sample at the selected focus point location, [0019], “Depending on the nature of the sample, the stain or die used, or even the portion of the sample that falls within the microscope's field of view, different focus techniques may better determine the optimal focal plane(s). The present systems and techniques can determine which focus technique is best suited for each position on the X,Y plane on which the microscope is focused. Thus, different automated focusing processes (including commonly known focus techniques) can be used at different X,Y locations depending on the nature of the sample at each location, and multiple {X,Y,Z} coordinates obtained using the different automated focusing processes can be used to form a focal surface (e.g., a focal plane) to govern focusing at other X-Y locations on the sample.” [0021], “At 325, an image can be analyzed to best determine the focus technique for a selected focus point. 
Microscope imaging system 100 can evaluate the image that contains the selected focus point (or acquire and evaluate a higher power image of the focus point at a best guess focus) and determine which focus technique to use to best determine the optimal focal plane 290 for the selected focus point”, [0042], The system 100 can select from among the specified focus techniques based on the nature of the sample 230 at various selected X-Y focus points., [0043], “Microscope imaging system 100 can determine the initial position on Z axis 280 for focal plane 290, as well as subsequent positions along Z axis 280 for focal plane 290 above and below the initial position, from which to capture digital images. This determination can be based on a selected focus technique, the type of specimen being analyzed, the type of slide, other test related factors, or a combination of these,” [0047]; Brown et al., “Focus stacking program 80 causes the determined number of images necessary for focus stacking to be captured. Focus stacking program 80 receives image data, including a depth map, for each captured image. Focus stacking program 80 determines depth data for each captured image from each respective depth map. Depth data includes the distance between each focus point of an image and image capturing device 10 at the time that the image was taken. Focus stacking program 80 compares the depth data for each captured image to the depth data for the set of captured images. Focus stacking program 80 determines focus points that are in focus in each captured image. Focus stacking program 80 selects focus points that are in focus in each captured image. Focus stacking program 80 creates a final image that includes selected in-focus focus points of each captured image,” [0034]) [image capture settings = focus technique, focus power, different automated focusing processes, number of required images for focus stacking] [Adjustment to aperture is a way to change illumination conditions]. 
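The depth-map workflow quoted from Brown et al. above (receive a depth map per captured image, compare depth data against the set, select in-focus points, and composite) admits a similarly small sketch. The names and the scalar tolerance parameter are illustrative assumptions; the reference describes comparing depth data to find in-focus points but gives no formula, and for brevity a single depth map stands in for the per-image maps of a static subject.

```python
import numpy as np

def depth_guided_stack(images, depth_map, focus_dists, tolerance):
    """Composite a focal series using a scene depth map: each pixel is
    taken from the image whose focus distance is closest to that pixel's
    depth; pixels farther than `tolerance` from every focus distance are
    flagged as never in focus."""
    images = np.asarray(images, dtype=float)            # (n, H, W)
    depth_map = np.asarray(depth_map, dtype=float)      # (H, W)
    focus_dists = np.asarray(focus_dists, dtype=float)  # (n,)
    # |pixel depth - focus distance| for every image in the series.
    err = np.abs(depth_map[None] - focus_dists[:, None, None])
    best = np.argmin(err, axis=0)                       # best source per pixel
    out = np.take_along_axis(images, best[None], axis=0)[0]
    in_focus = np.take_along_axis(err, best[None], axis=0)[0] <= tolerance
    return out, in_focus
```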
Regarding claim 4, Perz, Brown et al., and Tsujimoto disclose the microscope of claim 2. Perz and Brown et al. further indicate the 3D process comprises the determining of the plurality of focal planes for capturing the one or more subsequent images based at least on the attribute (Perz, determining whether additional images at different focal planes are needed, [0014], “Depending on the nature of the sample, the stain or die used, or even the portion of the sample that falls within the microscope's field of view, different focus techniques may better determine the optimal focal plane(s)”, [0021]; Brown et al., Focus stacking program 80 causes the determined number of images necessary for focus stacking to be captured. Focus stacking program 80 receives image data, including a depth map, for each captured image. Focus stacking program 80 determines depth data for each captured image from each respective depth map. Depth data includes the distance between each focus point of an image and image capturing device 10 at the time that the image was taken. Focus stacking program 80 compares the depth data for each captured image to the depth data for the set of captured images. Focus stacking program 80 determines focus points that are in focus in each captured image. Focus stacking program 80 selects focus points that are in focus in each captured image. Focus stacking program 80 creates a final image that includes selected in-focus focus points of each captured image., [0034]), and the one or more illumination conditions and the one or more image capture settings correspond to the plurality of focal planes (Brown et al., In one embodiment, focus stacking program 80 may determine an aperture value for the additional image(s) to be captured. 
For example, focus stacking program 80 determines an aperture values for each of the additional images utilizing the f-number that was used to capture the first image, [0061], four additional images at specific aperture values should be captured by image capturing device 10 for focus stacking, [0062]) [Adjustment to aperture is a way to change illumination conditions]. Regarding claim 5, Perz, Brown et al., and Tsujimoto disclose the microscope of claim 2. Perz and Brown et al. further indicate one or more subsequent images are taken at one or more locations of the sample determined based at least on the attribute (Perz, choosing focus point locations, moving to a selected focus point, determining whether additional images at different focal planes are needed, determining whether to move to another focus point, [0014], selecting an automated focusing process for use at the selected focus point location, [0019], “Depending on the nature of the sample, the stain or die used, or even the portion of the sample that falls within the microscope's field of view, different focus techniques may better determine the optimal focal plane(s). The present systems and techniques can determine which focus technique is best suited for each position on the X,Y plane on which the microscope is focused. Thus, different automated focusing processes (including commonly known focus techniques) can be used at different X,Y locations depending on the nature of the sample at each location, and multiple {X,Y,Z} coordinates obtained using the different automated focusing processes can be used to form a focal surface (e.g., a focal plane) to govern focusing at other X-Y locations on the sample,” [0021], At 325, an image can be analyzed to best determine the focus technique for a selected focus point. 
Microscope imaging system 100 can evaluate the image that contains the selected focus point (or acquire and evaluate a higher power image of the focus point at a best guess focus) and determine which focus technique to use to best determine the optimal focal plane 290 for the selected focus point, [0042], The system 100 can select from among the specified focus techniques based on the nature of the sample 230 at various selected X-Y focus points, [0043], Microscope imaging system 100 can determine the initial position on Z axis 280 for focal plane 290, as well as subsequent positions along Z axis 280

Prosecution Timeline

Sep 12, 2022: Application Filed
Jun 11, 2024: Non-Final Rejection — §103, §DP
Oct 15, 2024: Response Filed
Nov 14, 2024: Final Rejection — §103, §DP
Feb 14, 2025: Applicant Interview (Telephonic)
Feb 14, 2025: Examiner Interview Summary
Mar 18, 2025: Request for Continued Examination
Mar 19, 2025: Response after Non-Final Action
Apr 05, 2025: Non-Final Rejection — §103, §DP
Aug 06, 2025: Applicant Interview (Telephonic)
Aug 06, 2025: Examiner Interview Summary
Aug 28, 2025: Response Filed
Oct 08, 2025: Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602775: INTERPOLATION OF MEDICAL IMAGES
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12602793: Systems and Methods for Predicting Object Location Within Images and for Analyzing the Images in the Predicted Location for Object Tracking
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12602949: SYSTEM AND METHOD FOR DETECTING HUMAN PRESENCE BASED ON DEPTH SENSING AND INERTIAL MEASUREMENT
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12597261: OBJECT MOVEMENT BEHAVIOR LEARNING
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12597244: METHOD AND DEVICE FOR IMPROVING OBJECT RECOGNITION RATE OF SELF-DRIVING CAR
Granted Apr 07, 2026 (2y 5m to grant)
Based on this examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 76%
With Interview: 98% (+21.6%)
Median Time to Grant: 3y 1m
PTA Risk: High
Based on 863 resolved cases by this examiner. Grant probability derived from career allow rate.
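The headline probability is simple arithmetic over the examiner's resolved cases; a minimal sketch (the function name is illustrative, not part of any published methodology):

```python
def allow_rate_pct(granted, resolved):
    """Career allow rate as a rounded percentage of resolved cases."""
    return round(100 * granted / resolved)
```

With the counts shown above, allow_rate_pct(658, 863) reproduces the 76% figure; the +21.6% interview lift is the difference between the unrounded with-interview and without-interview allow rates over the examiner's resolved cases that included an interview.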
