Prosecution Insights
Last updated: April 19, 2026
Application No. 18/246,015

IMAGE GENERATION SYSTEM, MICROSCOPE SYSTEM, AND IMAGE GENERATION METHOD

Status: Final Rejection (§103)
Filed: Mar 20, 2023
Examiner: YAZBACK, MAHER
Art Unit: 2877
Tech Center: 2800 — Semiconductors & Electrical Systems
Assignee: Sony Group Corporation
OA Round: 2 (Final)

Predictions
Grant Probability: 74% (Favorable)
Expected OA Rounds: 3-4
Expected Time to Grant: 2y 10m
Grant Probability With Interview: 98%

Examiner Intelligence

Career Allow Rate: 74% (39 granted / 53 resolved; +5.6% vs TC avg) — above average
Interview Lift: +24.8% among resolved cases with interview — strong
Avg Prosecution: 2y 10m (typical timeline)
Currently Pending: 24
Total Applications: 77 (across all art units)

Statute-Specific Performance

§101: 4.9% (-35.1% vs TC avg)
§103: 58.2% (+18.2% vs TC avg)
§102: 18.1% (-21.9% vs TC avg)
§112: 17.2% (-22.8% vs TC avg)
Based on career data from 53 resolved cases; Tech Center averages are estimates.

Office Action (§103)

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendments/Arguments

Applicant’s amendments, see Page 12, filed 07/24/2025, with respect to claims 3, 8, 9 and 13-16 have been fully considered and are sufficient to overcome the interpretation under 35 USC 112(f) of the claim limitations “a connection unit” in claims 13-16, “a separation processing unit” in claims 3, 8-9 and 15, and “a designation unit”, “a generation unit”, and “a conversion unit” in claim 13. The interpretation of the claims for those limitations has been withdrawn. However, upon further consideration, the amendments are insufficient to overcome the interpretation of “an imaging apparatus” in claims 1 and 15-17 under 35 USC 112(f).

Applicant’s amendments, see Page 12, filed 07/24/2025, with respect to claim 13 under 35 USC 112(b) have been fully considered and are sufficient to overcome the rejection of the claim. The rejection of claim 13 has been withdrawn.

Applicant’s arguments, see Pages 12-15, filed 07/24/2025, with respect to the rejection(s) of claim(s) 1-17 under 35 USC 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Narusawa et al. (US 2011/0317937), Fereidouni et al. (US 2021/0199582 A1) and Frost et al. (US 2003/0086608 A1).

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “imaging apparatus” in claims 1 and 15-17.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-17 is/are rejected under 35 U.S.C. 103 as being unpatentable over Narusawa et al. (US 2011/0317937) in view of Fereidouni et al. (US 2021/0199582 A1) further in view of Frost et al. (US 2003/0086608 A1).

Regarding claim 1, Narusawa discloses an image generation system, comprising: a central processing unit (101 - CPU) (Fig. 1; [0041]) configured to: acquire, from an imaging apparatus (image pickup apparatus), a plurality of partial images (7, 8) of a plurality of regions (12, 13), wherein (Fig.
2-4; [0050]-[0054]; the image pickup apparatus captures base and connection images, e.g., partial images, and sends information relating to these images to the image input unit 1; use of the image pickup apparatus for capturing a plurality of other regions over a larger subject is implied by any image stitching application and would be obvious to one of ordinary skill in the art); the plurality of regions includes a first region (12) and a second region (13), and the first region overlaps the second region (Fig. 2-4; [0050]-[0054]); determine connection information (connection position information and boundary information) associated with a channel (Fig. 2, 11A-B; [0073] – where the channel is interpreted as the frequency of the luminance signal); and connect, based on the connection information, the plurality of partial images (Fig. 2, 11A-B; [0073]).

Narusawa does not disclose steps to determine a plurality of channels associated with the plurality of partial images, wherein the plurality of channels includes a reference channel and a set of channels, the set of channels is different from the reference channel, the reference channel is associated with the first set of partial images, and the set of channels is associated with the second set of partial images; and connect, based on connection information, each of the second set of partial images.

However, Fereidouni, in the same field of endeavor of fluorescent imaging and image processing, discloses a multispectral imaging system, which includes an imaging apparatus (118) configured to capture a plurality of partial images associated with a plurality of channels (Fig. 1A; [0026]), and connect the plurality of partial images from the plurality of channels to each other (Fig. 1A, 7 – steps 704, 706, 708; [0026]; [0049]; [0057], lines 9-22).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Narusawa with the teachings of Fereidouni, which allow for the multispectral analysis of a given subject and an efficient and practical way of producing extracted targeted component images (Fereidouni: [0030]).

Narusawa in view of Fereidouni does not explicitly disclose that the plurality of partial images includes a first set of partial images and a second set of partial images different from the first set of partial images; determine a plurality of channels associated with the plurality of partial images, wherein the plurality of channels includes a reference channel and a set of channels, the set of channels is different from the reference channel, the reference channel is associated with the first set of partial images, and the set of channels is associated with the second set of partial images; determine connection information associated with the reference channel; and connect, based on the connection information, each of the second set of partial images.

However, Frost, in the field of multispectral imaging and image processing, discloses an automated computation-based imaging apparatus and imaging processing method which comprises steps to align a plurality of partial images which includes a first set of partial images and a second set of partial images different from the first set of partial images (Fig.
3 – block 38, 4A-B; [0067]; [0075], last 6 lines; [0081]; [0083] – where the method of correcting mis-registration of images of an object for different channels, ensuring object boundaries, is interpreted as determining a plurality of channels associated with the plurality of partial images); determine a plurality of channels associated with the plurality of partial images, wherein the plurality of channels includes a reference channel (BF) and a set of channels (data channels), the set of channels is different from the reference channel, the reference channel is associated with the first set of partial images, and the set of channels is associated with the second set of partial images (Fig. 3 – block 38, 4A-B; [0067]; [0075], last 6 lines; [0081]; [0083]); determine connection information associated with the reference channel (Fig. 6; [0083]; [0113]; [0149] – where the ROI delineation and boundary generation steps involve determining connection information); and connect, based on the connection information, each of the second set of partial images ([0070] – implied by the output image files representing the objects properly aligned).

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Narusawa in view of Fereidouni with the method of Frost, which provides a method for identifying a region of interest (ROI) and equalizing and aligning the ROI of different wavelength channels with a reference channel. The motivation would be to correct the mis-registration of images of an object that results from characteristics like sensitivity, gain and signal-to-noise ratio unique to each channel (Frost: [0066]; [0067]).
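The core technique the claim 1 rejection maps — determining connection information once from a reference channel and reusing it to connect the partial images of every other channel — can be sketched as follows. This is an illustrative reconstruction for the reader, not code from Narusawa, Fereidouni, or Frost; the array layout, the sum-of-squared-differences overlap search, and all names are assumptions.

```python
import numpy as np

def connection_info(ref_a: np.ndarray, ref_b: np.ndarray, max_shift: int = 32) -> int:
    """Determine connection information on the reference channel only:
    the horizontal overlap width that best matches image A's right edge
    against image B's left edge (sum-of-squared-differences search)."""
    best_shift, best_err = 1, np.inf
    for shift in range(1, max_shift + 1):
        err = np.mean((ref_a[:, -shift:] - ref_b[:, :shift]) ** 2)
        if err < best_err:
            best_err, best_shift = err, shift
    return best_shift

def connect(img_a: np.ndarray, img_b: np.ndarray, overlap: int) -> np.ndarray:
    """Connect two partial images using previously determined overlap."""
    return np.concatenate([img_a[:, :-overlap], img_b], axis=1)

# Partial images of two overlapping regions, several channels each:
rng = np.random.default_rng(0)
scene = rng.random((64, 120, 4))            # H x W x channels "ground truth"
a, b = scene[:, :70], scene[:, 50:]         # two regions, 20-column overlap
overlap = connection_info(a[..., 0], b[..., 0])   # channel 0 plays the reference channel
stitched = np.stack(
    [connect(a[..., c], b[..., c], overlap) for c in range(scene.shape[-1])],
    axis=-1,
)
assert overlap == 20 and stitched.shape == scene.shape
```

The point of contention in the rejection is exactly this division of labor: the alignment is computed on one designated channel, then applied to the remaining channels rather than recomputed per channel.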
Regarding claim 2, Narusawa in view of Fereidouni and Frost discloses the image generation system according to claim 1, as outlined above, and further discloses wherein the CPU (118) is further configured to acquire the plurality of partial images associated with each of a plurality of wavelengths (Fereidouni: Fig. 1A, 7 – steps 704, 706, 708; [0026]; [0049]; [0057], lines 9-22).

Regarding claim 3, Narusawa in view of Fereidouni and Frost discloses the image generation system according to claim 2, as outlined above, and further discloses wherein the CPU is further configured to: perform a fluorescence separation process based on the plurality of partial images and a reference vector; and generate, based on the fluorescence separation process, a plurality of post-fluorescence separation images (Fereidouni: Fig. 7 – step 710; [0008]-[0009]; [0012], lines 1-3; [0057], last five lines; Pg. 5, right column – claims 3, 4, and 5 describe image processing methods including spectral-unmixing, spectral segmentation and processing methods of extracting targeted structural macromolecule-related tissue components from background elements in image data; such methods imply use of reference vectors for specific targeting of tissue components and a processing unit for executing instructions capable of producing fluorescent composite images for corresponding spectral bands).

Regarding claim 4, Narusawa in view of Fereidouni and Frost discloses the image generation system according to claim 1, as outlined above, and further discloses wherein each of the plurality of partial images (7, 8) is equal in size (XShot) (Narusawa: Fig. 4; [0055], lines 1-7 and last five lines).

Regarding claim 5, Narusawa in view of Fereidouni and Frost discloses the image generation system according to claim 3, as outlined above, and further discloses wherein the CPU is further configured to determine, based on the fluorescence separation process, a plurality of fluorescence channels (Fereidouni: Fig.
7 – steps 704, 706, 708; [0049]; [0052], lines 1-9; [0057], lines 9-22), the plurality of fluorescence channels includes a specific fluorescence channel and a first set of fluorescence channels, the specific fluorescence channel is different from the first set of fluorescence channels, the reference channel corresponds to the specific fluorescence channel of the plurality of fluorescence channels, and the set of channels corresponds to the first set of fluorescence channels (Frost: Fig. 3 – block 38, 4A-B; [0067]; [0075], last 6 lines; [0081]; [0083]).

Regarding claim 6, Narusawa in view of Fereidouni and Frost discloses the image generation system according to claim 3, as outlined above, and further discloses wherein the CPU is further configured to determine, based on the fluorescence separation process, a plurality of fluorescence channels and a plurality of wavelength channels (Fereidouni: Fig. 7 – steps 704, 706, 708; [0049]; [0052], lines 1-9; [0057], lines 9-22), the reference channel corresponds to a specific wavelength channel of the plurality of wavelength channels, and the set of channels corresponds to the plurality of fluorescence channels (Frost: Fig. 3 – block 38, 4A-B; [0067]; [0075], last 6 lines; [0081]; [0083]).

Regarding claim 7, Narusawa in view of Fereidouni and Frost discloses the image generation system according to claim 6, as outlined above, and further discloses wherein the plurality of wavelength channels includes a first set of wavelength channels, the specific wavelength channel (BF – reference channel) is different from the first set of wavelength channels, and the set of channels corresponds to the first set of wavelength channels (data channels) (Frost: Fig. 3 – block 38, 4A-B; [0067]; [0075], last 6 lines; [0081]; [0083]).
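Claims 3 and 5-9 turn on a "fluorescence separation process based on the plurality of partial images and a reference vector." One common concrete realization of such a process is linear spectral unmixing: each pixel's measured spectrum is modeled as a mixture of known per-fluorophore reference vectors and solved by least squares. The sketch below uses hypothetical reference spectra and shapes; nothing in it comes from the cited references or the application.

```python
import numpy as np

def fluorescence_separation(stack: np.ndarray, ref_vectors: np.ndarray) -> np.ndarray:
    """Linear spectral unmixing: solve stack ≈ abundances @ ref_vectors per pixel.
    stack: (H, W, n_wavelengths); ref_vectors: (n_fluorophores, n_wavelengths).
    Returns per-fluorophore abundance images, shape (H, W, n_fluorophores)."""
    h, w, n_wl = stack.shape
    pixels = stack.reshape(-1, n_wl)                       # (H*W, n_wavelengths)
    abundances, *_ = np.linalg.lstsq(ref_vectors.T, pixels.T, rcond=None)
    return abundances.T.reshape(h, w, -1)

# Hypothetical reference spectra for two fluorophores over 5 wavelength bands:
ref = np.array([[0.8, 0.6, 0.2, 0.1, 0.0],
                [0.0, 0.1, 0.3, 0.7, 0.9]])
rng = np.random.default_rng(1)
truth = rng.random((32, 32, 2))                            # true abundance images
measured = truth @ ref                                     # (32, 32, 5) mixed signal
unmixed = fluorescence_separation(measured, ref)
assert np.allclose(unmixed, truth, atol=1e-8)
```

Each output plane of `unmixed` would correspond to one "fluorescence channel" in the claims' sense, from which a specific channel can then be chosen as the reference for stitching.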
Regarding claim 8, Narusawa in view of Fereidouni and Frost discloses the image generation system according to claim 6, wherein the CPU is further configured to: perform the fluorescence separation process (implied by the image processing methods, including spectral-unmixing, spectral segmentation and processing methods of extracting targeted structural macromolecule-related tissue components from background elements in image data, disclosed by Fereidouni – see citation for the last limitation below) based on the plurality of partial images associated with the plurality of wavelength channels and a reference vector; and generate, based on the performed fluorescence separation process, the plurality of post-fluorescence separation images for each of the plurality of regions (Fereidouni: [0026]; [0049]; Fig. 7 – steps 704, 706, 708; [0052], lines 1-9; [0057], lines 9-22; Pg. 5, right column – claims 3, 4, and 5).

Regarding claim 9, Narusawa in view of Fereidouni and Frost discloses the image generation system according to claim 6, wherein the CPU is further configured to: perform the fluorescence separation process (implied by the image processing methods disclosed by Fereidouni, as outlined for claim 8) based on a connection image (8) and a reference vector, wherein the connection image includes the plurality of partial images (7, 8), the plurality of partial images is associated with the plurality of wavelength channels, and each of the plurality of partial images is connected based on an overlap region between the first region and the second region (12, 13) (Narusawa: Fig.
2-4; [0050]-[0054]); and generate, based on the performed fluorescence separation process, the plurality of post-fluorescence separation images of the connection image (Fereidouni: Fig. 7 – steps 704, 706, 708; [0052], lines 1-9; [0057], lines 9-22; Pg. 5, right column – claims 3, 4, and 5).

Regarding claim 10, Narusawa in view of Fereidouni and Frost discloses the image generation system according to claim 1, as outlined above, and further discloses wherein the CPU is further configured to designate, based on a user input, the reference channel from the plurality of channels (Narusawa: Fig. 2; [0078]). Narusawa does not disclose wherein the at least one channel is a channel designated from the plurality of channels by a user. However, Fereidouni further discloses wherein the at least one channel is a channel designated from the plurality of channels by a user (Fereidouni: [0057] – implied by the display system that facilitates toggling among the captured images, including extracted targeted component images, interpreted as images corresponding to designated channels from a plurality of channels). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Narusawa to include a display system which allows a user to effectively select at least one channel from a plurality of channels for imaging a subject. The modification provides an efficient way for a user to access image data of a subject across a wide spectral range.

Regarding claim 11, Narusawa in view of Fereidouni and Frost discloses the image generation system according to claim 1, as outlined above, and further discloses wherein the CPU is further configured to determine the reference channel (Frost: Fig.
3 – block 38, 4A-B; [0067]; [0075], last 6 lines; [0081]; [0083]) based on at least one of a spatial frequency of a luminance signal of a specific image or a variance value of the luminance signal of the specific image, where the specific image is associated with the plurality of channels (Narusawa: [0084]; [0085]).

Regarding claim 12, Narusawa in view of Fereidouni and Frost discloses the image generation system according to claim 1, as outlined above, and further discloses wherein the CPU is further configured to set the reference channel before the determination of the connection information (Fereidouni: [0057] – implied by the display system that facilitates toggling among the captured images, including extracted targeted component images, interpreted as images corresponding to designated channels from a plurality of channels; though Fereidouni does not explicitly disclose selecting one channel in advance, it remains obvious that the selection of a particular band(s) for imaging would be necessary in order to configure the measurement device for capturing the desired spectrum).
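Claim 11's criterion — determining the reference channel from the spatial frequency or variance of a luminance signal — admits a simple reading: pick the channel with the most image structure, since a high-contrast channel aligns most reliably. The sketch below implements only the variance branch of that reading; the criterion, names, and shapes are illustrative assumptions, not the applicant's or Narusawa's actual method.

```python
import numpy as np

def select_reference_channel(stack: np.ndarray) -> int:
    """Pick the reference channel as the one with the highest variance of
    its luminance signal (most texture -> most reliable for alignment).
    stack: (H, W, n_channels). A spatial-frequency criterion (e.g. energy
    of the FFT above DC) could be substituted on the same interface."""
    variances = stack.reshape(-1, stack.shape[-1]).var(axis=0)
    return int(np.argmax(variances))

rng = np.random.default_rng(2)
flat = np.full((16, 16), 0.5) + rng.normal(0, 0.01, (16, 16))   # low-contrast channel
textured = rng.random((16, 16))                                  # high-contrast channel
stack = np.stack([flat, textured], axis=-1)
assert select_reference_channel(stack) == 1
```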
Regarding claim 13, Narusawa in view of Fereidouni and Frost discloses the image generation system according to claim 1, as outlined above, and Narusawa further discloses wherein the CPU is further configured to: receive a user input to designate the reference channel from the plurality of channels, wherein the designation of the reference channel is prior to the determination of the connection information (Frost: [0083] – where selection [of the brightfield/reference channel] for deriving ROI delineation and the generation of object boundaries implies the claimed sequence); connect the plurality of partial images based on the determined connection information, wherein the plurality of partial images is associated with the reference channel and the connection information comprises connection position information and boundary information (Narusawa: Fig. 2, 11A-B; [0073]); generate a specific image that includes the connected plurality of partial images (Narusawa: Fig. 2, 11A-B; [0073]); convert the generated specific image to a visible image (Narusawa: Fig. 2; [0078]); and control presentation of the visible image to a user (Narusawa: Fig. 2; [0078]).

Regarding claim 14, Narusawa in view of Fereidouni and Frost discloses the image generation system according to claim 1, as outlined above, and further discloses wherein the CPU is further configured to generate the connection information (connection position information and boundary information) (Narusawa: Fig. 2, 11A-B; [0073]) based on a variance value of a luminance signal for each pixel column of each of two adjacent partial images of the plurality of partial images, the two adjacent partial images include a first partial image and a second partial image, and the first partial image overlaps with the second partial image (Narusawa: [0084]; [0085]).
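Claim 14 generates connection information "based on a variance value of a luminance signal for each pixel column" of two overlapping partial images. One plausible interpretation, sketched below, is to compute a per-column variance profile for each image and take the overlap width where the two profiles agree best. This is an assumption for illustration, not the claimed algorithm as actually specified.

```python
import numpy as np

def column_variance_profile(img: np.ndarray) -> np.ndarray:
    """Variance of the luminance signal for each pixel column."""
    return img.var(axis=0)

def overlap_from_profiles(img_a: np.ndarray, img_b: np.ndarray,
                          max_overlap: int = 40) -> int:
    """Find the overlap width whose column-variance profiles agree best
    between A's right edge and B's left edge."""
    va, vb = column_variance_profile(img_a), column_variance_profile(img_b)
    errs = [np.mean((va[-k:] - vb[:k]) ** 2) for k in range(1, max_overlap + 1)]
    return int(np.argmin(errs)) + 1

rng = np.random.default_rng(3)
scene = rng.random((48, 100))
img_a, img_b = scene[:, :60], scene[:, 45:]   # true overlap: 15 columns
assert overlap_from_profiles(img_a, img_b) == 15
```

The profile comparison only has to store one scalar per column rather than whole image strips, which is presumably why a variance-per-column signal is attractive for locating the connection position.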
Regarding claim 15, Narusawa discloses an image generation system comprising: an imaging apparatus (image pickup apparatus) configured to image a plurality of regions (12, 13) for each of a plurality of wavelengths, wherein the plurality of regions includes a first region (12) and a second region (13), and the first region overlaps the second region (Fig. 2-4; [0050]-[0054]); and a central processing unit (101 - CPU) (Fig. 1; [0041]) configured to: acquire, from the imaging apparatus, a plurality of partial images (7, 8) of the plurality of regions (12, 13).

Narusawa does not disclose steps to acquire, from the imaging apparatus, a plurality of partial images of the plurality of regions, wherein the plurality of partial images is associated with each of the plurality of wavelengths; perform a fluorescence separation process based on the plurality of partial images and a reference vector; generate, based on the fluorescence separation process, a plurality of fluorescence-separated partial images; determine, from the plurality of fluorescence-separated partial images, a first set of fluorescence-separated partial images and a second set of fluorescence-separated partial images different from the first set of fluorescence-separated partial images; superimpose the first set of fluorescence-separated partial images with the second set of fluorescence-separated partial images; determine a plurality of superimposed images based on the superimposition of the first set of fluorescence-separated partial images with the second set of fluorescence-separated partial images; perform an RGB conversion process on the plurality of superimposed images; determine, based on the RGB conversion process, connection information associated with the plurality of superimposed images; connect, based on the connection information, each of the plurality of superimposed images; and generate, based on the connection of the plurality of superimposed images, a specific image that includes a
subject.

However, Fereidouni, in the same field of endeavor of fluorescent imaging and image processing, discloses a multispectral imaging system, which includes an imaging apparatus (118) configured to acquire a plurality of partial images (Fig. 1A; [0026]) and connect a plurality of partial images from a plurality of channels constituting the plurality of respective partial images to each other (Fig. 1A, 7 – steps 704, 706, 708; [0026]; [0049]; [0057], lines 9-22); perform a fluorescence separation process based on the plurality of partial images and a reference vector; and generate, based on the fluorescence separation process, a plurality of post-fluorescence separation images (Fereidouni: Fig. 7 – step 710; [0008]-[0009]; [0012], lines 1-3; [0057], last five lines; Pg. 5, right column – claims 3, 4, and 5 describe image processing methods including spectral-unmixing, spectral segmentation and processing methods of extracting targeted structural macromolecule-related tissue components from background elements in image data; such methods imply use of reference vectors for specific targeting of tissue components and a processing unit for executing instructions capable of producing fluorescent composite images for corresponding spectral bands).

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Narusawa with the teachings of Fereidouni, which allow for the multispectral analysis of a given subject and an efficient and practical way of producing extracted targeted component images (Fereidouni: [0030]).
Narusawa in view of Fereidouni does not explicitly disclose steps to determine, from the plurality of fluorescence-separated partial images, a first set of fluorescence-separated partial images and a second set of fluorescence-separated partial images different from the first set of fluorescence-separated partial images; superimpose the first set of fluorescence-separated partial images with the second set of fluorescence-separated partial images; determine a plurality of superimposed images based on the superimposition of the first set of fluorescence-separated partial images with the second set of fluorescence-separated partial images; perform an RGB conversion process on the plurality of superimposed images; determine, based on the RGB conversion process, connection information associated with the plurality of superimposed images; connect, based on the connection information, each of the plurality of superimposed images; and generate, based on the connection of the plurality of superimposed images, a specific image that includes a subject.

However, Frost, in the field of multispectral imaging and image processing, discloses an automated computation-based imaging apparatus and imaging processing method which comprises steps to determine, from the plurality of fluorescence-separated partial images, a first set of fluorescence-separated partial images (BF – reference channel) and a second set of fluorescence-separated partial images different from the first set of fluorescence-separated partial images (Fig. 3 – block 38, 4A-B; [0067]; [0075], last 6 lines; [0081]; [0083]); superimpose the first set of fluorescence-separated partial images with the second set of fluorescence-separated partial images (Fig.
4A-B; [0067]; [0075], last 6 lines; [0081]; [0083]); determine a plurality of superimposed images based on the superimposition of the first set of fluorescence-separated partial images with the second set of fluorescence-separated partial images (see Fig. 4A-B – where determining a plurality of superimposed images is interpreted as aligning the data channel images with the reference channel); perform an RGB conversion process on the plurality of superimposed images (Fig. 13; [0027]; [0066]; [0100] – where the equalization process involves the modification of the sensitivity and gain of each wavelength channel in the imaging instrument, including a grayscale transformation, which is understood to include a kind of RGB conversion); determine, based on the RGB conversion process, connection information associated with the plurality of superimposed images ([0066]); connect, based on the connection information, each of the plurality of superimposed images ([0070]); and generate, based on the connection of the plurality of superimposed images, a specific image that includes a subject ([0070]).

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Narusawa in view of Fereidouni with the method of Frost, which provides a method for identifying a region of interest (ROI) and equalizing and aligning the ROI of different wavelength channels with a reference channel. The motivation would be to correct the mis-registration of images of an object that results from characteristics like sensitivity, gain and signal-to-noise ratio unique to each channel (Frost: [0066]; [0067]).

Regarding claim 16, Narusawa discloses a microscope system, comprising: a central processing unit (101 - CPU) (Fig. 1; [0041]) configured to: acquire, from an imaging apparatus (image pickup apparatus), a plurality of partial images (7, 8) of a plurality of regions (12, 13) (Fig.
2-4; [0050]-[0054]; the image pickup apparatus captures base and connection images, e.g., partial images, and sends information relating to these images to the image input unit 1; use of the image pickup apparatus for capturing a plurality of other regions over a larger subject is implied by any image stitching application and would be obvious to one of ordinary skill in the art); wherein the plurality of regions includes a first region (12) and a second region (13), and the first region overlaps the second region (Fig. 2-4; [0050]-[0054]); determine connection information (connection position information and boundary information) associated with a channel (Fig. 2, 11A-B; [0073] – where the channel is interpreted as the frequency of the luminance signal); and connect, based on the connection information, the plurality of partial images (Fig. 2, 11A-B; [0073]).

Narusawa does not disclose steps to determine a plurality of channels associated with the plurality of partial images, wherein the plurality of channels includes a reference channel and a set of channels, the set of channels is different from the reference channel, the reference channel is associated with the first set of partial images, and the set of channels is associated with the second set of partial images; and connect, based on connection information, each of the second set of partial images.

However, Fereidouni, in the same field of endeavor of fluorescent imaging and image processing, discloses a multispectral imaging system, which includes an imaging apparatus (118) configured to capture a plurality of partial images (Fig. 1A; [0026]), and connect a plurality of partial images from a plurality of channels constituting the plurality of respective partial images to each other (Fig. 1A, 7 – steps 704, 706, 708; [0026]; [0049]; [0057], lines 9-22).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Narusawa with the teachings of Fereidouni, which allow for the multispectral analysis of a given subject and an efficient and practical way of producing an extracted targeted component image (Fereidouni: [0030]). Narusawa in view of Fereidouni does not explicitly disclose the plurality of partial images includes a first set of partial images and a second set of partial images different from the first set of partial images; steps to determine a plurality of channels associated with the plurality of partial images, wherein the plurality of channels includes a reference channel and a set of channels, the set of channels is different from the reference channel, the reference channel is associated with the first set of partial images, and the set of channels is associated with the second set of partial images; determine connection information associated with the reference channel; and connect, based on the connection information, each of the second set of partial images. However, Frost, in the field of multispectral imaging and image processing, discloses an automated computation-based imaging apparatus and image processing method which comprises steps to align a plurality of partial images which includes a first set of partial images and a second set of partial images different from the first set of partial images (Fig. 
3 – block 38, 4A-B; [0067]; [0075], last 6 lines; [0081]; [0083] – where the method of correcting mis-registration of images of an object for different channels, ensuring object boundaries, is interpreted as determining a plurality of channels associated with the plurality of partial images); determine a plurality of channels associated with the plurality of partial images, wherein the plurality of channels includes a reference channel (BF) and a set of channels (data channels), the set of channels is different from the reference channel, the reference channel is associated with the first set of partial images, and the set of channels is associated with the second set of partial images (Fig. 3 – block 38, 4A-B; [0067]; [0075], last 6 lines; [0081]; [0083]); determine connection information associated with the reference channel (Fig. 6; [0083]; [0113]; [0149] – where the ROI delineation and boundary generation steps involve determining connection information); connect, based on the connection information, each of the second set of partial images ([0070] – implied by the output image files representing the objects properly aligned). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Narusawa in view of Fereidouni with the method of Frost, which provides a method for identifying a region of interest (ROI) and equalizing and aligning the ROI of different wavelength channels with a reference channel. The motivation would be to correct the mis-registration of images of an object that results from characteristics, such as sensitivity, gain, and signal-to-noise ratio, unique to each channel (Frost: [0066]; [0067]). Regarding claim 17, Narusawa discloses an image generation method, comprising in an image generation system: steps for acquiring, from an imaging apparatus (image pickup apparatus), a plurality of partial images (7, 8) of a plurality of regions (12, 13) (Fig. 
2-4; [0050]-[0054]; the image pickup apparatus captures base and connection images, e.g., partial images, and sends information relating to these images to the image input unit 1; use of the image pickup apparatus for capturing a plurality of other regions over a larger subject is implied by any image stitching application and would be obvious to one of ordinary skill in the art); wherein the plurality of regions includes a first region (12) and a second region (13), and the first region overlaps the second region (Fig. 2-4; [0050]-[0054]); determining connection information (connection position information and boundary information) associated with a channel (Fig. 2, 11A-B; [0073] – where the channel is interpreted as the frequency of the luminance signal); and connecting, based on the connection information, the plurality of partial images (Fig. 2, 11A-B; [0073]). Narusawa does not disclose steps to determine a plurality of channels associated with the plurality of partial images, wherein the plurality of channels includes a reference channel and a set of channels, the set of channels is different from the reference channel, the reference channel is associated with the first set of partial images, and the set of channels is associated with the second set of partial images; and connect, based on connection information, each of the second set of partial images. However, Fereidouni, in the same field of endeavor of fluorescent imaging and image processing, discloses a multispectral imaging system, which includes an imaging apparatus (118) configured to capture a plurality of partial images (Fig. 1A; [0026]), and connect a plurality of partial images from a plurality of channels constituting the plurality of respective partial images to each other (Fig. 1A, 7 – steps 704, 706, 708; [0026]; [0049]; [0057], lines 9-22). 
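The Frost mapping relied on in this rejection, correcting the mis-registration of each data channel against a reference channel, can likewise be sketched. Phase correlation is used here only as one standard registration technique; the cited references do not necessarily use it, and the helper names are hypothetical.

```python
import numpy as np

def estimate_shift(reference, channel):
    """Estimate the (row, col) translation that aligns `channel` with
    `reference`, using FFT-based phase correlation."""
    cross = np.fft.fft2(reference) * np.conj(np.fft.fft2(channel))
    cross /= np.abs(cross) + 1e-12            # keep phase, discard magnitude
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrap-around peak positions to signed shifts.
    return tuple(p if p <= s // 2 else p - s
                 for p, s in zip(peak, corr.shape))

def align_to_reference(reference, data_channels):
    """Register each data channel to the reference channel by the
    estimated translation (a circular shift, for simplicity)."""
    aligned = []
    for ch in data_channels:
        dy, dx = estimate_shift(reference, ch)
        aligned.append(np.roll(ch, shift=(dy, dx), axis=(0, 1)))
    return aligned
```

The circular shift in `align_to_reference` is a simplification; a production registration step would crop or pad at the borders and could refine the estimate to sub-pixel precision.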
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Narusawa with the teachings of Fereidouni, which allow for the multispectral analysis of a given subject and an efficient and practical way of producing an extracted targeted component image (Fereidouni: [0030]). Narusawa in view of Fereidouni does not explicitly disclose the plurality of partial images includes a first set of partial images and a second set of partial images different from the first set of partial images; steps to determine a plurality of channels associated with the plurality of partial images, wherein the plurality of channels includes a reference channel and a set of channels, the set of channels is different from the reference channel, the reference channel is associated with the first set of partial images, and the set of channels is associated with the second set of partial images; determine connection information associated with the reference channel; and connect, based on the connection information, each of the second set of partial images. However, Frost, in the field of multispectral imaging and image processing, discloses an automated computation-based imaging apparatus and image processing method which comprises steps to align a plurality of partial images which includes a first set of partial images and a second set of partial images different from the first set of partial images (Fig. 
3 – block 38, 4A-B; [0067]; [0075], last 6 lines; [0081]; [0083] – where the method of correcting mis-registration of images of an object for different channels, ensuring object boundaries, is interpreted as determining a plurality of channels associated with the plurality of partial images); determine a plurality of channels associated with the plurality of partial images, wherein the plurality of channels includes a reference channel (BF) and a set of channels (data channels), the set of channels is different from the reference channel, the reference channel is associated with the first set of partial images, and the set of channels is associated with the second set of partial images (Fig. 3 – block 38, 4A-B; [0067]; [0075], last 6 lines; [0081]; [0083]); determine connection information associated with the reference channel (Fig. 6; [0083]; [0113]; [0149] – where the ROI delineation and boundary generation steps involve determining connection information); connect, based on the connection information, each of the second set of partial images ([0070] – implied by the output image files representing the objects properly aligned). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Narusawa in view of Fereidouni with the method of Frost, which provides a method for identifying a region of interest (ROI) and equalizing and aligning the ROI of different wavelength channels with a reference channel. The motivation would be to correct the mis-registration of images of an object that results from characteristics, such as sensitivity, gain, and signal-to-noise ratio, unique to each channel (Frost: [0066]; [0067]). Conclusion Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). 
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to MAHER YAZBACK whose telephone number is (703)756-1456. The examiner can normally be reached Monday - Friday 8:30 am - 5:30 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michelle Iacoletti can be reached at (571)270-5789. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. 
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /MAHER YAZBACK/Examiner, Art Unit 2877 /MICHELLE M IACOLETTI/Supervisory Patent Examiner, Art Unit 2877

Prosecution Timeline

Mar 20, 2023
Application Filed
Mar 27, 2025
Non-Final Rejection — §103
Jul 24, 2025
Response Filed
Nov 19, 2025
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12601588
DEVICE AND METHOD FOR DETERMINING THE THREE-DIMENSIONAL GEOMETRY OF AN INDIVIDUAL OBJECT
2y 5m to grant Granted Apr 14, 2026
Patent 12601632
Auto-focus for Spectrometers
2y 5m to grant Granted Apr 14, 2026
Patent 12591061
OBJECT RECOGNITION SYSTEM AND OBJECT RECOGNITION METHOD
2y 5m to grant Granted Mar 31, 2026
Patent 12584786
DETECTING VIBRATION OF A CABLE OF AN INFORMATION HANDLING SYSTEM
2y 5m to grant Granted Mar 24, 2026
Patent 12560754
Wavelength Reference Having Repeating Spectral Features and Unique Spectral Features
2y 5m to grant Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
74%
Grant Probability
98%
With Interview (+24.8%)
2y 10m
Median Time to Grant
Moderate
PTA Risk
Based on 53 resolved cases by this examiner. Grant probability derived from career allow rate.
