Prosecution Insights
Last updated: April 19, 2026
Application No. 18/534,636

IMAGING SYSTEM, METHOD USED IN IMAGING SYSTEM, AND STORAGE MEDIUM STORING COMPUTER PROGRAM USED IN IMAGING SYSTEM

Non-Final OA: §101, §103, §112

Filed: Dec 10, 2023
Examiner: DULANEY, KATHLEEN YUAN
Art Unit: 2666
Tech Center: 2600 — Communications
Assignee: Panasonic Intellectual Property Management Co., Ltd.
OA Round: 1 (Non-Final)

Grant Probability: 77% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 2m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 77% (504 granted / 653 resolved; +15.2% vs TC avg; above average)
Interview Lift: +24.0% among resolved cases with interview
Typical Timeline: 3y 2m average prosecution; 32 applications currently pending
Career History: 685 total applications across all art units

Statute-Specific Performance

§101: 21.2% (-18.8% vs TC avg)
§103: 33.1% (-6.9% vs TC avg)
§102: 16.3% (-23.7% vs TC avg)
§112: 26.4% (-13.6% vs TC avg)

Tech Center averages are estimates. Based on career data from 653 resolved cases.
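The per-statute figures and their "vs TC avg" deltas determine the Tech Center baselines directly. A minimal sketch (illustrative only; it assumes the dashboard computes delta as examiner rate minus TC average, which the page does not state):

```python
# Examiner per-statute rates (%) and reported deltas vs Tech Center average,
# taken from the Statute-Specific Performance section above.
stats = {
    "101": (21.2, -18.8),
    "103": (33.1, -6.9),
    "102": (16.3, -23.7),
    "112": (26.4, -13.6),
}

# Assumption: delta = examiner_rate - tc_average, so tc_average = rate - delta.
implied_tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
print(implied_tc_avg)
```

Under that assumption all four statutes imply the same 40.0% Tech Center baseline, which suggests the deltas are all measured against a single TC-wide estimate rather than per-statute averages.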

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Election/Restrictions

Claims 3, 4 and 22 are withdrawn from further consideration pursuant to 37 CFR 1.142(b) as being drawn to nonelected groups, there being no allowable generic or linking claim. Election was made without traverse in the reply filed on 12/16/2025. Therefore, the restriction is made final herein.

Specification

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

It is noted that the storage medium of claim 21 is a nonvolatile computer-readable medium, and thus, claim 21 is statutory. Furthermore, claims 1, 2 and 5-21 are considered eligible subject matter because the claims are not considered an abstract idea. Even if the claims could be considered an abstract idea, the claims contain a specific arrangement of device parts which are not generic computer parts, including the filter array and sensor arrangement, and further provide a practical application of detecting a subject in a scene.

Claim Objections

Claims 1 and 17-21 are objected to because of the following informalities: the claims contain the language "the basis of", where the examiner believes the applicant intends to claim "based on". It is not entirely clear which part, or whether the whole, of what follows constitutes "the basis of". Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C.
112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1, 2 and 5-19 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 1 recites the limitation "the substance" in line 9. It is unclear which substance the applicant is referring to, because the applicant previously claims "at least one substance", which may be multiple substances.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 2, 5-8, 10-13, 15, 20 and 21 are rejected under 35 U.S.C. 103(a) as being unpatentable over U.S. Patent Application Publication No. 20170089761 (McQuilkin et al) in view of U.S. Patent Application Publication No. 20210012508 (Porcel Magnusson).
Regarding claim 20, McQuilkin et al discloses a method to be performed by a computer, the method comprising: acquiring first image data (fig. 1, item 32) obtained by imaging a target scene (fig. 1, target scene of target surface 109) by an image sensor (fig. 1, item 111), the image sensor imaging light passing through a filter array (fig. 1, item 106) that includes filters having different transmission spectra (page 10, paragraph 162) and generating image data (fig. 1, item 32); acquiring spectral pattern data, i.e. the selected spectral wavelengths of fig. 1, item 23, generated on the basis of subject data that includes spectral information of at least one type of subject, i.e. the target spectrum of the target subject (fig. 1, item 22), the spectral pattern data being generated by predicting a spectral pattern detected when the subject is imaged by the image sensor (fig. 1, item 23 is generated based on what the target pattern is predicted as being of the target spectrum of fig. 1, item 22); and generating output data indicating whether the subject is present in the target scene by comparing the spectral pattern data with the first image data (fig. 1, items 35 and 37).

McQuilkin et al does not disclose expressly that the spectral data includes luminance data. Porcel Magnusson discloses spectral data expressed as luminance data (page 11, paragraph 147). McQuilkin et al and Porcel Magnusson are combinable because they are from the same field of endeavor, i.e. multispectral data in target detection. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to express the data in luminance. The suggestion/motivation for doing so would have been to provide a more robust system by using a commonly used property. Therefore, it would have been obvious to combine the method of McQuilkin et al with the use of luminance of Porcel Magnusson to obtain the invention as specified in claim 20.
Claim 21 is rejected for the same reasons as claim 1; the arguments presented above for claim 1 are equally applicable to claim 21. Claim 21 distinguishes from claim 1 only in that claim 21 is a storage medium storing a program for causing a computer to perform the process of claim 1, the storage medium being a nonvolatile computer-readable medium, with the extra step of outputting the output data. McQuilkin et al further teaches this feature, i.e. a nonvolatile computer-readable medium for storing a program and carrying out the process (page 15, paragraph 208), and outputting the output data (fig. 1, output of item 37).

Regarding claim 1, McQuilkin et al discloses an imaging system (fig. 1) comprising: a filter array (fig. 1, item 106) that includes filters having different transmission spectra (page 10, paragraph 162); an image sensor (fig. 1, item 112) that images light passing through the filter array, i.e. the light that passes from the target surface (fig. 1, item 109) through the filter card (fig. 1, item 106) and to the camera array (fig. 1, item 111), and generates image data (fig. 1, item 32); and a processing circuit, i.e. a computer (page 3, paragraph 21), wherein the processing circuit acquires spectral pattern data, i.e. the selected spectral wavelengths of fig. 1, item 23 and data of item 25, generated on the basis of subject data that includes spectral information of at least one substance, i.e. the target spectrum of the target substance (fig. 1, item 22), the spectral pattern data being generated by predicting a spectral pattern detected when the substance is imaged by the image sensor (fig. 1, items 23 and 25 are generated based on what the target pattern is predicted as being of the target spectrum of fig. 1, item 22); acquires first image data obtained by imaging a target scene by the image sensor (fig. 1, item 32); and generates output data (fig. 1, item 37) regarding whether the substance is present in the target scene (fig. 1, item 35) by comparing the spectral pattern data with the first image data (fig. 1, item 35 compares fig. 1, items 34 and 25). Porcel Magnusson discloses spectral data expressed as luminance data (page 11, paragraph 147).

Regarding claim 2, McQuilkin et al discloses a storage device that stores the subject data, i.e. that which stores the data of fig. 1, item 21 in electronic memory (page 15, paragraph 208), and a table showing a spatial distribution of the transmission spectra of the filter array, i.e. the specific algorithms for the spectral characteristic of a filter card (page 31, paragraph 383), wherein the processing circuit acquires the subject data and the table from the storage device (page 31, paragraph 383, which stores algorithm data that is acquired in fig. 1, item 22) and generates the spectral pattern data on the basis of the subject data and the table (fig. 1, items 23, 25 based on items 21, 22). Porcel Magnusson discloses spectral data expressed as luminance data (page 11, paragraph 147).

Regarding claim 5, McQuilkin et al discloses that the spectral information of the at least one substance includes spectral information of substances (fig. 1, item 21, target substances; page 9, paragraph 150), and the output data is data regarding whether each of the substances is present in the target scene (fig. 1, item 122).

Regarding claim 6, McQuilkin et al discloses that the processing circuit determines whether the substance is present in the target scene by comparing the spectral pattern data with the first image data in a reference region that includes two or more pixels, i.e. the reference regions that are compared and detected in fig. 1, items 120 and 122, including all the pixels, which is more than two pixels. The two or more pixels can also be interpreted as the corresponding pixels that are detected for each image for each spectral range selected in fig. 1, item 24, which are compared to find the target presence of fig. 1, item 35.
Regarding claim 7, McQuilkin et al discloses that the number of the two or more pixels included in the reference region changes in accordance with the number of substances, because depending on the target substances as disclosed in page 9, paragraph 150, different ranges/spectra would be specified for imaging (page 9, paragraph 151), and thus the number of pixels would be different since the number of images would be different (page 9, paragraph 157).

Regarding claim 8, McQuilkin et al discloses that a target wavelength range for which light separation is performed by the imaging system includes n bands, i.e. the number of bands specified by fig. 1, item 24; the two or more pixels included in the reference region include n pixels including an evaluation pixel and a pixel near the evaluation pixel, i.e. the pixel of one band and a pixel of another nearby band of the images of fig. 1, item 32; not plural substances but one substance is present in the reference region, when one substance is being detected as disclosed in page 9, paragraph 150; the filter array includes n filters corresponding to the n respective pixels/images included in the reference region (page 9, paragraph 157), the n filters having different transmission spectra (fig. 1, item 157); and each of the n filters has a transmittance that is non-zero for all of the n bands, since the filters must have transmittance in order for data to be collected, as disclosed in page 12, paragraph 183.

Regarding claim 10, Porcel Magnusson discloses that the subject data/registered data of the substance further includes shape information of the at least one substance (page 3, paragraph 29).

Regarding claim 11, McQuilkin et al discloses an output device, i.e. a display of a smartphone (page 2, paragraph 11), wherein the processing circuit makes the output device output a result of classification indicated by the output data (fig. 1, item 122; page 13, paragraph 188).
Regarding claim 12, McQuilkin et al discloses that the output device displays an image in which a label by type is added to a part in which the substance is present in the target scene, i.e. the color labeling by the type of area in which the substance is present (fig. 1, items 120, 122).

Regarding claim 13, Porcel Magnusson discloses that the output device displays at least one of a graph of a spectrum of the substance, i.e. spectral data (page 4, paragraph 43), or an image showing explanatory text about the substance, i.e. tag data, metadata, ToA, etc. (page 4, paragraph 43).

Regarding claim 15, McQuilkin et al discloses that each of the filters has two or more local maxima in a target wavelength range for which light separation is performed by the imaging system, when interpreting each filter as a pair of the parts on the filter card (fig. 1, item 106), with each part having its own local maximum and thus the pair together having a pair of local maxima.

Claims 9 and 14 are rejected under 35 U.S.C. 103(a) as being unpatentable over McQuilkin et al in view of Porcel Magnusson, as applied to claims 1 and 11 above, and further in view of U.S. Patent Application Publication No. 20060013454 (Flewelling et al).

Regarding claim 9, McQuilkin et al (as modified by Porcel Magnusson) discloses all of the claimed elements as set forth above. McQuilkin et al discloses the output data includes information about a presence of the substance at each pixel of the first image data (fig. 1, item 120) and/or information about a presence of the substance at pixels, in the first image data, corresponding to an observation target (fig. 1, item 120). McQuilkin et al (as modified by Porcel Magnusson) does not disclose expressly that the output data is regarding a probability of a presence of the substance. Flewelling et al discloses output data regarding a probability of a presence of the substance (page 10, paragraph 186).
McQuilkin et al (as modified by Porcel Magnusson) and Flewelling et al are combinable because they are from the same field of endeavor, i.e. region classification and labeling. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to output a probability. The suggestion/motivation for doing so would have been to provide a more robust, user-friendly system so a user can know the strength of a match. Therefore, it would have been obvious to combine the system of McQuilkin et al (as modified by Porcel Magnusson) with the output of probability of Flewelling et al to obtain the invention as specified in claim 9.

Regarding claim 14, McQuilkin et al discloses that the output device displays an image in which a label is added to an observation target (fig. 1, item 120, labeled with color) in the target scene (fig. 1, item 122). Flewelling et al discloses, for an observation target for which a probability of presence of the substance falls below a specific value, the label indicating that classification of a type of the observation target is not possible, i.e. the label of an indeterminate region or low probability of high grade disease (page 81, paragraphs 1133-1134; page 82, paragraph 1136).

Claim 16 is rejected under 35 U.S.C. 103(a) as being unpatentable over McQuilkin et al in view of Porcel Magnusson, as applied to claims 1 and 11 above, and further in view of U.S. Patent Application Publication No. 20080123097 (Muhammed et al).

Regarding claim 16, McQuilkin et al (as modified by Porcel Magnusson) discloses all of the claimed elements as set forth above and incorporated herein by reference. McQuilkin et al further discloses the filters include a group of filters of four or more types of filters (fig. 1, item 106, seven 104s). McQuilkin et al (as modified by Porcel Magnusson) does not disclose expressly that the group of filters includes a type of filter having a passband that overlaps a part of a passband of another type of filter.
Muhammed et al discloses a group of filters including a type of filter having a passband that overlaps a part of a passband of another type of filter (pages 1-2, paragraph 14). McQuilkin et al (as modified by Porcel Magnusson) and Muhammed et al are combinable because they are from the same field of endeavor, i.e. multispectral imaging. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to overlap bands in the filter. The suggestion/motivation for doing so would have been to provide a more cost-friendly system. Therefore, it would have been obvious to combine the system of McQuilkin et al (as modified by Porcel Magnusson) with the overlapping filters of Muhammed et al to obtain the invention as specified in claim 16.

Claims 17 and 18 are rejected under 35 U.S.C. 103(a) as being unpatentable over McQuilkin et al in view of Porcel Magnusson, as applied to claims 1 and 11 above, and further in view of U.S. Patent Application Publication No. 20160138975 (Ando et al).

Regarding claim 17, McQuilkin et al (as modified by Porcel Magnusson) discloses all of the claimed elements as set forth above and incorporated herein by reference. McQuilkin et al further discloses the first image data is image data coded by the filter array (fig. 2, item 32 from item 104), and the processing circuit generates hyperspectral image data of the target scene on the basis of the image data of the target scene (fig. 1, items 33, 34).
McQuilkin et al (as modified by Porcel Magnusson) does not disclose expressly that the image data from the filter array is compressed image data coded by the filter array, and generating hyperspectral image data from the compressed image data. Ando et al discloses the image data from the filter array is compressed image data coded by the filter array (page 1, paragraph 16) and generating hyperspectral image data from the compressed image data, i.e. reconstructed image data of a plurality of wavelengths (page 1, paragraph 16). McQuilkin et al (as modified by Porcel Magnusson) and Ando et al are combinable because they are from the same field of endeavor, i.e. capturing images at multiple spectra. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to use compressed image data. The suggestion/motivation for doing so would have been to provide a more efficient system that uses less data. Therefore, it would have been obvious to combine the system of McQuilkin et al (as modified by Porcel Magnusson) with the compressed image data of Ando et al to obtain the invention as specified in claim 17.

Regarding claim 18, McQuilkin et al further discloses the first image data is image data coded by the filter array (fig. 2, item 32 from item 104), and the processing circuit makes the output device display a GUI for a user to give an instruction for generating hyperspectral image data of the target scene, by allowing the user to manually select algorithms to begin the imaging process and thus generate the hyperspectral image data (page 11, paragraph 171), and generates, in response to the instruction given by the user, the hyperspectral image data of the target scene on the basis of the image data of the target scene (fig. 1, items 33, 34).
Ando et al discloses the image data from the filter array is compressed image data coded by the filter array (page 1, paragraph 16) and generating hyperspectral image data from the compressed image data, i.e. reconstructed image data of a plurality of wavelengths (page 1, paragraph 16).

Claim 19 is rejected under 35 U.S.C. 103(a) as being unpatentable over McQuilkin et al in view of Porcel Magnusson and Ando et al, as applied to claim 18 above, and further in view of U.S. Patent Application Publication No. 20210381893 (Balas).

Regarding claim 19, McQuilkin et al (as modified by Porcel Magnusson and Ando et al) discloses all of the claimed elements as set forth above and incorporated herein by reference. McQuilkin et al further discloses the first image data is image data coded by the filter array (fig. 2, item 32 from item 104); the processing circuit makes the output device display a GUI for a user to give instructions by allowing the user to manually select algorithms (page 11, paragraph 171); and the processing circuit further generates output data for displaying detection results (fig. 1, item 122) and also generates hyperspectral data (fig. 1, items 33, 34), the spectral images being hyperspectral images (fig. 1, item ). Ando et al discloses the image data from the filter array is compressed image data coded by the filter array (page 1, paragraph 16), generating hyperspectral image data on the basis of the compressed image data, i.e. reconstructed image data of a plurality of wavelengths (page 1, paragraph 16), and further that the spectral data gathered is hyperspectral images (page 3, paragraph 64).

McQuilkin et al (as modified by Porcel Magnusson and Ando et al) does not disclose expressly that the output device displays a GUI for a user to give an instruction for switching between a first mode for generating the output data, i.e. displaying the processed results corresponding to McQuilkin et al, and a second mode for generating spectral image data of the target scene, i.e. the hyperspectral image of Ando et al; generates the output data in response to an instruction for the first mode given by the user; and generates the spectral image data of the target scene in response to an instruction for the second mode given by the user.

Balas discloses an output device that displays a GUI (page 15, paragraph 137) for a user to give an instruction for switching between a first mode for generating the output data, i.e. displaying the mapped results such as the dehazed images or the spectral maps (page 15, paragraph 137), and a second mode for generating spectral image data of the target scene, i.e. the spectral images (page 14, paragraph 137); generates the output data in response to an instruction for the first mode given by the user; and generates the spectral image data of the target scene in response to an instruction for the second mode given by the user, because the display output is selected by the user from the GUI (page 14, paragraph 137).

McQuilkin et al (as modified by Porcel Magnusson and Ando et al) and Balas are combinable because they are from the same field of endeavor, i.e. displaying data. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to allow a GUI to switch between output modes. The suggestion/motivation for doing so would have been to provide a more user-friendly system that allows the user to see all the data. Therefore, it would have been obvious to combine the system of McQuilkin et al (as modified by Porcel Magnusson and Ando et al) with the display mode GUI of Balas to obtain the invention as specified in claim 19.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KATHLEEN YUAN DULANEY, whose telephone number is (571) 272-2902.
The examiner can normally be reached M1: 9am-5pm, Th1: 9am-1pm, F1: 9am-3pm, M2: 9am-5pm, T2: 9am-5pm, Th2: 9am-5pm, F2: 9am-5pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Emily Terrell, can be reached at (571) 270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KATHLEEN Y DULANEY/
Primary Examiner, Art Unit 2666
1/8/2026

Prosecution Timeline

Dec 10, 2023: Application Filed
Jan 11, 2026: Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602801: IMAGE PROCESSING CIRCUITRY AND IMAGE PROCESSING METHOD FOR DEPTH ESTIMATION IN A TIME-OF-FLIGHT SYSTEM (granted Apr 14, 2026; 2y 5m to grant)
Patent 12602930: METHOD AND SYSTEM FOR CONTINUOUSLY TRACKING HUMANS IN AN AREA (granted Apr 14, 2026; 2y 5m to grant)
Patent 12593019: INFORMATION PROCESSING APPARATUS USING PARALLAX IN IMAGES CAPTURED FROM A PLURALITY OF DIRECTIONS, METHOD AND STORAGE MEDIUM (granted Mar 31, 2026; 2y 5m to grant)
Patent 12586242: Method, System, And Computer Program For Recognizing Position And Attitude Of Object Imaged By Camera (granted Mar 24, 2026; 2y 5m to grant)
Patent 12586165: APPARATUS AND METHOD FOR RECONSTRUCTING IMAGE USING MOTION DEBLURRING (granted Mar 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 77%
With Interview: 99% (+24.0%)
Median Time to Grant: 3y 2m
PTA Risk: Low

Based on 653 resolved cases by this examiner. Grant probability derived from career allow rate.
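How the headline figures relate can be sketched in a few lines. This is an illustration under stated assumptions, not the dashboard's actual formula: it assumes grant probability is simply granted/resolved, and that the with-interview figure adds the reported +24.0% lift with a cap at 99%.

```python
# Career allow rate from the counts shown above: 504 granted of 653 resolved.
granted, resolved = 504, 653
allow_rate = granted / resolved              # ~0.772, displayed as 77%

# Assumption: the with-interview figure adds the reported lift, capped at 99%
# (77.2% + 24.0% exceeds 100%, so a cap is the simplest explanation for "99%").
interview_lift = 0.24
with_interview = min(allow_rate + interview_lift, 0.99)

print(round(allow_rate * 100), round(with_interview * 100))
```

The uncapped sum (about 101%) is why the cap assumption is needed to reproduce the displayed 99%; the page itself does not document how the interview adjustment is computed.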
