Prosecution Insights
Last updated: April 19, 2026
Application No. 18/540,962

SIGNAL PROCESSING APPARATUS AND SIGNAL PROCESSING METHOD

Non-Final OA — §101, §103
Filed: Dec 15, 2023
Examiner: VO, QUANG N
Art Unit: 2683
Tech Center: 2600 — Communications
Assignee: Panasonic Intellectual Property Management Co., Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 72% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 9m
Grant Probability With Interview: 80%

Examiner Intelligence

Career Allow Rate: 72% — above average (439 granted / 612 resolved; +9.7% vs TC avg)
Interview Lift: +8.3% (moderate lift among resolved cases with interview)
Avg Prosecution: 2y 9m (typical timeline)
Total Applications: 635 across all art units (23 currently pending)

Statute-Specific Performance

§101: 13.4% (-26.6% vs TC avg)
§103: 52.8% (+12.8% vs TC avg)
§102: 22.1% (-17.9% vs TC avg)
§112: 7.6% (-32.4% vs TC avg)

Tech Center averages are estimates. Based on career data from 612 resolved cases.
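The per-statute deltas above are all measured against the same Tech Center baseline. A minimal sketch (using only the figures shown on this page) that recovers the implied TC average from each rate/delta pair:

```python
# Statute-specific rates for this examiner (percent) and the reported
# deltas vs. the Tech Center average, as displayed above.
examiner_rate = {"101": 13.4, "103": 52.8, "102": 22.1, "112": 7.6}
delta_vs_tc = {"101": -26.6, "103": 12.8, "102": -17.9, "112": -32.4}

# examiner_rate = tc_avg + delta, so tc_avg = examiner_rate - delta
implied_tc_avg = {s: round(examiner_rate[s] - delta_vs_tc[s], 1)
                  for s in examiner_rate}
print(implied_tc_avg)  # every statute implies the same 40.0% TC baseline
```

Each pair resolves to the same 40.0% baseline, consistent with the deltas being computed against a single Tech Center average estimate.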

Office Action

Rejections: §101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Election/Restrictions

Claims 12-15, 18, and 21 are withdrawn from further consideration pursuant to 37 CFR 1.142(b) as being drawn to a nonelected species II, there being no allowable generic or linking claim. Election was made without traverse in the reply filed on 02/17/2026.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 08/19/2022 was filed in compliance with the provisions of 37 CFR 1.97 and 1.98. Accordingly, the information disclosure statement is being considered by the examiner. Applicant has not provided an explanation of relevance of the cited document(s) discussed below. Reference US 2016/0138975 A1 is a general background reference covering: An imaging apparatus according to an aspect of the present disclosure includes a first coding element that includes regions arrayed two-dimensionally in an optical path of light incident from an object, and an image sensor. Each of the regions includes first and second regions. A wavelength distribution of an optical transmittance of the first region has a maximum in each of first and second wavelength bands, and a wavelength distribution of an optical transmittance of the second region has a maximum in each of third and fourth wavelength bands. At least one selected from the group of the first and second wavelength bands differs from the third and fourth wavelength bands. The image sensor acquires an image in which components of the first, second, third and fourth wavelength bands of the light that has passed through the first coding element are superimposed on one another. (see abstract).

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 20 and 22 are directed to a recording medium. The specification discloses a method, an integrated circuit, a computer program, a computer-readable storage medium, or any selective combination thereof. Examples of the computer-readable storage medium include a non-volatile storage medium (paragraph 9) and a signal (paragraph 66). In the state of the art, transitory signals are commonplace as a medium for storing computer instructions; thus, in the absence of any evidence to the contrary, given its broadest reasonable interpretation in light of the specification, the scope of the claimed "computer readable medium" covers both non-transitory media and transitory media (a signal per se). A transitory signal does not fall within the definition of a process, machine, manufacture, or composition of matter. The claims may be amended to cover only statutory embodiments by adding the limitation "non-transitory".

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-11, 16, 17, 19, 20 and 22 are rejected under 35 U.S.C.
103 as being unpatentable over Nipe et al. (Nipe) (US 2018/0365820 A1) in view of Ishibashi (US 7,035,457 B2).

Regarding claim 1, Nipe discloses a signal processing method executed by a computer (e.g., FIG. 1 shows a block diagram of a system for hyperspectral image processing 100, paragraph 28), comprising: acquiring compressed image data including two-dimensional image information of a subject obtained by compressing hyperspectral information in a target wavelength region (e.g., The system may include memory and at least one processor to acquire a hyperspectral image of an object by an imaging device, the hyperspectral image of the object comprising a three-dimensional set of images of the object, each image in the set of images representing the object in a wavelength range of the electromagnetic spectrum, normalize the hyperspectral image of the object, select a region of interest in the hyperspectral image, paragraph 6); and acquiring reference spectrum data including information on one or more spectra associated with the subject (e.g., the region of interest comprising a subset of at least one image in the set of images, extract spectral features from the region of interest in the hyperspectral image, compare the spectral features from the region of interest with a plurality of images in a training set to determine particular characteristics of the object, and identify the object based on the spectral features, paragraph 6).

Nipe does not specifically disclose generating, from the compressed image data, pieces of two-dimensional image data corresponding to designated wavelength bands decided on a basis of the reference spectrum data.
Ishibashi discloses generating, from the compressed image data, pieces of two-dimensional image data corresponding to designated wavelength bands decided on a basis of the reference spectrum data (e.g., The present invention has been accomplished under these circumstances and has as an object providing a method by which a multispectral image reconstructed from a plurality of spectral images obtained by recording a subject in a wavelength range as divided into a plurality of bands can be compressed at an increased ratio but without causing visual deterioration and which thereby improves the efficiency of image data handling, paragraph 8). Therefore, it would have been obvious to one of ordinary skill in the art at the time of the invention to have modified Nipe to include generating, from the compressed image data, pieces of two-dimensional image data corresponding to designated wavelength bands decided on a basis of the reference spectrum data as taught by Ishibashi, in order to maintain a high quality video image.

Regarding claim 2, Nipe discloses wherein the one or more spectra are associated with one or more kinds of substances assumed to be contained in the subject (e.g., the system may determine characteristics of the object by comparing the region of interest in the hyperspectral image with the plurality of images in the training set, paragraph 6).
Regarding claim 3, Nipe discloses wherein each of the designated wavelength bands includes a peak wavelength of the spectrum associated with a corresponding one of the one or more kinds of substances (e.g., the hyperspectral image of the object comprising a three-dimensional set of images of the object, each image in the set of images representing the object in a wavelength range of the electromagnetic spectrum, normalize the hyperspectral image of the object, select a region of interest in the hyperspectral image, the region of interest comprising a subset of at least one image in the set of images, extract spectral features from the region of interest in the hyperspectral image, compare the spectral features from the region of interest with a plurality of images in a training set to determine particular characteristics of the object, paragraph 6).

Regarding claim 4, Nipe discloses wherein the reference spectrum data includes information on spectra associated with kinds of substances assumed to be contained in the subject (e.g., the hyperspectral image of the object comprising a three-dimensional set of images of the object, each image in the set of images representing the object in a wavelength range of the electromagnetic spectrum, normalize the hyperspectral image of the object, paragraph 6); and the designated wavelength bands include a first designated wavelength band having no overlapping between the spectra and a second designated wavelength band having overlapping between the spectra (e.g., One hyperspectral system may extract spectral values between 400 nanometers and 600 nanometers. A second hyperspectral system may extract spectral values between 400 nanometers and 1000 nanometers. The minimum viable set of features between the first hyperspectral system and the second hyperspectral system may be 400 nanometers to 600 nanometers. That may allow the computing device 110 to detect patterns between spectral values and a ripeness quality of an object such as an avocado. The output module 210 may further perform extraction of spectral data from the object, paragraph 66).

Regarding claim 5, Ishibashi discloses wherein the compressed image data is generated by using a filter array including kinds of optical filters that are different from each other in spectral transmittance and an image sensor (e.g., a variable filter 14 for dividing the recording wavelength range into a plurality of bands; a CCD camera 16 for taking a multiband image of the subject O; a multiband image data storage unit 18 for saving the captured image data temporarily; a multispectral image acquisition unit 20 for reconstructing a multispectral image from the multiband image by estimating the spectral reflectance distribution for each pixel; a multispectral image compressing unit 22 for compressing the image data for the multispectral image at a higher ratio while suffering limited visual deterioration; and a recording medium drive unit 24 for saving the compressed image data, paragraph 9); the signal processing method further comprises acquiring mask data reflecting a spatial distribution of the spectral transmittance; and the pieces of two-dimensional image data are generated on a basis of the compressed image data and the mask data (e.g., It is further preferable that the compressed image data for the multispectral image are expressed not only by the optimum principal component number of sets of the optimum principal component images and the optimum principal component vectors but also by tile image information having information about tile numbers of the tile images, a tile position and an image size of the tile images (mask data), paragraph 12).
Regarding claim 6, Ishibashi discloses wherein the mask data includes mask matrix information having elements according to a spatial distribution of transmittance of the filter array for each of unit bands included in the target wavelength region (e.g., The variable filter 14 is a bandpass filter with which the recording wavelength range can be divided into a variable number of bands, say, 16, 21, 41, 81, 201 bands or the like for the purpose of capturing a multiband image of the subject O. A useful example of the variable filter 14 is a liquid-crystal tunable filter, paragraph 11); and the signal processing method further comprises: generating synthesized mask information by synthesizing the mask matrix information corresponding to non-designated wavelength bands different from the designated wavelength bands in the target wavelength region (e.g., The CCD camera 16 is a device with which the subject O imaged by the light reflected from it to pass through the variable filter 14 so that it is segmented into a desired number of spectral wavelength bands is captured as black-and-white band images. The image-receiving plane of this camera has a planar array of CCDs (charge-coupled devices) as area sensors. For proper setting of the dynamic range for the lightness values of the image to be taken, the CCD camera 16 includes a mechanism for adjusting the white balance before taking the picture of the subject O, paragraph 12); and generating synthesized image data concerning the non-designated wavelength bands on a basis of the compressed image data and the synthesized mask information (e.g., The multiband image data storage unit 18 is a site for temporary storage and saving of the multiband image composed of a plurality of band images (which is synthesized mask) that have been captured in the recording wavelength range as divided into multiple bands and which have the white balance adjusted in association with the respective bands, paragraph 13). 
Regarding claim 7, Nipe discloses wherein the generating the pieces of two-dimensional image data includes generating and outputting the pieces of two-dimensional image data corresponding to the designated wavelength bands without generating, from the compressed image data, image data corresponding to a non-designated wavelength band different from the designated wavelength bands in the target wavelength region (e.g., the hyperspectral image of the object comprising a three-dimensional set of images of the object, each image in the set of images representing the object in a wavelength range of the electromagnetic spectrum, normalize the hyperspectral image of the object, select a region of interest in the hyperspectral image, the region of interest comprising a subset of at least one image in the set of images, extract spectral features from the region of interest in the hyperspectral image, compare the spectral features from the region of interest with a plurality of images in a training set to determine particular characteristics of the object, and identify the object based on the spectral features, paragraph 6).

Regarding claim 8, Nipe discloses wherein the designated wavelength bands are decided on a basis of an intensity of the one or more spectra indicated by the reference spectrum data or a differential value of the intensity (e.g., each image in the set of images representing the object in a wavelength range of the electromagnetic spectrum, normalizing, by the processor, the hyperspectral image of the object, selecting, by the processor, a region of interest in the hyperspectral image, the region of interest comprising a subset of at least one image in the set of images, paragraph 7).

Regarding claim 9, Nipe discloses wherein the reference spectrum data includes information on a fluorescence spectrum of one or more substances assumed to be contained in the subject (e.g., a hyperspectral image 105 of an object 104 from the imaging device 108. The hyperspectral image 105 of the object 104 may be a three-dimensional set of images of the object, each image in the set of images representing the object in a wavelength range of the electromagnetic spectrum, paragraph 77).

Regarding claim 10, Nipe discloses wherein the reference spectrum data includes information on a light absorption spectrum of one or more substances assumed to be contained in the subject (e.g., a hyperspectral image 105 of an object 104 from the imaging device 108. The hyperspectral image 105 of the object 104 may be a three-dimensional set of images of the object, each image in the set of images representing the object in a wavelength range of the electromagnetic spectrum, paragraph 77).

Regarding claim 11, Nipe discloses further comprising displaying, on a display, a graphical user interface for allowing a user to designate the one or more spectra or one or more kinds of substances associated with the one or more spectra, wherein the reference spectrum data is acquired in accordance with the designated one or more spectra or the designated one or more kinds of substances (e.g., the system for hyperspectral image processing 100 may include at least one imaging device 108. In one example, the imaging device 108 may be a chemical machine vision camera such as a hyperspectral camera that may include one or more optical sensors configured to detect electromagnetic energy that is incident on the sensor. Any number of various optical sensors may be used to obtain images in various spectral regions for use in analyzing properties of the one or more objects 104. As an example, the imaging device 104 may be configured to collect images in the 400-1000 nanometer (nm) wavelength region, which corresponds to visible and near-infrared light, paragraph 29).
Regarding claim 16, Nipe discloses a signal processing method executed by a computer, comprising: acquiring compressed image data including two-dimensional image information of a subject obtained by compressing hyperspectral information in a target wavelength region (e.g., The system may include memory and at least one processor to acquire a hyperspectral image of an object by an imaging device, the hyperspectral image of the object comprising a three-dimensional set of images of the object, each image in the set of images representing the object in a wavelength range of the electromagnetic spectrum, normalize the hyperspectral image of the object, select a region of interest in the hyperspectral image, paragraph 6); acquiring reference spectrum data including information on one or more spectra associated with the subject (e.g., the region of interest comprising a subset of at least one image in the set of images, extract spectral features from the region of interest in the hyperspectral image, compare the spectral features from the region of interest with a plurality of images in a training set to determine particular characteristics of the object, and identify the object based on the spectral features, paragraph 6).

Nipe does not specifically disclose displaying, on a display, a graphical user interface for designating a reconstruction condition for generating pieces of two-dimensional image data corresponding to designated wavelength bands from the compressed image data and an image based on the reference spectrum data.
Ishibashi discloses displaying, on a display, a graphical user interface for designating a reconstruction condition for generating pieces of two-dimensional image data corresponding to designated wavelength bands from the compressed image data and an image based on the reference spectrum data (e.g., a method for compressing a multispectral image obtained by using band images of a subject captured in a wavelength range divided into a plurality of bands, comprising the steps of performing logarithmic conversion of the multispectral image and segmentation of the multispectral image into a plurality of tile images to obtain logarithmically converted tile image data; performing principal component analysis on the logarithmically converted tile image data of respective tile images to obtain for each tile image a principal component number of sets of principal component vectors and principal component images based on the multispectral image; determining from the plurality of sets, for each tile image, an optimum principal component number of sets of optimum principal component vectors and corresponding optimum principal component images that optimally represent image information about the multispectral image; and expressing compressed image data for the multispectral image by means of at least the optimum principal component number of sets of optimum principal component images and optimum principal component vectors for each tile image, paragraph 10).

Therefore, it would have been obvious to one of ordinary skill in the art at the time of the invention to have modified Nipe to include displaying, on a display, a graphical user interface for designating a reconstruction condition for generating pieces of two-dimensional image data corresponding to designated wavelength bands from the compressed image data and an image based on the reference spectrum data as taught by Ishibashi.
It would have been obvious to one of ordinary skill in the art at the time of the invention to have modified Nipe by the teaching of Ishibashi for use in a particular application.

Regarding claim 17, Nipe discloses a signal processing apparatus comprising: a processor; and a memory in which a computer program executed by the processor is stored (e.g., The system may include memory and at least one processor to acquire a hyperspectral image of an object by an imaging device, the hyperspectral image of the object comprising a three-dimensional set of images of the object, each image in the set of images representing the object in a wavelength range of the electromagnetic spectrum, normalize the hyperspectral image of the object, paragraph 6). The remaining limitations of claim 17 are similar to the limitations of claim 1. Therefore, the remaining limitations of claim 17 are rejected as set forth above for claim 1.

Regarding claim 19, Nipe discloses a signal processing apparatus comprising: a processor; and a memory in which a computer program executed by the processor is stored (e.g., The system may include memory and at least one processor to acquire a hyperspectral image of an object by an imaging device, the hyperspectral image of the object comprising a three-dimensional set of images of the object, each image in the set of images representing the object in a wavelength range of the electromagnetic spectrum, normalize the hyperspectral image of the object, paragraph 6). The remaining limitations of claim 19 are similar to the limitations of claim 16. Therefore, the remaining limitations of claim 19 are rejected as set forth above for claim 16.

Regarding claim 20, claim 20 is a recording medium claim with limitations similar to those of claim 1. Therefore, claim 20 is rejected as set forth above for claim 1.

Regarding claim 22, claim 22 is a recording medium claim with limitations similar to those of claim 16.
Therefore, claim 22 is rejected as set forth above for claim 16.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to QUANG N VO, whose telephone number is (571) 270-1121. The examiner can normally be reached Monday-Friday, 7 AM-4 PM EST.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Abderrahim Merouan, can be reached at 571-270-5254. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/QUANG N VO/
Primary Examiner, Art Unit 2683

Prosecution Timeline

Dec 15, 2023
Application Filed
Mar 05, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592002 — COLOR CONVERSION SYSTEM, COLOR CONVERSION METHOD, AND INFORMATION PROCESSING APPARATUS (granted Mar 31, 2026; 2y 5m to grant)
Patent 12577842 — METHOD AND SYSTEM FOR MEASURING VOLUME OF A DRILL CORE SAMPLE (granted Mar 17, 2026; 2y 5m to grant)
Patent 12581023 — GREYSCALE IMAGES (granted Mar 17, 2026; 2y 5m to grant)
Patent 12572996 — FRACTIONALIZED TRANSFERS OF SENSOR DATA FOR STREAMING AND LATENCY-SENSITIVE APPLICATIONS (granted Mar 10, 2026; 2y 5m to grant)
Patent 12573172 — IMAGE OUTPUTTING DEVICE AND IMAGE OUTPUTTING METHOD (granted Mar 10, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 72%
With Interview: 80% (+8.3%)
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 612 resolved cases by this examiner. Grant probability derived from career allow rate.
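The headline projections above are internally consistent with the examiner's career figures. A minimal sketch of the arithmetic, assuming the displayed percentages are simple rounded ratios and the interview figure adds the reported +8.3-point lift:

```python
# Career allow rate from the examiner's resolved docket (figures from the page).
granted, resolved = 439, 612
allow_rate = granted / resolved          # about 0.717

# Interview lift is reported as +8.3 percentage points.
with_interview = allow_rate + 0.083

print(round(allow_rate * 100))       # 72  (displayed Grant Probability)
print(round(with_interview * 100))   # 80  (displayed With Interview)
```

This matches the note that grant probability is derived directly from the career allow rate rather than from application-specific factors.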
