DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/17/2025 has been entered.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-10, 15, and 21-25 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Regarding Claim 1, the claim recites “identifying a shared modulation in the aligned first and second light intensity signals” and
“correlating the shared modulation to a cardiac parameter,” which amounts to an abstract idea (mental process).
This judicial exception is not integrated into a practical application because:
- The claims fail to outline an improvement to the technical field.
- The claims fail to apply the judicial exception to effect a particular treatment.
- The claims fail to apply the judicial exception with a particular machine.
- The claims fail to effect a transformation or reduction of a particular article to a different state or thing.
Next, the claim as a whole is analyzed to determine whether any element, or combination of elements, integrates the judicial exception into a practical application.
For this part of the 101 analysis, the following additional limitations are considered: “projecting a light feature onto a surface of the patient in the ROI;”
“acquiring an image stream of the ROI over time, the image stream comprising reflections of the projected light feature”
“measuring, from the image stream, a first reflected light intensity from the reflected light feature at a first location in the ROI,”
“measuring, from the image stream, a second reflected light intensity from the reflected light feature at a second location in the ROI different from the first location, wherein both the first and second reflected light comprises an amount or brightness of reflected light independent of color change”
“providing a first light intensity signal comprising measurements of the first reflected light intensity over time;”
“providing a second light intensity signal comprising measurements of the second reflected light intensity over time;”
“aligning a phase of the first light intensity signal and the second light intensity signal;”
“displaying the cardiac parameter on a display for monitoring of the patient.”
The additional elements are insufficient to amount to significantly more than the judicial exception because they merely generally link the use of the judicial exception to a particular technological environment.
Moreover, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because they pertain merely to insignificant extra-solution data gathering activities.
None of these limitations, considered as an ordered combination, provides eligibility because the claim, taken as a whole, does not amount to significantly more than the underlying abstract idea of determining a cardiac parameter of the patient by analyzing reflected light intensity information from a region of interest, and does not purport to improve the functioning of the signal processing or to improve any other technology or technical field. Use of generic signal processing does not amount to significantly more than the abstract idea itself. Dependent claims 2-10 and 15 also do not add significantly more to the exception, as they merely add details to the mental steps, add details to the extra-solution data gathering steps, add general field-of-use components to facilitate the extra-solution data gathering, and add mental steps.
Regarding Claim 21, the claim recites “identifying peaks in the combined light intensity signal” and
“deriving a heart rate of the patient from a timing of the peaks,” which amounts to an abstract idea (mental process).
This judicial exception is not integrated into a practical application because:
- The claims fail to outline an improvement to the technical field.
- The claims fail to apply the judicial exception to effect a particular treatment.
- The claims fail to apply the judicial exception with a particular machine.
- The claims fail to effect a transformation or reduction of a particular article to a different state or thing.
Next, the claim as a whole is analyzed to determine whether any element, or combination of elements, integrates the judicial exception into a practical application.
For this part of the 101 analysis, the following additional limitations are considered:
“projecting, by a projector, infrared light over a region of interest (ROI) of a patient, wherein the projector does not contact the patient;”
“acquiring, by a camera, an image stream of the ROI over time, the image stream comprising reflections of the projected infrared light;”
“identifying, by a processor, a plurality of individual light intensity signals within the image stream, each individual light intensity signal of the plurality of individual light intensity signals comprising reflectance over time at a location in the ROI, and wherein the reflectance is a measure of reflected light intensity that is independent of color change;”
“aligning a phase of the plurality of individual light intensity signals;”
“producing a combined light intensity signal by combining the aligned plurality of individual light intensity signals;”
The additional elements are insufficient to amount to significantly more than the judicial exception because they merely generally link the use of the judicial exception to a particular technological environment.
Moreover, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because they pertain merely to insignificant extra-solution data gathering activities.
Furthermore, projectors and cameras are general field-of-use elements, and processors are generic computer components used to perform generic computer functions; these do not add significantly more and are well-understood, routine, and conventional in the industry.
None of these limitations, considered as an ordered combination, provides eligibility because the claim, taken as a whole, does not amount to significantly more than the underlying abstract idea of determining a cardiac parameter of the patient by analyzing light reflectance information from a region of interest and aligning the phase of the individual light intensity signals before they are used for analysis, and does not purport to improve the functioning of the signal processing or to improve any other technology or technical field. Use of generic signal processing does not amount to significantly more than the abstract idea itself. Dependent claims 22-25 also do not add significantly more to the exception, as they merely add details to the mental steps, add details to the extra-solution data gathering steps, add general field-of-use components to facilitate the extra-solution data gathering, and add mental steps.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 1-6, 9-10, 15, and 21-25 is/are rejected under 35 U.S.C. 103 as being unpatentable over Watanabe (US 2018/0000359) in view of Jacquel et al. (US 2017/0238842) (“Jacquel”), as noted in Applicant IDS dated 3/19/2022, and further in view of Gunther et al. (US 2018/0064369) (“Gunther”) and further in view of Halter et al. (US 2017/0224288) (“Halter”).
Regarding Claim 1, while Watanabe teaches a method of monitoring a patient by a non-contact patient monitoring system in a region of interest (ROI), over time (Abstract, Fig. 2, [0158]-[0161]), the method comprising:
determining a cardiac waveform of the patient using reflected light intensity information (Fig. 2, [0158]) by:
projecting a light feature onto a surface of the patient in the ROI ([0159] dot pattern / light feature projected onto the living-body surface of the patient);
acquiring an image stream of the ROI over time, the image stream comprising reflections of the projected light feature ([0161], [0163]-[0164]);
measuring, from the image stream, a first reflected light intensity from the reflected light feature at a first location in the ROI, wherein the first reflected light intensity comprises an amount or brightness of reflected light independent of color change ([0161], [0163], [0164]);
providing a first light intensity signal comprising measurements of the first reflected light intensity over time ([0161], [0163]-[0164], [0180]-[0182]);
identifying a modulation in the first light intensity signal ([0161], [0163]-[0164], [0180]-[0182]);
correlating the modulation to a cardiac parameter ([0164], [0180]-[0182]); and
displaying the cardiac parameter on a display for monitoring of the patient ([0163] processing results of the system displayed), and
Watanabe further teaches that evaluating the blood flow related parameters at the face is improved by comparison between sub-regions ([0234], [0240]).
Watanabe fails to teach
measuring, from the image stream, a second reflected light intensity from the reflected light feature at a second location in the ROI different from the first location, wherein both the first reflected light intensity and the second reflected light intensity comprise an amount or brightness of reflected light independent of color change;
providing a second light intensity signal comprising measurements of the second reflected light intensity over time;
aligning a phase of the first light intensity signal and the second light intensity signal;
identifying a shared modulation in the first and second light intensity signals; and
correlating the shared modulation to a cardiac parameter.
However, Jacquel teaches video-based monitoring of vital signs (Abstract, Fig. 2A, [0051], [0057], [0065]-[0066], [0068], [0070]-[0073], [0078], [0090]) comprising
acquiring an image stream of a ROI over time, the image stream comprising reflections of the subject (Fig. 2A, [0051], [0057], [0065]-[0066], [0070]-[0073], [0078] acquire an image stream of a region of interest over time, considers light intensity changes reflected from a patient);
measuring, from the image stream, a first reflected light intensity from the reflected light at a first location in the ROI ([0068] the head region 314 is recognized as the region of interest, [0051], [0057], [0078] measuring light intensity from light reflected off the subject at a region of interest of the head with a sub-region / first location of a forehead and a sub-region / second location of a cheek),
measuring, from the image stream, a second reflected light intensity from the reflected light at a second location in the ROI different from the first location ([0068] the head region 314 is recognized as the region of interest, [0051], [0057], [0078] measuring light intensity from light reflected off the subject at a region of interest of the head with a sub-region / first location and a sub-region / second location. Examples of combined sub-regions include the forehead and cheek),
wherein both the first reflected light intensity and the second reflected light intensity comprise an amount or brightness of reflected light independent of color change ([0065] where the light intensity signal may specifically be the amount of brightness measured);
providing a second light intensity signal comprising measurements of the second reflected light intensity over time ([0051], [0057], [0065], [0078]);
aligning a phase of the first light intensity signal and the second light intensity signal ([0072] the pixels within a sub-region, such as the forehead, may be further divided into sub-regions, i.e., forehead regions 1A, 2A, 3A. The component pixels are combined together with a weighted average to produce a combined/summed/weighted average of the sub-region; thus the pixels reflecting the first light intensity signal and the pixels reflecting the second light intensity signal are combined together, reflecting a new singular average intensity signal. In doing so, the phases of the disparate light intensity signals have been aligned by coalescing into a single signal, [0090] confirms performing this step for sub-regions of the head and how this leads to pixels reflecting a singular, aligned common modulation);
identifying the modulation of the aligned first and second light intensity signals ([0072]-[0073] the pixels within a region, i.e. the head region 314, may be combined together with a weighted average. In doing so, the pixels reflecting the first light intensity signal and the pixels reflecting the second light intensity signal are combined together, reflecting a new singular average intensity signal, [0078] “The intensity signals from non-adjacent regions are averaged together to create a combined signal, and the heartrate measured from that combined signal.” [0090]);
correlating the modulation to a cardiac parameter ([0072]-[0073], [0078], [0090])
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to further perform the light-intensity-based non-contact video monitoring of a subject as taught by Watanabe at a second relevant location as taught by Jacquel, as this provides increased accuracy by cross-referencing a predictive modulation of heart rate across the face, ruling out unrelated changes in the face due to the influence of, for example, the autonomic nervous system. This supports the teachings of Watanabe, with Jacquel providing clear steps on how to achieve the improved accuracy. Finally, it would be obvious to perform the weighted average determination as an alignment step and output a single modulation signal as taught by Jacquel as a standardization step for estimating the single modulation in light intensity signals that most represents the physiological signals, ensuring consistency across applications of the invention.
Yet their combined efforts fail to teach
identifying a shared modulation in the first and second light intensity signals; and
correlating the shared modulation to a cardiac parameter.
However, Gunther teaches an imaging-based physiological monitor (Abstract, [0040], [0042], [0078]),
The imaging is performed on separate regions ([0053]-[0056] different forms of dividing subject images into regions) and pixel intensities at different locations on the body may show inverted phases for the same physiological phenomenon ([0059], [0063]-[0064]);
Accounting for a phase difference between light intensity signals ([0063]);
identifying a shared modulation in the aligned light intensity signals ([0063]-[0064] after coefficient correction and summation, you will output a final breathing signal 183); and
correlating the shared modulation to a physiological parameter ([0063]-[0064]).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to perform a preliminary aligning step as taught by Gunther before identifying a modulation in the various light intensity signals by averaging as taught in Jacquel, as different regions of the imaged subject may show an exact opposite phase for the same physiological phenomenon (Gunther: [0063]). Thus, a summation of regions in Jacquel could cancel out the modulation from a physiological signal, rather than emphasize it. Further, while it is not stated explicitly how the coefficients account for phase differences, aligning phases through the use of correcting coefficients would be familiar to those of ordinary skill in the imaging-based physiological monitoring art (Halter: [0083]-[0084]).
Regarding Claim 2, Watanabe, Jacquel, Gunther, and Halter teach the method of claim 1, wherein correlating the change to the cardiac parameter comprises:
obtaining a pattern in the change in profile of the surface over time (See Claim 1 Rejection, Jacquel: [0089]-[0090] the light intensity changes reflect a pattern in the change in profile of the surface over time); and
correlating the pattern to the cardiac parameter (See Claim 1 Rejection).
Regarding Claim 3, Watanabe, Jacquel, Gunther, and Halter teach the method of claim 1, wherein the cardiac parameter is heart rate (See Claim 1 Rejection).
Regarding Claim 4, Watanabe, Jacquel, Gunther, and Halter teach the method of claim 1, wherein projecting the light feature comprises: projecting an IR light feature (See Claim 1 Rejection, a near-infrared light feature of a dot array, where near-infrared light is understood to be a subset of infrared light as noted by Applicant in paragraph [0035] of the Specification dated 1/31/2022).
Regarding Claim 5, Watanabe, Jacquel, Gunther, and Halter teach the method of claim 1, wherein projecting the light feature comprises: projecting the light feature on a forehead of the patient (See Claim 1 Rejection).
Regarding Claim 6, Watanabe, Jacquel, Gunther, and Halter teach the method of claim 1, wherein projecting the light feature onto the surface of the patient in the ROI comprises: projecting a plurality of light features onto the surface of the patient in the ROI (See Claim 1 Rejection, the dot pattern).
Regarding Claim 9, Watanabe, Jacquel, Gunther, and Halter teach the method of claim 1, and Jacquel further teaches the method comprising:
combining the first and second light intensity signals into a combined light intensity signal (See Claim 1 Rejection, [0078], [0088] grouped light intensities of different regions may be combined to specifically produce a signal that modulates with respiration rate);
determining a respiratory waveform in the combined light intensity signal ([0078], [0088]);
correlating the respiratory waveform to a respiratory parameter ([0078], [0088]);
displaying the respiratory parameter on a display for monitoring of the patient (See Claim 1 Rejection, processed signals are output for display).
Regarding Claim 10, Watanabe, Jacquel, Gunther, and Halter teach the method of claim 9, wherein correlating the second change to the respiratory parameter comprises:
obtaining a second pattern in the second change in profile of the surface over time (See Claim 9 Rejection, Jacquel: [0089]-[0090] the light intensity changes reflect a pattern in the change in profile of the surface over time); and
correlating the second pattern to the respiratory parameter (See Claim 9 Rejection).
Regarding Claim 15, Watanabe, Jacquel, Gunther, and Halter teach the method of claim 1, the method further comprising: determining a respiratory parameter of the patient using depth information ([0165], [0168] the non-contact heart rate sensing can use the same image data to perform respiratory sensing, [0169] where the dot light feature’s measurement can limit image sensing to depth information).
Regarding Claim 21, while Watanabe teaches a method of monitoring heart rate of a patient through non-contact imaging (Abstract, Fig. 2, [0158]-[0161]), comprising:
projecting, by a projector, infrared light over a region of interest (ROI) of a patient, wherein the projector does not contact the patient (Fig. 2, [0159] near-infrared dot pattern / light feature projected by a light source 1 over a region of interest / living-body surface of the patient, wherein the Figure shows the projector does not contact the patient);
acquiring, by a camera, an image stream of the ROI over time, the image stream comprising reflections of the projected infrared light ([0158], [0161], [0163]-[0164]);
identifying, by a processor, a plurality of individual light intensity signals within the image stream, each individual light intensity signal of the plurality of individual light intensity signals comprising reflectance over time at a location in the ROI ([0158], [0161], [0163]-[0164]), and wherein the reflectance is a measure of reflected light intensity that is independent of color change ([0161], [0163], [0164]);
producing a combined light intensity signal by combining the plurality of individual light intensity signals ([0164]);
deriving a heart rate of the patient from a timing of the peaks ([0164] deriving heart rate of the patient by finding the period or frequency from the data, i.e. finding the timing of peaks); and
displaying the derived heart rate for monitoring of the patient ([0163] processing results of the system displayed),
Watanabe fails to teach
aligning a phase of the plurality of individual light intensity signals;
producing a combined light intensity signal by combining the aligned plurality of individual light intensity signals
identifying peaks in the combined light intensity signal.
However, Jacquel teaches video-based monitoring of vital signs (Abstract, Fig. 2A, [0051], [0057], [0065]-[0066], [0068], [0070]-[0073], [0078], [0090]) comprising
acquiring an image stream of a ROI over time, the image stream comprising reflections of the subject (Fig. 2A, [0051], [0057], [0065]-[0066], [0070]-[0073], [0078] acquire an image stream of a region of interest over time, considers light intensity changes reflected from a patient);
measuring, from the image stream, a first reflected light intensity from the reflected light at a first location in the ROI ([0068] the head region 314 is recognized as the region of interest, [0051], [0057], [0078] measuring light intensity from light reflected off the subject at a region of interest of the head with a sub-region / first location of a forehead and a sub-region / second location of a cheek),
aligning phases of various pluralities of individual light intensity signals ([0072] the pixels within a sub-region, such as the forehead, may be further divided into sub-regions, i.e., forehead regions 1A, 2A, 3A. The component pixels are combined together to produce a combined/summed/weighted average of the sub-region, so the pixels of the plurality of individual light intensity signals of a sub-region are combined together, reflecting a new singular average intensity signal of said sub-region. In doing so, the phases of the disparate light intensity signals have been aligned by coalescing into a single signal, [0090] confirms performing this step for sub-regions of the head);
producing a combined light intensity signal by combining the plurality of individual light intensity signals, where the plurality of individual light signals comprises subsets of aligned light intensity signals ([0072]-[0073], [0090] the aligned plurality of individual light signals for a first sub-region of the forehead and the aligned plurality of individual light signals for a second sub-region of the cheek are combined, producing the combined light intensity signal from various aligned pluralities of individual light intensity signals);
identifying peaks in the combined light intensity signal ([0074], [0090]-[0091] the processing of non-contiguous sub-regions for a vital sign may comprise finding a median frequency peak or a pulse recognition algorithm from signal maxima to identify a heart rate),
deriving a heart rate of the patient from a timing of the peaks ([0074], [0090]-[0091]).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to further perform the light-intensity-based non-contact video monitoring of a subject as taught by Watanabe at a second relevant location as taught by Jacquel, as this provides increased accuracy by cross-referencing a predictive modulation of heart rate across the face, ruling out unrelated changes in the face due to the influence of, for example, the autonomic nervous system. This supports the teachings of Watanabe, with Jacquel providing clear steps on how to achieve the improved accuracy. Finally, it would be obvious to perform the weighted average determination as an alignment step and output a single modulation signal as taught by Jacquel as a standardization step for estimating the single modulation in light intensity signals that most represents the physiological signals, ensuring consistency across applications of the invention.
Yet their combined efforts fail to specifically teach
aligning a phase of the plurality of individual light intensity signals, and
producing a combined light intensity signal by combining the aligned plurality of individual light intensity signals.
However, Gunther teaches an imaging-based physiological monitor (Abstract, [0040], [0042], [0078]),
The imaging is performed on separate regions ([0053]-[0056] different forms of dividing subject images into regions) and pixel intensities at different locations on the body may show inverted phases for the same physiological phenomenon ([0059], [0063]-[0064]);
Accounting for a phase difference between light intensity signals ([0063]);
Producing a combined light intensity signal by combining the corrected plurality of individual light intensity signals ([0063]-[0064] after coefficient correction and summation, you will output a final breathing signal 183); and
correlating the shared modulation to a physiological parameter ([0063]-[0064]).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to perform a preliminary aligning step as taught by Gunther before identifying a modulation in the various light intensity signals by averaging as taught in Jacquel, as different regions of the imaged subject may show an exact opposite phase for the same physiological phenomenon (Gunther: [0063]). Thus, a summation of regions in Jacquel could cancel out the modulation from a physiological signal, rather than emphasize it. So, an aligning step applied to the entirety of the plurality of individual light intensity signals is beneficial to ensure all data may be combined optimally. Further, while it is not stated explicitly how the coefficients account for phase differences, aligning phases through the use of correcting coefficients would be familiar to those of ordinary skill in the imaging-based physiological monitoring art (Halter: [0083]-[0084]).
Regarding Claim 22, Watanabe, Jacquel, Gunther, and Halter teach the method of claim 21, wherein the projected infrared light comprises a pattern of projected infrared light (See Claim 21 Rejection, a pattern of projected near-infrared light in the form of a dot array, where near infrared light is understood to be a subset of infrared light as noted by Applicant in paragraph [0035] of the Specification dated 1/31/2022).
Regarding Claim 23, Watanabe, Jacquel, Gunther, and Halter teach the method of claim 22, wherein the pattern comprises a grid (See Claim 22 Rejection) and Jacquel further teaches wherein each individual light intensity signal of the plurality of individual light intensity signals comprises a sum of light intensities within a box of the grid ([0072]).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to further perform light-intensity-based non-contact video monitoring of a subject taught by Watanabe and have the light intensity signal representing multiple pixels be a summation as taught by Jacquel instead of an average as a simple substitution of one form of isolating modulations in the light data for another to obtain predictable results of identified vital parameters.
Regarding Claim 24, Watanabe, Jacquel, Gunther, and Halter teach the method of claim 23, and Watanabe further teaches wherein the combined light intensity signal comprises a combination of light intensities of multiple boxes of the grid (See Claim 23 Rejection, and [0068] the head region 314 is recognized as the region of interest, [0051], [0057], [0078] measuring light intensity from light reflected off the subject at a region of interest of the head with a sub-region / first location and a sub-region / second location. Non-contiguous regions are different “boxes”).
Regarding Claim 25, Watanabe, Jacquel, Gunther, and Halter teach the method of claim 21, and Jacquel further teaches the method comprising:
identifying a respiratory waveform in the combined light intensity signal;
deriving a respiration rate from the respiratory waveform (See Claim 21 Rejection, Jacquel [0074]); and
displaying the respiration rate for monitoring of the patient (See Claim 21 Rejection, Watanabe displays products of processing).
Claim(s) 7-8 is/are rejected under 35 U.S.C. 103 as being unpatentable over Watanabe in view of Jacquel and further in view of Gunther and further in view of Halter and further in view of Watanabe (US 2018/0153422) (“Watanabe 2”).
Regarding Claim 7, Watanabe, Jacquel, Gunther, and Halter teach the method of claim 1, wherein determining the cardiac parameter using reflected light intensity information comprises: measuring the first reflected light intensity and the second reflected light intensity from the light feature with a camera (See Claim 1 Rejection), and Watanabe teaches other embodiments measuring in stereo with cameras ([0259]); however, their combined efforts fail to teach measuring multiple reflected light intensities in stereo with a first camera and a second camera.
However, Watanabe 2 teaches a non-contact vital signs monitor (Abstract, [0049]-[0052], [0057]) where a first and second light intensity of a region is measured ([0049]), and where the multiple reflected light intensities are measured in stereo with a first camera and a second camera (Fig. 1A, [0057]).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to perform the light-intensity-based non-contact video monitoring of a subject as taught by Watanabe by measuring the first and second intensity with a first and second camera in stereo as taught by Watanabe 2, as a stereo camera measurement of different light intensities can be configured to compensate for drawbacks in near-infrared light sensing ([0057]), a relevant concern of the near-infrared light measurements of Watanabe.
Regarding Claim 8, Watanabe, Jacquel, Gunther, Halter, and Watanabe 2 teach the method of claim 7, wherein measuring with the first camera and the second camera comprises:
comparing the first reflected light intensity measured by the first camera and the second camera to the second reflected light intensity measured by the first camera and the second camera (See Claim 7 Rejection, comparison to provide reference data for near infrared light sensing).
Response to Arguments
Applicant’s amendments and arguments filed 10/31/2025 with respect to the 35 USC 101 rejections have been fully considered, but are not persuasive. Examiner respectfully disagrees for the same reasons given above and in the previous Final Rejection dated 9/05/2025 and the Advisory Action dated 11/18/2025. The rejection stands.
Applicant’s amendments and arguments filed 10/31/2025 with respect to the 35 USC 103 rejections have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Watanabe, Jacquel, Gunther, and Halter for Claims 1 and 21.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAIRO H PORTILLO whose telephone number is (571)272-1073. The examiner can normally be reached M-F 9:00 am - 5:15 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jacqueline Cheng can be reached at (571)272-5596. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JAIRO H. PORTILLO/
Examiner
Art Unit 3791
/JACQUELINE CHENG/Supervisory Patent Examiner, Art Unit 3791