Prosecution Insights
Last updated: April 19, 2026
Application No. 17/486,981

Imaging Photoplethysmography (IPPG) System and Method for Remote Measurements of Vital Signs

Status: Final Rejection §103
Filed: Sep 28, 2021
Examiner: CROCKETT, JOSHUA BRIGHAM
Art Unit: 2661
Tech Center: 2600 — Communications
Assignee: Mitsubishi Electric Research Laboratories Inc.
OA Round: 5 (Final)
Grant Probability: 72% (Favorable)
Predicted OA Rounds: 6-7
Predicted Time to Grant: 3y 0m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 72% (above average; 13 granted / 18 resolved; +10.2% vs TC avg)
Interview Lift: +27.5% (strong) on resolved cases with interview
Typical Timeline: 3y 0m avg prosecution; 26 currently pending
Career History: 44 total applications across all art units

Statute-Specific Performance

§101: 6.0% (-34.0% vs TC avg)
§103: 47.5% (+7.5% vs TC avg)
§102: 10.1% (-29.9% vs TC avg)
§112: 35.1% (-4.9% vs TC avg)

Black line = Tech Center average estimate. Based on career data from 18 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Claim 19 has been amended. Claim 5 was canceled previously. Claims 1-4 and 6-20 are pending in this action.

Applicant's arguments, see pg. 8-9, filed 9 January 2026, with regard to the rejections of claims 1-20 under 35 U.S.C. 103 have been fully considered but they are not persuasive. The applicant argues that Comas et al. ("Turnip: Time-Series U-Net With Recurrence For NIR Imaging PPG," full reference on PTO-892 included with the action filed 8 December 2025; hereafter, Comas) does not qualify as prior art under 35 U.S.C. 102(a)(1) because it was published after the effective filing date of the claimed invention, namely after 26 August 2021 by claiming priority to U.S. Provisional Application No. 63/237,347. The applicant therefore argues that claims 1 and 20 are not taught by prior art and has amended claim 19 to be significantly similar to claim 1.

The examiner disagrees. The examiner respectfully calls attention to pg. 6 of the PDF document of Comas filed with the previous action, which states "Date Added to IEEE Xplore: 23 August 2021".

[Image: screenshot of the Comas PDF showing the "Date Added to IEEE Xplore: 23 August 2021" notation]

Referring to documentation provided by IEEE Xplore on their website, IEEE states that the "Date Added to IEEE Xplore" is the "date proceedings were first published"; see "Working with Documents - Publication Dates" (the full reference is on PTO-892 included with this action).

[Image: screenshot of the IEEE Xplore "Working with Documents - Publication Dates" help page]

Therefore, the publication date of Comas is 23 August 2021, which is before the effective filing date of the claimed invention, 26 August 2021, and Comas qualifies as prior art under 35 U.S.C. 102(a)(1). Therefore, claims 1 and 20 remain rejected under 35 U.S.C. 103.
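The date comparison underpinning the examiner's position can be checked mechanically; a minimal sketch using Python's standard library, with the two dates taken from the record above:

```python
from datetime import date

publication_date = date(2021, 8, 23)       # "Date Added to IEEE Xplore" for Comas
effective_filing_date = date(2021, 8, 26)  # priority date of Provisional No. 63/237,347

# Under 35 U.S.C. 102(a)(1), a printed publication qualifies as prior art
# if it was published before the claimed invention's effective filing date.
qualifies_as_prior_art = publication_date < effective_filing_date
print(qualifies_as_prior_art)  # True
```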
As claim 19 has been amended similarly to claim 1, it is similarly rejected below in the section “Claim Rejections - 35 USC § 103”. The examiner notes that Comas has the following authors in common with the inventors of the instant application: Tim Marks, Hassan Mansour, and Suhas Lohit. Comas contains the following authors who are not listed as inventors of the instant application: Armand Comas, Yechi Ma, and Xiaoming Liu.

Because of the common inventors, the applicant may argue that Comas does not qualify as prior art because "AIA 35 U.S.C. 102(b)(1)(A) first provides that a disclosure which would otherwise qualify as prior art under AIA 35 U.S.C. 102(a)(1) is excepted as prior art if the disclosure is made: (1) one year or less before the effective filing date of the claimed invention; and (2) by the inventor or a joint inventor or by another who obtained the subject matter disclosed directly or indirectly from the inventor or a joint inventor (i.e., an inventor-originated disclosure). Thus, a disclosure that would otherwise qualify as prior art under AIA 35 U.S.C. 102(a)(1) may not be used as prior art by Office personnel if the disclosure is made one year or less before the effective filing date of the claimed invention, and the evidence shows that the disclosure is an inventor-originated disclosure." MPEP 2153.01(a).

However, in the case that a patent application has fewer joint inventors than a publication, the MPEP states "If, however, the application names fewer joint inventors than a publication (e.g., the application names as joint inventors A and B, and the publication names as authors A, B and C), it would not be readily apparent from the publication that it is an inventor-originated disclosure and the publication would be treated as prior art under AIA 35 U.S.C. 102(a)(1) unless there is evidence of record that an exception under AIA 35 U.S.C. 102(b)(1) applies." MPEP 2153.01(a).
Therefore, as Comas names authors who are not listed as inventors of the instant application, it is not readily apparent that the exception under AIA 35 U.S.C. 102(b)(1) applies, and Comas may be used as prior art. If the applicant believes that the exception under AIA 35 U.S.C. 102(b)(1) applies to Comas, the applicant must provide an affidavit as described in MPEP 2155.01: "An applicant may show that a disclosure was made by the inventor or a joint inventor by way of an affidavit or declaration under 37 CFR 1.130(a) (an affidavit or declaration of attribution). See In re Katz, 687 F.2d 450, 455, 215 USPQ 14, 18 (CCPA 1982) and MPEP § 717.01(a)(1). Where the authorship of the prior art disclosure includes the inventor or a joint inventor named in the application, an unequivocal statement from the inventor or a joint inventor that the inventor or joint inventor (or some combination of named inventors) invented the subject matter of the disclosure, accompanied by a reasonable explanation of the presence of additional authors, may be acceptable in the absence of evidence to the contrary. See In re DeBaun, 687 F.2d 459, 463, 214 USPQ 933, 936 (CCPA 1982)."

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 6-9, 13-15, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Comas et al. ("Turnip: Time-Series U-Net With Recurrence For NIR Imaging PPG," full reference on PTO-892 included with the action filed 8 December 2025; hereafter, Comas) in view of McDuff (US 20200121256 A1; hereafter, McDuff).

Regarding claim 1, Comas discloses: An imaging photoplethysmography (iPPG) system for estimating a vital sign of a person from images of a skin of the person (pg. 309 col. 1 para. 2, the paper is directed to a system for estimating a pulse signal by iPPG), comprising: receive a sequence of images of different regions of the skin of the person (pg. 310 col. 2 para. 3, a video is received and 48 regions are extracted corresponding to regions of the skin on the face), each region including pixels of different intensities indicative of variation of coloration of the skin (pg. 310 col. 2 para. 3, the regions are averaged. Averaging the regions shows that the pixels within each region have varying intensity which would be of the skin in the region); transform the sequence of images into a multidimensional time-series signal, each dimension corresponding to a different region from the different regions of the skin (pg. 310 col. 2 para. 3, a 48 dimensional time series signal is extracted from the series of images, i.e.
the video, with each dimension corresponding to one of the 48 skin regions); process the multidimensional time-series signal with a time-series U-Net neural network to generate a PPG waveform (pg. 311 col. 1 para. 2, the multidimensional time-series signal is input into the PPG network and the PPG signal is determined), wherein a U-shape of the time-series U-Net neural network includes a contractive path that includes a sequence of contractive layers followed by an expansive path that includes a sequence of expansive layers, wherein at least some of the contractive layers downsample their input and at least some of the expansive layers upsample their input forming pairs of contractive and expansive layers of corresponding resolutions (pg. 311 col. 1 para. 1 and Fig. 1, the TURNIP network is a U-net structure which has a contractive or encoding pathway and an expansive or decoding pathway comprised of layers which respectively downsample their input and upsample their input), wherein an output of each contractive layer of a plurality of contractive layers in the sequence of contractive layers is connected to (i.) a corresponding next contractive layer in the sequence of contractive layers (Fig. 1, the output of the contractive layers is input into the next contractive layer) and to (ii.) a corresponding expansive layer in the sequence of expansive layers via a corresponding pass through layer (Fig. 1, the output of the contractive layers is sent through the pass-through layer to the expansive layers), wherein at least some of the corresponding contractive layers and expansive layers are connected through pass-through layers (pg. 311 col. 1 para. 2 and Fig. 1, the contractive and expansive layers are connected with pass through layers, i.e. skip connections, which are shown explicitly in Fig. 1), wherein at least one of the pass-through layers includes a recurrent neural network that processes its input sequentially (pg. 311 col. 1 para. 2 and Fig. 
1, the skip connections include a recurrent network), and wherein the recurrent neural network is distinct from the contractive layers and expansive layers of the U-Net (pg. 311 col. 1 para. 2 and Fig. 1, the recurrent network is in the skip connection and is distinct from both the contractive path and the expansive path); estimate the vital sign of the person based on the PPG waveform (pg. 309 col. 1 para. 2, the heartbeat waveform is estimated from the PPG waveform. See also pg. 311 col. 1 para. 2, "It then estimates the desired PPG signal in a deterministic way."); and render the estimated vital sign of the person (pg. 312 col. 2 para. 3 and Fig. 3, the heartbeat waveform is rendered in figure 3).

Comas does not disclose expressly a processor and a memory for storing instructions. McDuff discloses: at least one processor; and a memory having instructions stored thereon (description of a processor and memory with instructions in paragraph [0005]; description of acceptable computer-readable media in [0044]).

Comas and McDuff are combinable because they are from the same field of endeavor of determining heart rate from PPG data (Comas, pg. 309 col. 1 para. 2; McDuff, [0012]). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the processor and memory of McDuff with the method and system of Comas. The motivation for doing so would have been that it is a combination of previously known elements (the method of Comas and the processor and memory of McDuff) in a known manner (it is well known to use a processor and memory to perform image processing) to yield a predictable result (image processing enabled by a processor and memory). Further, a person of ordinary skill in the art would understand that the system of Comas is performed on a processor with a memory although Comas does not disclose these elements expressly.
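As an aside, the claim 1 mapping above describes a pipeline of per-region averaging into a multidimensional time series, followed by a time-series U-Net whose pass-through (skip) layers process their input sequentially. A shape-level NumPy sketch of that structure follows; the strided slicing, repeat upsampling, moving-average "recurrence", and toy region count are simplified stand-ins for illustration only, not the convolutions and GRU of the actual TURNIP network:

```python
import numpy as np

def regions_to_time_series(video, region_masks):
    """Average pixel intensities inside each skin region per frame, giving a
    (regions, frames) multidimensional time-series signal."""
    frames = video.reshape(video.shape[0], -1)               # (T, H*W)
    masks = region_masks.reshape(region_masks.shape[0], -1)  # (R, H*W)
    return (masks.astype(float) @ frames.T) / masks.sum(axis=1, keepdims=True)

def contract(x):
    """Contractive layer stand-in: downsample along time (a strided convolution in the paper)."""
    return x[:, ::2]

def expand(x):
    """Expansive layer stand-in: upsample along time (an up-convolution in the paper)."""
    return np.repeat(x, 2, axis=1)

def recurrent_pass_through(x, alpha=0.5):
    """Pass-through (skip) layer that processes its input sequentially;
    an exponential moving average stands in for the GRU."""
    out = np.zeros_like(x)
    h = np.zeros(x.shape[0])
    for t in range(x.shape[1]):
        h = alpha * h + (1 - alpha) * x[:, t]
        out[:, t] = h
    return out

def tiny_time_series_unet(x):
    c1 = contract(x)    # each contractive output feeds the next contractive layer...
    c2 = contract(c1)
    e1 = expand(c2) + recurrent_pass_through(c1)  # ...and its paired expansive layer
    e0 = expand(e1) + recurrent_pass_through(x)
    return e0.mean(axis=0)                        # collapse regions -> 1-D PPG-like waveform

rng = np.random.default_rng(0)
video = rng.random((128, 8, 8))            # 128 frames of a toy 8x8 face crop
masks = np.zeros((4, 8, 8), dtype=bool)    # 4 toy regions (the paper uses 48)
for i in range(4):
    masks[i, 2 * i:2 * i + 2, :] = True
ts = regions_to_time_series(video, masks)  # (4, 128)
waveform = tiny_time_series_unet(ts)
print(ts.shape, waveform.shape)            # (4, 128) (128,)
```

The sketch preserves the claim's structural points: paired contractive/expansive resolutions, and a pass-through layer with sequential processing that is distinct from both paths.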
Therefore, it would have been obvious to combine McDuff with Comas to obtain the invention as specified in claim 1.

Regarding claim 2, Comas in view of McDuff discloses the subject matter of claim 1. Comas further discloses: The iPPG system of claim 1, wherein at least one contractive layer from the sequence of contractive layers downsamples its input using a strided convolution with a stride greater than 1 to downsample and process the input (Fig. 3, at least one of the contractive layers performs convolution with a stride greater than 1, see a stride of 3 and a stride of 2).

Regarding claim 3, Comas in view of McDuff discloses the subject matter of claim 1. Comas further discloses: The iPPG system of claim 1, wherein at least one expansive layer from the sequence of expansive layers upsamples its input with an up-convert operation to produce an upsampled input (Fig. 1, the expansive path upsamples the input for each layer), and wherein the expansive layer includes multiple convolutional layers processing the upsampled input (Fig. 1, the figure shows three expansive layers which is multiple. The legend of the image shows that the layers include convolution).

Regarding claim 4, Comas in view of McDuff discloses the subject matter of claim 1. Comas further discloses: The iPPG system of claim 1, wherein the recurrent neural network includes a gated recurrent unit (GRU) (pg. 311 col. 1 para. 2, the recurrent network is a GRU) or a long short-term memory (LSTM) network.

Regarding claim 6, Comas in view of McDuff discloses the subject matter of claim 1. Comas further discloses: The iPPG system of claim 1, wherein to estimate the vital sign of the person from the PPG waveform, the at least one processor is configured to process, with the time-series U-Net neural network, each segment from a sequence of overlapping segments of the multidimensional time-series signal (pg. 310 col. 2 para. 4, a ten second window is used to input into the PPG estimator network.
Each 10 second window adds a half a second to the beginning of the window which is understood as overlapping the window with the previous window).

Regarding claim 7, Comas in view of McDuff discloses the subject matter of claim 6. Comas further discloses: The iPPG system of claim 6, wherein a signal of the vital sign of the person is a one-dimensional signal (Fig. 1, the output of the vital sign signal is 1 channel which is understood as a one-dimensional signal).

Regarding claim 8, Comas in view of McDuff discloses the subject matter of claim 1. Comas further discloses: The iPPG system of claim 1, wherein to produce the multidimensional time-series signal, the at least one processor is configured to identify the different regions of the skin of the person using a facial landmark detection (pg. 310 col. 2 para. 3, facial landmarks are used to determine the 48 regions); and average pixel intensities of pixels from each region of the different regions at an instant of time to produce a value for each dimension of the multidimensional time-series signal at the instant of time (pg. 310 col. 2 para. 3, the pixels within each region are averaged).

Regarding claim 9, Comas in view of McDuff discloses the subject matter of claim 8. Comas further discloses: The iPPG system of claim 8, wherein each dimension of the multidimensional time-series signal is a signal corresponding to the corresponding region of the different regions of the skin, wherein each region is an explicitly tracked region of interest (ROI) (pg. 310 col. 2 para. 3, the regions are created by facial landmarks which is understood as tracking the regions of interest as the facial landmarks are tracked between frames).

Regarding claim 13, Comas in view of McDuff discloses the subject matter of claim 1.
Comas further discloses: The iPPG system of claim 1, wherein the time-series U-net neural network is trained to maximize a Pearson correlation coefficient between ground truth data associated with the PPG waveform and the estimated PPG signal (pg. 311 col. 1 para. 4, the training is performed by maximizing the Pearson correlation coefficient between the ground truth and the estimated PPG signal).

Regarding claim 14, Comas in view of McDuff discloses the subject matter of claim 1. Comas further discloses: The iPPG system of claim 1, wherein the time-series U-net neural network is trained with a temporal loss function or a spectral loss function (pg. 311 col. 1 para. 4, training may be performed with a temporal loss function or a spectral loss function).

Regarding claim 15, Comas in view of McDuff discloses the subject matter of claim 1. Comas further discloses: The iPPG system of claim 1, wherein the vital sign is one or a combination of a pulse rate of the person (pg. 309 col. 1 para. 2, the system detects a heart rate) and a heart rate variability of the person.

Regarding claim 18, Comas in view of McDuff discloses the subject matter of claim 1. Comas does not disclose expressly a camera including a processor configured to measure the intensities of coloration of the skin. McDuff discloses: The iPPG system of claim 1, further comprising: a camera including a processor configured to measure the intensities indicative of variation of coloration of the skin at different instants of time to produce the sequence of images ([0013], describing that the camera may be a variety of camera options including RGB and infrared. The camera may be a computer, laptop, or smart phone camera. A person having ordinary skill in the art would understand a computer, laptop, or smart phone to include a processor. Therefore, the camera includes a processor. Additionally, Fig.
5 discloses a processor 502 of the system 500), a display device configured to display the signal of the vital sign of the person ([0042], stating that the processor may display the processed signals on a computer system).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the camera and display of McDuff with the invention of Comas. The motivation for doing so would have been to obtain the video data (McDuff, “The video frame sequence 108 may be obtained from a remote camera,” [0013]). Therefore, it would have been obvious to combine McDuff with Comas to obtain the invention as specified in claim 18.

Regarding claim 19, Comas discloses: A method for estimating a vital sign of a person (pg. 309 col. 1 para. 2, the paper is directed to a system for estimating a pulse signal by iPPG), comprising: receiving a sequence of images of different regions of the skin of the person (pg. 310 col. 2 para. 3, a video is received and 48 regions are extracted corresponding to regions of the skin on the face), each region including pixels of different intensities indicative of variation of coloration of the skin (pg. 310 col. 2 para. 3, the regions are averaged. Averaging the regions shows that the pixels within each region have varying intensity which would be of the skin in the region); transforming the sequence of images into a multidimensional time-series signal, each dimension corresponding to a different region from the different regions of the skin (pg. 310 col. 2 para. 3, a 48 dimensional time series signal is extracted from the series of images, i.e. the video, with each dimension corresponding to one of the 48 skin regions); processing the multidimensional time-series signal with a time-series U-Net neural network to generate a PPG waveform (pg. 311 col. 1 para.
2, the multidimensional time-series signal is input into the PPG network and the PPG signal is determined), wherein a U-shape of the time-series U-Net neural network includes a contractive path that includes a sequence of contractive layers followed by an expansive path that includes a sequence of expansive layers, wherein at least some of the contractive layers downsample their input and at least some of the expansive layers upsample their input forming pairs of contractive and expansive layers of corresponding resolutions (pg. 311 col. 1 para. 1 and Fig. 1, the TURNIP network is a U-net structure which has a contractive or encoding pathway and an expansive or decoding pathway comprised of layers which respectively downsample their input and upsample their input), wherein an output of each contractive layer of a plurality of layers of the sequence of contractive layers is connected to (i.) a corresponding next contractive layer in the sequence of contractive layers (Fig. 1, the output of the contractive layers is input into the next contractive layer) and to (ii.) a corresponding expansive layer in the sequence of expansive layers via a corresponding pass through layer (Fig. 1, the output of the contractive layers is sent through the pass-through layer to the expansive layers), wherein at least some of the corresponding contractive layers and expansive layers are connected through pass-through layers (pg. 311 col. 1 para. 2 and Fig. 1, the contractive and expansive layers are connected with pass through layers, i.e. skip connections, which are shown explicitly in Fig. 1), wherein at least one of the pass-through layers includes a recurrent neural network that processes its input sequentially (pg. 311 col. 1 para. 2 and Fig. 1, the skip connections include a recurrent network), and wherein the recurrent neural network is distinct from the contractive layers and expansive layers of the U-Net (pg. 311 col. 1 para. 2 and Fig. 
1, the recurrent network is in the skip connection and is distinct from both the contractive path and the expansive path); estimating the vital sign of the person based on the PPG waveform (pg. 309 col. 1 para. 2, the heartbeat waveform is estimated from the PPG waveform. See also pg. 311 col. 1 para. 2, "It then estimates the desired PPG signal in a deterministic way."); and rendering the estimated vital sign of the person (pg. 312 col. 2 para. 3 and Fig. 3, the heartbeat waveform is rendered in figure 3).

Comas does not disclose expressly a processor and a memory for storing instructions. McDuff discloses: wherein the method uses a processor coupled with stored instructions implementing the method, wherein the instructions, when executed by the processor, carry out steps of the method (description of a processor and memory with instructions in paragraph [0005]; description of acceptable computer-readable media in [0044]).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the processor and memory of McDuff with the method and system of Comas. The motivation for doing so would have been that it is a combination of previously known elements (the method of Comas and the processor and memory of McDuff) in a known manner (it is well known to use a processor and memory to perform image processing) to yield a predictable result (image processing enabled by a processor and memory). Further, a person of ordinary skill in the art would understand that the system of Comas is performed on a processor with a memory although Comas does not disclose these elements expressly. Therefore, it would have been obvious to combine McDuff with Comas to obtain the invention as specified in claim 19.

Regarding claim 20, Comas discloses: the method comprising: receiving a sequence of images of different regions of the skin of the person (pg. 310 col. 2 para.
3, a video is received and 48 regions are extracted corresponding to regions of the skin on the face), each region including pixels of different intensities indicative of variation of coloration of the skin (pg. 310 col. 2 para. 3, the regions are averaged. Averaging the regions shows that the pixels within each region have varying intensity which would be of the skin in the region); transforming the sequence of images into a multidimensional time-series signal, each dimension corresponding to a different region from the different regions of the skin (pg. 310 col. 2 para. 3, a 48 dimensional time series signal is extracted from the series of images, i.e. the video, with each dimension corresponding to one of the 48 skin regions); processing the multidimensional time-series signal with a time-series U-Net neural network to generate a PPG waveform (pg. 311 col. 1 para. 2, the multidimensional time-series signal is input into the PPG network and the PPG signal is determined), wherein a U-shape of the time-series U-Net neural network includes a contractive path that includes a sequence of contractive layers followed by an expansive path that includes a sequence of expansive layers, wherein at least some of the contractive layers downsample their input and at least some of the expansive layers upsample their input forming pairs of contractive and expansive layers of corresponding resolutions (pg. 311 col. 1 para. 1 and Fig. 1, the TURNIP network is a U-net structure which has a contractive or encoding pathway and an expansive or decoding pathway comprised of layers which respectively downsample their input and upsample their input), where an output of each contractive layer of a plurality of contractive layers in the sequence of contractive layers is connected to (i.) a corresponding next contractive layer in the sequence of contractive layers (Fig. 1, the output of the contractive layers is input into the next contractive layer) and to (ii.)
a corresponding expansive layer in the sequence of expansive layers via a corresponding pass through layer (Fig. 1, the output of the contractive layers is sent through the pass-through layer to the expansive layers), wherein one or more contractive layers of the sequence of contractive layers comprise one or more of a convolutional layer, a single downsampling convolutional layer, and a dropout layer (pg. 311 col. 1 para. 2, the network extracts convolutional features and downsamples the input. This is understood as at least a convolutional layer and a single downsampling convolutional layer), wherein at least one of the pass-through layers includes a recurrent neural network that processes its input sequentially (pg. 311 col. 1 para. 2 and Fig. 1, the skip connections include a recurrent network), and wherein the recurrent neural network is distinct from the contractive layers and expansive layers of the U-Net (pg. 311 col. 1 para. 2 and Fig. 1, the recurrent network is in the skip connection and is distinct from both the contractive path and the expansive path); estimating the vital sign of the person based on the PPG waveform (pg. 309 col. 1 para. 2, the heartbeat waveform is estimated from the PPG waveform. See also pg. 311 col. 1 para. 2, "It then estimates the desired PPG signal in a deterministic way."); and rendering the estimated vital sign of the person (pg. 312 col. 2 para. 3 and Fig. 3, the heartbeat waveform is rendered in figure 3).

Comas does not disclose expressly a non-transitory computer readable medium storing a program executable by a processor. McDuff discloses: A non-transitory computer-readable storage medium embodied thereon a program executable by a processor for performing a method ([0044] Instructions, i.e.
a program, may be stored on a non-transitory computer readable medium to be executed by a processor).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the processor and memory of McDuff with the method and system of Comas. The motivation for doing so would have been that it is a combination of previously known elements (the method of Comas and the processor and memory of McDuff) in a known manner (it is well known to use a processor and memory to perform image processing) to yield a predictable result (image processing enabled by a processor and memory). Further, a person of ordinary skill in the art would understand that the system of Comas is performed on a processor with a memory although Comas does not disclose these elements expressly. Therefore, it would have been obvious to combine McDuff with Comas to obtain the invention as specified in claim 20.

Claims 10-12 are rejected under 35 U.S.C. 103 as being unpatentable over Comas et al. ("Turnip: Time-Series U-Net With Recurrence For NIR Imaging PPG," full reference on PTO-892 included with the action filed 8 December 2025; hereafter, Comas) in view of McDuff (US 20200121256 A1; hereafter, McDuff) in further view of MATLAB Answers ("how to split channels of a video", 2019; hereafter M1), MathWorks Help Center (“Multidimensional Array”, 2012; hereafter M2), MathWorks Help Center ("imapplymatrix", 2018; hereafter M3), and MathWorks Help Center ("cell", 2020; hereafter M4).

Regarding claim 10, Comas in view of McDuff discloses the subject matter of claim 1. Comas in view of McDuff does not disclose explicitly extraction of a multidimensional time-series signal from a multi-channel video. M1 discloses: more than one multidimensional time-series signal each extracted from a different channel of a multi-channel video (p. 1-2, post by KSSV containing directions for extracting channels from a multi-channel video).
M1 is combinable with Comas in view of McDuff because it solves a related problem of handling multidimensional arrays (M1, p. 1, Awais Khan shown to ask how to split the channels of a video). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the channel extraction taught by M1 with Comas in view of McDuff. The motivation would have been that extracting a time-series signal from a channel of a multi-channel video allows for consideration of each channel individually (M1, p. 1, post by Awais Khan, per their question). Therefore, it would have been obvious to combine M1 with Comas in view of McDuff.

Comas in view of McDuff in further view of M1 does not disclose combining more than one multidimensional time-series signal by concatenation. M2 discloses: wherein to transform the sequence of images into a multidimensional time-series signal includes a concatenation operation that combines more than one multidimensional time-series (p. 2, heading “Building Multidimensional Arrays with the cat Function” discloses how to combine more than one multidimensional array), each extracted from a different channel of a multi-channel video (taught by M1 as shown above), into a single multidimensional time series that comprises the multidimensional time-series signal (p. 2, heading “Building Multidimensional Arrays with the cat Function” shows how to combine multidimensional arrays).

M2 is combinable with Comas in view of McDuff in further view of M1 because it solves the related problem of handling multidimensional arrays (p. 1, “Overview,” defines multidimensional arrays. The remainder of the webpage is directed to working with multidimensional arrays in a variety of ways).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the use of multidimensional arrays taught by M2 with Comas in view of McDuff in further view of M1. The motivation for doing so would have been that using multidimensional arrays allows a user to be able to perform “most of the operations that you can perform on matrices” making processing and computing simpler (M2, p. 1, heading “Overview”). Therefore, it would have been obvious to combine M2 with Comas in view of McDuff in further view of M1 to obtain the invention as specified in claim 10.

Regarding claim 11, Comas in view of McDuff discloses the subject matter of claim 1. Comas in view of McDuff does not disclose explicitly extraction of a multidimensional time-series signal from a multi-channel video. M1 discloses: more than one multidimensional time-series signal each extracted from a different channel of a multi-channel video (p. 1-2, post by KSSV containing directions for extracting channels from a multi-channel video). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the channel extraction taught by M1 with Comas in view of McDuff. The motivation would have been that extracting a time-series signal from a channel of a multi-channel video allows for consideration of each channel individually (M1, p. 1, post by Awais Khan, per their question). Therefore, it would have been obvious to combine M1 with Comas in view of McDuff.

Comas in view of McDuff in further view of M1 does not disclose combining more than one multidimensional time-series signal by linear combination. M3 discloses: wherein to transform the sequence of images into a multidimensional time-series signal includes a linear combination that combines more than one multidimensional time series (p.
1, heading “Description,” describing the function of linear combination), each extracted from a different channel of a multi-channel video (taught by M1 as shown above), into a single multidimensional time series that comprises the multidimensional time-series signal (p. 1, heading “Description”, describing the linear combination that occurs in the function). M3 is combinable with Comas in view of McDuff in further view of M1 because it solves a related problem of handling multidimensional arrays (p. 1, subtitle directing the subject matter to “Linear combination of color channels”, disclosing multidimensional array handling). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine linear combination taught by M3 with Comas in view of McDuff in further view of M1. The motivation for doing so would have been that the linear combination of the multidimensional channels of a multi-channel video allows for converting the image into different representations, such as grayscale (M3, p. 1, image showing conversion from color to grayscale). Depending on the desired calculation, linear combination is a useful tool to convert the data. Therefore, it would have been obvious to combine M3 with Comas in view of McDuff in further view of M1 to obtain the invention as specified in claim 11. Regarding claim 12, Comas in view of McDuff discloses the subject matter of claim 1. Comas in view of McDuff does not disclose explicitly extraction of a multidimensional time-series signal from a multi-channel video. M1 discloses: more than one multidimensional time-series signal each extracted from a different channel of a multi-channel video (p. 1-2, post by KSSV containing directions for extracting channels from a multi-channel video). 
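The channel-to-grayscale linear combination that M3 is cited for is, in its common form, a weighted sum of the color channels. A minimal NumPy sketch, assuming the ITU-R BT.601 luma coefficients that rgb2gray-style conversions commonly use; the frame shape and variable names are illustrative only, not from any cited reference:

```python
import numpy as np

# Hypothetical RGB frame; the weights are the BT.601 luma coefficients.
frame = np.random.rand(32, 32, 3)
weights = np.array([0.299, 0.587, 0.114])

# M3-style linear combination of the color channels into one grayscale channel.
gray = frame @ weights  # shape (32, 32)
```

The same weighted-sum pattern applies per frame across a video, collapsing the multi-channel data into a single-channel representation.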
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the channel extraction taught by M1 with Comas in view of McDuff. The motivation would have been that extracting a time-series signal from a channel of a multi-channel video allows for consideration of each channel individually (M1, p. 1, post by Awais Khan, per their question). Therefore, it would have been obvious to combine M1 with Comas in view of McDuff.

Comas in view of McDuff in further view of M1 does not disclose shaping more than one multidimensional time-series signal into a 3D array.

M4 discloses: wherein to transform the sequence of images into a multidimensional time-series signal includes extracting more than one multidimensional time series, each extracted from one channel of a multi-channel video (taught by M1 as shown above), and shaping the more than one multidimensional time series into a 3D array that comprises the multidimensional time-series signal (p. 2, heading “3-D Cell Array,” giving instructions for creating a 3-D array). M4 is combinable with Comas in view of McDuff in further view of M1 because it solves a similar problem of handling multidimensional arrays (p. 1, “Description,” describing how a cell array may contain any type of data and arrays of varying sizes, solving many problems related to handling multidimensional arrays).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the 3D array of M4 with Comas in view of McDuff in further view of M1. The motivation for doing so would have been that, as taught by M4, various data types and array sizes can be stored in a 3-D array, each dimension corresponding to a different series (M4, p. 1, heading “Description,” stating different types of data can be stored, and p. 2, heading “3-D Cell Array,” giving instructions for creating a 3-D array).
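The 3-D array shaping that M4 is cited for amounts to stacking per-channel 2-D time series along a new leading dimension. A hedged NumPy sketch with invented shapes, standing in for the MATLAB cell-array example the citation refers to:

```python
import numpy as np

# Hypothetical per-channel series: 3 channels, 100 frames, 16 region signals.
channel_series = [np.random.rand(100, 16) for _ in range(3)]

# M4-style shaping: stack the 2-D series into a single 3-D array
# (channel x frame x region) comprising the multidimensional signal.
cube = np.stack(channel_series, axis=0)
```

Each dimension of the resulting array corresponds to a different series, matching the motivation the rejection attributes to M4.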
Storing data into one array allows for simple manipulation and storage of the array. Therefore, it would have been obvious to combine M4 with Comas in view of McDuff in further view of M1 to obtain the invention as specified in claim 12.

Claims 16 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Comas et al. ("Turnip: Time-Series U-Net With Recurrence For NIR Imaging PPG," full reference on PTO-892 included with the action filed 8 December 2025; hereafter, Comas) in view of McDuff (US 20200121256 A1; hereafter, McDuff) in further view of Hutchison (U.S. Publ. No. 20180307927-A1).

Regarding claim 16, Comas in view of McDuff discloses the subject matter of claim 1. Comas in view of McDuff does not disclose explicitly a system wherein the person is a driver of a vehicle and wherein a processor produces commands for a controller.

Hutchison discloses: The iPPG system of claim 1, wherein the person corresponds to a driver of a vehicle (Figure 1, showing the person driving a car), and wherein the at least one processor is further configured to produce one or more control commands for a controller of the vehicle based on the vital sign of the driver (Figure 2, the “Image Processor” shown to communicate with the “Controller”). Hutchison is combinable with Comas in view of McDuff because it is from the related field of endeavor of using a PPG signal to calculate the vital signs of a person (Hutchison, abstract, using a PPG signal to calculate vital signs of a driver of a vehicle).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Hutchison, that the person is a driver and that the processor may produce control commands, with Comas in view of McDuff.
The motivation for doing so would have been that monitoring the driver of a vehicle for signs of loss of alertness, drowsiness, or health problems may allow "an alarm can be generated to alert the operator or the controls of the vehicle may be operated to bring the vehicle to a safe condition" (Hutchison, [0003]). Therefore, it would have been obvious to combine Hutchison with Comas in view of McDuff to obtain the invention as specified in claim 16.

Regarding claim 17, Comas in view of McDuff in further view of Hutchison discloses the subject matter of claim 16. Comas in view of McDuff does not disclose explicitly that a controller is configured to execute control actions of a vehicle.

Hutchison discloses: The iPPG system of claim 16, further comprising: a controller configured to execute a control action based on the signal of the vital sign of the person (Figure 2, the “Controller,” and [0031] stating that the output to the controller may “alert through the vehicle controls or to operate the vehicle to bring it into a safe condition”). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the controller of Hutchison with Comas in view of McDuff. The motivation for doing so would have been that monitoring the driver of a vehicle for signs of loss of alertness, drowsiness, or health problems may allow “an alarm can be generated to alert the operator or the controls of the vehicle may be operated to bring the vehicle to a safe condition” (Hutchison, [0003]). Therefore, it would have been obvious to combine Hutchison with Comas in view of McDuff to obtain the invention as specified in claim 17.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOSHUA B CROCKETT, whose telephone number is (571) 270-7989. The examiner can normally be reached Monday-Thursday, 8am-5pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, John M Villecco, can be reached at (571) 272-7319. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JOSHUA B. CROCKETT/
Examiner, Art Unit 2661

/JOHN VILLECCO/
Supervisory Patent Examiner, Art Unit 2661

Prosecution Timeline

Sep 28, 2021: Application Filed
Oct 18, 2024: Non-Final Rejection — §103
Feb 04, 2025: Response Filed
Mar 06, 2025: Non-Final Rejection — §103
May 12, 2025: Interview Requested
May 27, 2025: Examiner Interview Summary
Jun 12, 2025: Response Filed
Aug 05, 2025: Final Rejection — §103
Nov 07, 2025: Request for Continued Examination
Nov 15, 2025: Response after Non-Final Action
Dec 04, 2025: Non-Final Rejection — §103
Jan 09, 2026: Response Filed
Mar 09, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592060: ARTIFICIAL INTELLIGENCE DEVICE AND 3D AGENCY GENERATING METHOD THEREOF (granted Mar 31, 2026; 2y 5m to grant)
Patent 12587704: VIDEO DATA TRANSMISSION AND RECEPTION METHOD USING HIGH-SPEED INTERFACE, AND APPARATUS THEREFOR (granted Mar 24, 2026; 2y 5m to grant)
Patent 12567150: EDITING PRESEGMENTED IMAGES AND VOLUMES USING DEEP LEARNING (granted Mar 03, 2026; 2y 5m to grant)
Patent 12561839: SYSTEMS AND METHODS FOR CALIBRATING IMAGE SENSORS OF A VEHICLE (granted Feb 24, 2026; 2y 5m to grant)
Patent 12529639: METHOD FOR ESTIMATING HYDROCARBON SATURATION OF A ROCK (granted Jan 20, 2026; 2y 5m to grant)
Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 6-7
Grant Probability: 72%
With Interview: 99% (+27.5%)
Median Time to Grant: 3y 0m
PTA Risk: High

Based on 18 resolved cases by this examiner. Grant probability derived from career allow rate.
