DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Drawings
The drawings filed on March 07, 2023 are accepted.
Claim Objections
Claim 1 is objected to because of the following informalities:
“the PPG signals are processed for to extract” is unclear. It appears “for” should be deleted.
Appropriate correction is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Step 1 of the subject matter eligibility test (see MPEP 2106.03).
Claim 1 is directed to “a system,” which falls within one of the four statutory categories of patentable subject matter, i.e., a machine.
Each of Claims 1-20 has been analyzed to determine whether it is directed to any judicial exceptions.
Step 2A of the subject matter eligibility test (see MPEP 2106.04).
Prong One:
Claim 1 recites (“sets forth” or “describes”) the abstract idea of “mathematical concepts” (MPEP 2106.04(a)(2).I.), substantially as follows: “the reflectance images are segmented by the processor into several non-overlapping spatial segments, each segment comprising an array of pixels; the reflectance for each spatial segment over the pixels in the segment is averaged for each point in time; the time series of the average reflectance for each spatial segment is used to extract a set of PPG signals; the PPG signals are processed for to extract useful information about blood flow in tissues; and”
In claim 1, the above recited steps are mathematical concepts, which are defined as mathematical relationships, mathematical formulas or equations, and mathematical calculations. The Specification teaches that images may be used to compute the pulse characteristics/useful tissue information. Spec. pages 12-22. Computing these based on the feature points encompasses the use of mathematical equations, which has been recognized as an abstract idea (i.e., a mathematical concept). Patent Eligibility Guidance, 84 Fed. Reg. at 52. In sum, under Prong One the claim recites a judicial exception, and the analysis proceeds to Step 2A, Prong Two.
Therefore, each of the above steps is grouped as a mathematical concept, hence an abstract idea.
Claim 1 recites (“sets forth” or “describes”) the abstract idea of “a mental process” (MPEP 2106.04(a)(2).III.), substantially as follows: “the reflectance images are segmented by the processor into several non-overlapping spatial segments, each segment comprising an array of pixels; the reflectance for each spatial segment over the pixels in the segment is averaged for each point in time; the time series of the average reflectance for each spatial segment is used to extract a set of PPG signals; the PPG signals are processed for to extract useful information about blood flow in tissues; and”
In claim 1, the above recited steps can be practically performed in the human mind, with the aid of pen and paper, or with a generic computer, whether in a computer environment or merely using the generic computer as a tool to perform the steps. If a person were to visually examine, i.e., perform an observation of, the waveform data, either in a printout or an electronic format, he/she would be able to perform the calculations to obtain the useful information via pen and paper. There is nothing recited in the claim to suggest an undue level of complexity in how the waveforms, the peaks, and the bio-information are to be identified. Therefore, a person would be able to perform the identification of peaks mentally or with a generic computer.
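By way of illustration only, and not as a characterization of the applicant's actual implementation, the recited segmenting and averaging steps amount to calculations of the following kind (Python sketch operating on assumed synthetic data; all names and dimensions are hypothetical):

```python
# Illustrative sketch (hypothetical data): segment frames into
# non-overlapping blocks and average each block per point in time.
import numpy as np

rng = np.random.default_rng(0)
frames = rng.random((100, 64, 64))  # 100 frames of 64x64 "reflectance" values

def segment_average(frames, block=16):
    """Split each frame into non-overlapping block x block segments and
    average the pixel values in each segment, yielding one time series
    (a raw PPG signal) per spatial segment."""
    t, h, w = frames.shape
    # reshape into (time, block rows, block, block cols, block)
    blocks = frames.reshape(t, h // block, block, w // block, block)
    return blocks.mean(axis=(2, 4))  # -> (time, h//block, w//block)

ppg = segment_average(frames)
print(ppg.shape)  # (100, 4, 4): one averaged time series per spatial segment
```

Each spatial segment thus yields one averaged time series, i.e., a signal of the type from which the claim recites extracting PPG information.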
Prong Two: Claim 1 does not include additional elements that integrate the mental process into a practical application.
This judicial exception is not integrated into a practical application. In particular, the claim recites (1) “a camera; the camera being located to be able to record reflectance images of a target skin area;”
(2) “a display device; the useful tissue information is communicated to a user.”
(3) “a non-transitory computer-readable memory and a processor”.
The steps in (1) represent merely data gathering or pre-solution activities that are necessary for use of the recited judicial exception and are recited at a high level of generality with conventionally used tools (see Step 2B below for further details).
The step in (2) represents merely notification outputting by a processor as a post-solution activity and is recited at a high level of generality.
The elements in (3) merely recite generic computer components used as tools to implement the abstract idea.
As a whole, the additional elements merely serve to gather and feed information to the abstract idea and to output a notification based on the abstract idea, while generically implementing it on conventionally used tools. There is no practical application because the abstract idea is not applied, relied on, or used in a meaningful way. No improvement to the technology is evident, and the estimated bio-information is not outputted in any way such that a practical benefit is realized. Therefore, the additional elements, alone or in combination, do not integrate the abstract idea into a practical application.
Step 2B of the subject matter eligibility test (see MPEP 2106.05).
Claim 1 does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above, the claims recite additional steps of (1) “a camera; the camera being located to be able to record reflectance images of a target skin area;”
(2) “a display device; the useful tissue information is communicated to a user.”
(3) “a non-transitory computer-readable memory and a processor”.
These steps represent mere data gathering, data outputting, or pre-/post-/extra-solution activities that are necessary for use of the recited judicial exception and are recited at a high level of generality.
The reflectance images are obtained from a camera. These additional limitations merely represent insignificant, conventional pre-solution activities that are well understood in the industry of camera-based pulse wave estimation, as the sensors recited are well-understood, routine, and conventional, as evidenced by Yoshizawa et al. (US 2020/0288996 A1) (“Yoshizawa”) as mapped below with reference to claim 1.
Accordingly, these additional steps and tools for measuring a pulse wave signal, and outputting a notification amount to no more than insignificant conventional extra-solution activity. Mere insignificant conventional extra-solution activity cannot provide an inventive concept.
The recited processors and computer-readable storage medium are generic computer elements (id., paras. [0021]-[0022], [0073], describing generic computers).
Therefore, claim 1 does not amount to significantly more than the abstract idea itself.
Accordingly, Claim 1 is not patent eligible and rejected under 35 U.S.C. 101 as being directed to abstract ideas implemented on a generic computer in view of the Supreme Court Decision in Alice Corporation Pty. Ltd. v. CLS Bank International, et al. and 2019 PEG.
Dependent Claims
The following dependent claims merely further define the abstract idea and are, therefore, directed to an abstract idea for similar reasons:
The recitations of claims 2-20 merely further limit the abstract idea, further defining the mental process or mathematical concepts discussed above.
Taken alone and in combination, the additional elements do not integrate the judicial exception into a practical application, at least because the abstract idea is not applied, relied on, or used in a meaningful way. They also do not add anything significantly more than the abstract idea. Their collective functions merely provide computer/electronic implementation and processing, adding no elements beyond those of the abstract idea. Viewing the limitations as an ordered combination adds nothing that is not already present when the elements are considered individually. There is no indication that the combination of elements improves the functioning of a computer or output device, or improves any other technology or technical field. Therefore, the claims are rejected as being directed to non-statutory subject matter.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 1 recites the limitation “reflectance images of a target skin area” in lines 6-7. It is unclear how this limitation relates to “reflectance images of a target skin area” as recited in line 4. There is insufficient antecedent basis for this limitation in the claim.
Claim 1 recites the limitation “the reflectance images” in line 8. It is unclear how this limitation relates to “a series of equally spaced in time reflectance images”. There is insufficient antecedent basis for this limitation in the claim.
Claim 1 recites the limitation “each segment” in line 9. It is unclear how this limitation relates to “several non-overlapping spatial segments”. Line 10 further recites “each spatial segment” and “the segment”; the relationship is again unclear. There is insufficient antecedent basis for these limitations in the claim.
Claim 1 recites the limitation “the reflectance” in line 10. It is unclear how this limitation relates to “a series of equally spaced in time reflectance images”. There is insufficient antecedent basis for this limitation in the claim.
Claim 1 recites the limitation “the pixels”. It is unclear how this limitation relates to “an array of pixels”. There is insufficient antecedent basis for this limitation in the claim.
Claim 1 recites the limitation “each point in time” in line 11. The scope of “each point in time” is unclear. There is insufficient antecedent basis for this limitation in the claim.
Claim 1 recites the limitation “the time series of the average reflectance” in line 12. It is unclear how “the time series” relates to “a series of equally spaced in time reflectance images”. There is insufficient antecedent basis for this limitation in the claim.
Claim 1 recites the limitation “the PPG signals” in line 14. It is unclear how this limitation relates to “a set of PPG signals”. There is insufficient antecedent basis for this limitation in the claim.
Claim 1 recites the limitation “the useful tissue information”. It is unclear how this limitation relates to “useful information about blood flow in tissues”. There is insufficient antecedent basis for this limitation in the claim.
Claim 2 recites the limitation “the reflectance signal”. It is unclear how this limitation relates to “a series of equally spaced in time reflectance images”. It is further unclear how the video relates to the reflectance images taken at spaced-apart times; claim 2 now processes PPG data from both the images and the video, but it is unclear how the processing is performed on the camera PPG signals versus the video PPG signals because the claim merges language between the signals.
Claim 2 recites the limitation “the extracted average PPG signals”. It is unclear what this limitation refers to, as discussed above, and whether it refers to image data or video data. There is insufficient antecedent basis for this limitation in the claim.
Claim 3 recites the limitations “the ratio of the sum of Fourier Transform coefficients” and “the zeroth Fourier Transform coefficient”. There is insufficient antecedent basis for these limitations in the claim.
Claim 4 recites the limitations “the block of pixels”; “the vertical field of view”; “the vertical field of view per pixel”; “the largest number of pixels”; and “the desired accuracy”. It is unclear how these limitations relate to “blocks of pixels”. There is insufficient antecedent basis for these limitations in the claim.
Claim 5 recites the limitation “a pulsation indicia”. There is insufficient antecedent basis for this limitation in the claim.
Claim 5 recites the limitation “the pulsation indicia measurement”. There is insufficient antecedent basis for this limitation in the claim.
Claim 5 recites the limitation “its reliability”. There is insufficient antecedent basis for this limitation in the claim.
Claim 6 recites the limitation “the tissue surface”. There is insufficient antecedent basis for this limitation in the claim.
Claim 7 recites the limitation “a video”. There is insufficient antecedent basis for this limitation in the claim.
Claim 7 recites the limitation “the reflectance”. There is insufficient antecedent basis for this limitation in the claim.
Claim 7 recites the limitation “the location of the transition between segments with high and low pulsation indicia”. There is insufficient antecedent basis for this limitation in the claim.
Claim 7 recites the limitation “the jugular venous pulse and pressure”. There is insufficient antecedent basis for this limitation in the claim.
Claim 8 recites the limitation “the skin”. There is insufficient antecedent basis for this limitation in the claim.
Claim 9 recites the limitation “the PPG signal”. There is insufficient antecedent basis for this limitation in the claim.
Claim 11 recites the limitation “the PPG signal”. There is insufficient antecedent basis for this limitation in the claim.
Claim 12 recites the limitation “the target skin”. There is insufficient antecedent basis for this limitation in the claim.
Claim 13 recites the limitation “the reflectance signal”. There is insufficient antecedent basis for this limitation in the claim.
Claim 13 recites the limitations “the data”; “the PPG measurements”; “the time delay between each segment”; “the pulse transit time”; and “the wave velocity”. There is insufficient antecedent basis for these limitations in the claim.
Claim 14 recites the limitation “fps”, which is never defined.
Claim 15 recites the limitation “the reflectance signal”. There is insufficient antecedent basis for this limitation in the claim.
Claim 15 recites the limitations “the data”; “the PPG measurements”; “the time delay between each segment”; and “the mean arterial pressure”. There is insufficient antecedent basis for these limitations in the claim.
Claim 16 recites the limitations “the skin” and “the images”. There is insufficient antecedent basis for these limitations in the claim.
Claim 17 recites the limitation “the target area”. There is insufficient antecedent basis for this limitation in the claim.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-3, 6, 8-12 and 16 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Yoshizawa et al. (US 2020/0288996 A1) (“Yoshizawa”).
Regarding claim 1, Yoshizawa discloses A system for measuring changes in the blood volume in a tissue (plethysmography) comprising (Abstract and entire document):
a camera ([0048], “The video obtaining device 300 is an video camera having a image sensor using a CCD (Charge Coupled Device) or a CMOS (Complementary Metal Oxide Semiconductor) as an imaging device. Three or more light receiving elements such as R (red), G (green), and B (blue) can be mounted on the video obtaining device 300. Further, the video obtaining device 300 may be a reflective photosensor equipped with a green LED.”),
the camera in communication with a non-transitory computer-readable memory and a processor ([0045 – 0046], “The memory 60 is a data storing device that stores various data and programs. In this embodiment, the memory 60 functions as a data storing device, which alternatively may be a volatile memory, such as a RAM (Random Access Memory), a non-volatile memory such as ROM, flash memory, a HDD (Hard Disk Drive), an SSD (Solid State Device), and an optical disk.” And [0044], “cpu 10”; “processor 110”), and
the memory and processor being in communication with a display device ([0051], “An example of the outputting device 400 is a display such as a CRT (Cathode Ray Tube), an LCD (Liquid Crystal Display), or an organic electroluminescence display (Organic Light-Emitting Diode Display). The outputting device 400 can display information processed by a processor 110, information to be stored in the storing unit 160, and the like.”),
the camera being located to be able to record reflectance images of a target skin area ([0048], “The video obtaining device 300 takes a video of a predetermined part of the body of the subject, and obtains a video signal of the predetermined part.” And [0049]);
wherein the camera, processor, memory and display device are configured so that: the camera records a series of equally spaced in time reflectance images of a target skin area ([0121], 120 fps, which is equally spaced apart images. See also, [0096 – 0100], “The skin region extracting unit 122 executes the above skin region extracting process for each frame constituting the video, and sequentially transmits the coordinates of the skin region in each frame to the video pulse wave extracting unit 123.”);
the reflectance images are segmented by the processor into several non-overlapping spatial segments, each segment comprising an array of pixels ([0096 – 0100], “The skin region extracting unit 122 executes the above skin region extracting process for each frame constituting the video, and sequentially transmits the coordinates of the skin region in each frame to the video pulse wave extracting unit 123.”);
the reflectance for each spatial segment over the pixels in the segment is averaged for each point in time ([0100], “The video pulse wave extracting unit 123 extracts the brightness value of the green light by applying a green filter to the skin region of each frame of the video, or by using the brightness value of “G (green)”. Then, the video pulse wave extracting unit 123 extracts the video pulse wave having a temporal change curve by calculating the average values of the brightness value of the green light for each frame.”);
the time series of the average reflectance for each spatial segment is used to extract a set of PPG signals ([0100], “The video pulse wave extracting unit 123 extracts the brightness value of the green light by applying a green filter to the skin region of each frame of the video, or by using the brightness value of “G (green)”. Then, the video pulse wave extracting unit 123 extracts the video pulse wave having a temporal change curve by calculating the average values of the brightness value of the green light for each frame.” Pulse wave extracted);
the PPG signals are processed for to extract useful information about blood flow in tissues ([0106], feature point extraction, [0113] blood pressure value, [0136] pulse transit time); and
the useful tissue information is communicated to a user ([0118] display, [0132], 0136] useful tissue information displayed).
Regarding claim 2, Yoshizawa discloses The system of claim 1, where the useful tissue information is a pulsation indicia, further comprising ([0106], feature point extraction, [0113] blood pressure value, [0136] pulse transit time):
the camera recording at least 6 seconds of video at at least 20 frames per second ([0121], “120 fps” and see FIG. 3 showing at least 10 seconds and [0122 – 0123] discussing further timing);
segmenting the video into blocks of pixels; averaging the reflectance signal for each channel in the video ([0100], “The video pulse wave extracting unit 123 extracts the brightness value of the green light by applying a green filter to the skin region of each frame of the video, or by using the brightness value of “G (green)”. Then, the video pulse wave extracting unit 123 extracts the video pulse wave having a temporal change curve by calculating the average values of the brightness value of the green light for each frame.” Pulse wave extracted);
the step of processing the PPG signals comprises: applying a Fourier Transform or Fast Fourier Transform to the extracted average PPG signals, and using the Fourier Transform coefficients to calculate a pulsation indicia for each segment (FIG. 12 and [0166] discussing Fourier transform and frequency domain to calculate pulse wave characteristics/pulsation indicia).
Regarding claim 3, Yoshizawa discloses The system of claim 2, where the pulsation indicia is calculated as the ratio of the sum of Fourier Transform coefficients corresponding to 0.5-3Hz to the zeroth Fourier Transform coefficient ([0173 - 0180], “As shown in Expression (16), the distortion ratio R.sub.d is the ratio of the sum of the Fourier coefficient b.sub.i of the heartbeat high-frequency component to the sum b.sub.j of the Fourier coefficient of the heartbeat basic component.”).
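By way of illustration only, a ratio of the kind recited in claim 3 can be computed as follows (Python sketch; the test signal, frame rate, and recording length are assumed values, not taken from the claim or the reference):

```python
# Illustrative sketch (assumed values): pulsation indicia as the ratio of
# the summed FFT coefficient magnitudes in the 0.5-3 Hz band to the
# zeroth (DC) FFT coefficient.
import numpy as np

fps = 20.0
t = np.arange(0, 6.0, 1.0 / fps)                  # 6 s of samples at 20 fps
signal = 1.0 + 0.1 * np.sin(2 * np.pi * 1.0 * t)  # 1 Hz "pulse" on a DC baseline

coeffs = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(signal.size, d=1.0 / fps)

band = (freqs >= 0.5) & (freqs <= 3.0)
pulsation_indicia = np.abs(coeffs[band]).sum() / np.abs(coeffs[0])
print(round(pulsation_indicia, 3))  # 0.05 for this synthetic signal
```

A strongly pulsatile segment yields a larger band-to-DC ratio than a segment with little pulsatile content, which is the discriminating quantity the claim describes.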
Regarding claim 6, Yoshizawa discloses The system of claim 2, where the pulsation indicia is calculated for skin displacement caused by pulse wave propagation through blood vessels measured by specular reflection from the tissue surface ([0099], “Light irradiated on the skin of the subject is scattered and absorbed by the subcutaneous tissue, and a part of the light is reflected back to the surface of the skin. At this time, the intensity of reflected light fluctuates depending on the subcutaneous blood flow rate because the light is absorbed by the hemoglobin contained in the blood flow.”).
Regarding claim 8, Yoshizawa discloses The system of claim 6, where the target skin area is pre-treated with a substance that increases the specular reflection of the skin (Pretreatment does not change the system itself).
Regarding claim 9, Yoshizawa discloses The system of claim 6, where the camera is an RGB or RGB-NIR camera and the output of the any or all channels is used to extract the PPG signal ([0048], “Three or more light receiving elements such as R (red), G (green), and B (blue) can be mounted on the video obtaining device 300.”).
Regarding claim 10, Yoshizawa discloses The system of claim 2, where tissue viability assessment is derived from the pulsation indicia ([0106], feature point extraction, [0113] blood pressure value, [0136] pulse transit time).
Regarding claim 11, Yoshizawa discloses The system of claim 10, where the camera is an RGB camera and the output of the green (G) channel is used to extract the PPG signal ([0100], “The video pulse wave extracting unit 123 extracts the brightness value of the green light by applying a green filter to the skin region of each frame of the video, or by using the brightness value of “G (green)”. Then, the video pulse wave extracting unit 123 extracts the video pulse wave having a temporal change curve by calculating the average values of the brightness value of the green light for each frame.” Pulse wave extracted).
Regarding claim 12, Yoshizawa discloses The system of claim 10, where the target skin is illuminated with light in the 540-570 nm wavelengths ([0048], “Three or more light receiving elements such as R (red), G (green), and B (blue) can be mounted on the video obtaining device 300.”).
Regarding claim 16, Yoshizawa discloses The system of claim 1, where an optical clearing agent is applied to the skin before recording of the images (Pretreatment does not change the system itself).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 4 and 7 are rejected under 35 U.S.C. 103 as being unpatentable over Yoshizawa et al. (US 2020/0288996 A1) (“Yoshizawa”) in view of De Haan (US 2016/0171684 A1) (“De Haan”).
Regarding claim 4, Yoshizawa discloses The system of claim 2. Yoshizawa fails to disclose wherein the block of pixels has a maximum NxN size determined by dividing the vertical field of view of the video by the number of pixels in the vertical field of view to obtain the vertical field per pixel, and then setting the maximum block size to the largest number of pixels that will not exceed the desired accuracy.
However, in the same field of endeavor, De Haan teaches wherein the block of pixels has a maximum NxN size determined by dividing the vertical field of view of the video by the number of pixels in the vertical field of view to obtain the vertical field per pixel, and then setting the maximum block size to the largest number of pixels that will not exceed the desired accuracy ([0072] discussing blocking pixels for accuracy within bins determining a max size of the block of pixels).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system as taught by Yoshizawa to include wherein the block of pixels has a maximum NxN size determined by dividing the vertical field of view of the video by the number of pixels in the vertical field of view to obtain the vertical field per pixel, and then setting the maximum block size to the largest number of pixels that will not exceed the desired accuracy, as taught by De Haan, to achieve good noise levels (De Haan [0072], “To achieve such good noise level, pixels may be grouped into blocks (fixed grid) as initial segmentation.”).
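By way of illustration only, the block-size arithmetic recited in claim 4 can be worked through with assumed numbers (all values below are hypothetical and serve only to show the calculation):

```python
# Hypothetical worked example of the claim 4 arithmetic: the vertical
# field of view per pixel bounds how many pixels may be grouped into one
# block without exceeding a desired spatial accuracy.
import math

vertical_fov_mm = 200.0    # assumed vertical field of view of the video
vertical_pixels = 1000     # assumed pixel count in the vertical field of view
desired_accuracy_mm = 1.0  # assumed spatial accuracy target

fov_per_pixel = vertical_fov_mm / vertical_pixels            # 0.2 mm per pixel
max_block = math.floor(desired_accuracy_mm / fov_per_pixel)  # largest N within accuracy
print(max_block)  # 5 -> a maximum 5x5 block of pixels
```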
Regarding claim 7, Yoshizawa discloses The system of claim 6. Yoshizawa fails to disclose where the specular reflection is used for jugular venous pulse monitoring, further comprising: recording a video having at least one channel of the reflectance of the right or left side of the neck from the sternum to the earlobe; and
However, in the same field of endeavor, De Haan teaches where the specular reflection is used for jugular venous pulse monitoring, further comprising: recording a video having at least one channel of the reflectance of the right or left side of the neck from the sternum to the earlobe (FIG. 1-2); and
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to modify the system as taught by Yoshizawa to include where the specular reflection is used for jugular venous pulse monitoring, further comprising: recording a video having at least one channel of the reflectance of the right or left side of the neck from the sternum to the earlobe as taught by De Haan to improve risk stratification ([0026], “It has been found that a vascular displacement waveform, e.g. indicative of a displacement of a carotid artery, can provide valuable information regarding the vascular system. As the displacement waveform closely resembles the (aortic) central pressure waveform, its assessment is recognized as an opportunity for improving cardiovascular risk stratification.”).
Yoshizawa as modified further discloses using the location of the transition between segments with high and low pulsation indicia to determine the jugular venous pulse and pressure (Yoshizawa [0100], “The video pulse wave extracting unit 123 extracts the brightness value of the green light by applying a green filter to the skin region of each frame of the video, or by using the brightness value of “G (green)”. Then, the video pulse wave extracting unit 123 extracts the video pulse wave having a temporal change curve by calculating the average values of the brightness value of the green light for each frame.” Pulse wave extracted and [0115], “The measuring unit 141 reads the conversion information from the reference information storing unit 173, and converts the waveform distortion into a blood pressure value using the conversion information”).
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Yoshizawa et al. (US 2020/0288996 A1) (“Yoshizawa”) in view of Ray et al. (US 2017/0196497 A1) (“Ray”).
Regarding claim 5, Yoshizawa discloses The system of claim 2. Yoshizawa fails to disclose where the determination of a pulsation indicia is made on a continuous basis, and the system automatically makes adjustments to improve the accuracy and repeatability of the determination of the pulsation indicia by the system, and indicates both the pulsation indicia measurement and a measure of its reliability to the user.
However, in the same field of endeavor, Ray teaches where the determination of a pulsation indicia is made on a continuous basis, and the system automatically makes adjustments to improve the accuracy and repeatability of the determination of the pulsation indicia by the system, and indicates both the pulsation indicia measurement and a measure of its reliability to the user ([0076] continuous monitoring and [0038] and claim 8 for machine learning and updating and reliability).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to modify the system as taught by Yoshizawa to include where the determination of a pulsation indicia is made on a continuous basis, and the system automatically makes adjustments to improve the accuracy and repeatability of the determination of the pulsation indicia by the system, and indicates both the pulsation indicia measurement and a measure of its reliability to the user as taught by Ray to improve the system ([0038], “to improve reliability”).
Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Yoshizawa et al. (US 2020/0288996 A1) (“Yoshizawa”) in view of Kim (US 2023/0157561 A1) (“Kim”).
Regarding claim 14, Yoshizawa discloses The system of claim 13. Yoshizawa fails to disclose where the camera uses a rolling shutter and a frame rate of at least 20 fps.
However, in the same field of endeavor, Kim teaches where the camera uses a rolling shutter and a frame rate of at least 20 fps ([0202], “Accordingly, when the image sensor of the electronic device 100 according to an embodiment captures an image in a rolling shutter manner, the PPG signal may be acquired by summing pixel values on a row basis of the 2D array. According to an embodiment of the disclosure, summing the pixel values on a row basis may allow a signal processing speed (a sampling rate) to be increased, compared to that in a scheme of summing all pixel values of the two-dimensional array.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to modify the system as taught by Yoshizawa to include where the camera uses a rolling shutter and a frame rate of at least 20 fps, as taught by Kim, to increase the sampling rate ([0202], “Accordingly, when the image sensor of the electronic device 100 according to an embodiment captures an image in a rolling shutter manner, the PPG signal may be acquired by summing pixel values on a row basis of the 2D array. According to an embodiment of the disclosure, summing the pixel values on a row basis may allow a signal processing speed (a sampling rate) to be increased, compared to that in a scheme of summing all pixel values of the two-dimensional array.”).
Claims 13 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Yoshizawa et al. (US 2020/0288996 A1) (“Yoshizawa”) in view of Burton (US 2021/0169417 A1) (“Burton”).
Regarding claim 13, Yoshizawa discloses The system of claim 1, where the useful tissue information is pulse wave velocity measurements ([0078] discussing PTT and velocity pulse wave), and
segmenting the video into at least two linearly arranged non-overlapping segments; averaging the reflectance signal for each channel in the video ([0100], “The video pulse wave extracting unit 123 extracts the brightness value of the green light by applying a green filter to the skin region of each frame of the video, or by using the brightness value of “G (green)”. Then, the video pulse wave extracting unit 123 extracts the video pulse wave having a temporal change curve by calculating the average values of the brightness value of the green light for each frame.” Pulse wave extracted);
the step of processing the PPG signals comprises: applying smoothing filters; applying moving average filters, detrending the data, and cross-correlating the PPG measurements to find the time delay between each segment, and using the time delay to calculate the pulse transit time; and the pulse transit time is used to calculate the wave velocity ([0078 – 0084], [0100], and [0078] discussing PTT and velocity pulse wave).
Yoshizawa fails to disclose the camera recording at least 10 seconds of video at at least 1000 frames per second.
However, in the same field of endeavor, Burton teaches the camera recording at least 10 seconds of video at at least 1000 frames per second ([1873], 1000 fps).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to modify the system as taught by Yoshizawa to include the camera recording at least 10 seconds of video at at least 1000 frames per second, as taught by Burton, to achieve the required frame rate ([1873]).
Regarding claim 15, Yoshizawa discloses The system of claim 1, where the useful tissue information is remote blood pressure assessment comprising ([0106], feature point extraction, [0113] blood pressure value, [0136] pulse transit time):
segmenting the video into two non-overlapping segments; averaging the reflectance signal for each channel in the video ([0100], “The video pulse wave extracting unit 123 extracts the brightness value of the green light by applying a green filter to the skin region of each frame of the video, or by using the brightness value of “G (green)”. Then, the video pulse wave extracting unit 123 extracts the video pulse wave having a temporal change curve by calculating the average values of the brightness value of the green light for each frame.” Pulse wave extracted);
the step of processing the PPG signals comprises: applying smoothing filters; applying moving average filters, detrending the data, and cross-correlating the PPG measurements to find the time delay between each segment, and using the time delay to calculate the mean arterial pressure ([0078 – 0084], [0100], and [0078] discussing PTT and velocity pulse wave).
Yoshizawa fails to disclose the camera recording at least 10 seconds of video at at least 1000 frames per second.
However, in the same field of endeavor, Burton teaches the camera recording at least 10 seconds of video at at least 1000 frames per second ([1873], 1000 fps).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to modify the system as taught by Yoshizawa to include the camera recording at least 10 seconds of video at at least 1000 frames per second, as taught by Burton, to achieve the required frame rate ([1873]).
Claims 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Yoshizawa et al. (US 2020/0288996 A1) (“Yoshizawa”) in view of Smith et al. (US 2024/026040 A1) (“Smith”).
Regarding claim 17, Yoshizawa discloses the system of claim 1. Yoshizawa fails to disclose further comprising a means to measure distance from the target area to the camera, wherein recording is automatically initiated once the target area is a pre-determined distance from the camera.
However, in the same field of endeavor, Smith teaches further comprising a means to measure distance from the target area to the camera, wherein recording is automatically initiated once the target area is a pre-determined distance from the camera ([0013], “The general purpose of the reference element is to provide a reference having known characteristics (e.g., size, shape, distance, etc.) to enable one or more mathematical calculations relevant to the condition to be monitored to be carried out.” And see [0031] discussing automatic measurements).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to modify the system as taught by Yoshizawa to include further comprising a means to measure distance from the target area to the camera, wherein recording is automatically initiated once the target area is a pre-determined distance from the camera, as taught by Smith, to increase accuracy ([0030], “In some embodiments, the imager may capture more of the reference element in the video, and in such cases, the greater capture of the reference element can enable more accurate determinations of JVP.”).
Regarding claim 18, Yoshizawa discloses the system of claim 17. Yoshizawa fails to disclose where the means to measure distance from the target area to the camera is a reference object.
However, in the same field of endeavor, Smith teaches where the means to measure distance from the target area to the camera is a reference object ([0013], “The general purpose of the reference element is to provide a reference having known characteristics (e.g., size, shape, distance, etc.) to enable one or more mathematical calculations relevant to the condition to be monitored to be carried out.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to modify the system as taught by Yoshizawa to include where the means to measure distance from the target area to the camera is a reference object as taught by Smith to increase accuracy ([0030], “In some embodiments, the imager may capture more of the reference element in the video, and in such cases, the greater capture of the reference element can enable more accurate determinations of JVP.”).
Regarding claim 19, Yoshizawa discloses the system of claim 17. Yoshizawa fails to disclose where the pre-determined distance is selected to increase the robustness of the measurement of the useful information.
However, in the same field of endeavor, Smith teaches where the pre-determined distance is selected to increase the robustness of the measurement of the useful information ([0022], “In some embodiments, an imaging device such as a camera can be mounted on a wall with high resolution and a wide field of view. The wide field of view can capture the neck of a patient. A patient or user can place a locator object and/or reference object in the patient's sternal notch and press a button that activates the imaging device. The imaging device does not need to move, given its wide field of view, and can estimate the JVP as it is imaging the reference object.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to modify the system as taught by Yoshizawa to include where the pre-determined distance is selected to increase the robustness of the measurement of the useful information as taught by Smith to increase accuracy ([0030], “In some embodiments, the imager may capture more of the reference element in the video, and in such cases, the greater capture of the reference element can enable more accurate determinations of JVP.”).
Regarding claim 20, Yoshizawa discloses the system of claim 1. Yoshizawa fails to disclose where image registration is used to remove motion artifacts received from the camera.
However, in the same field of endeavor, Smith teaches where image registration is used to remove motion artifacts received from the camera ([0022], “In some embodiments, an imaging device such as a camera can be mounted on a wall with high resolution and a wide field of view. The wide field of view can capture the neck of a patient. A patient or user can place a locator object and/or reference object in the patient's sternal notch and press a button that activates the imaging device. The imaging device does not need to move, given its wide field of view, and can estimate the JVP as it is imaging the reference object.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to modify the system as taught by Yoshizawa to include where image registration is used to remove motion artifacts received from the camera as taught by Smith to increase accuracy ([0030], “In some embodiments, the imager may capture more of the reference element in the video, and in such cases, the greater capture of the reference element can enable more accurate determinations of JVP.”).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOSEPH A TOMBERS whose telephone number is (571)272-6851. The examiner can normally be reached on M-TH 7:00-16:00, F 7:00-11:00 (Eastern).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Robert Chen can be reached on 571-272-3672. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JOSEPH A TOMBERS/Examiner, Art Unit 3791