Notice of Pre-AIA or AIA Status
The present application is being examined under the pre-AIA first to invent provisions.
Priority
The domestic benefit claim to provisional application AP 61022752 is improper and is not recognized because the subsequent application, PCT/EP/2021/061962 (a 371 national stage application), has a filing date more than 12 months after the filing date of the provisional application. Thus, for purposes of this examination, the application has been examined with the priority date of PCT/EP/2021/061962, which was filed on 5/6/2021. Please see the miscellaneous communication mailed to applicant on 12/17/2025 for further details requesting a response to correct the filing receipt.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:
“a wearable device worn by the subject” in claim 9.
Best support appears to come from para. 23 of applicant’s specification received on 11/8/2022, which recites the following:
“The computing devices depicted in Fig. 1 may include, for example, one or more of: a desktop computing device, a laptop computing device, a tablet computing device, a mobile phone computing device, a computing device of a vehicle of the user (e.g., an in-vehicle communications system, an in-vehicle entertainment system, an in-vehicle navigation system), a standalone interactive speaker (which in some cases may include a vision sensor), a smart appliance such as a smart television (or a standard television equipped with a networked dongle with automated assistant capabilities), and/or a wearable apparatus of the user that includes a computing device (e.g., a watch of the user having a computing device, glasses of the user having a computing device, a virtual or augmented reality computing device). Additional and/or alternative computing devices may be provided.”
Based on this disclosure, the corresponding structure for a wearable device is a watch having a computing device, glasses having a computing device, or a virtual or augmented reality computing device.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of pre-AIA 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(b) the invention was patented or described in a printed publication in this or a foreign country or in public use or on sale in this country, more than one year prior to the date of application for patent in the United States.
Claim(s) 1 is/are rejected under pre-AIA 35 U.S.C. 102(b) as being anticipated by Izci et al. (paper entitled “Cardiac Arrhythmia Detection from 2D ECG Images by Using Deep Learning Technique,” cited as non-patent literature reference 2 on the IDS received on 11/8/2022; copy provided by applicant).
Independent claim:
Regarding claim 1
Izci discloses
A method [see pg. 2 left column last paragraph under I introduction… “This paper proposes a deep learning based new method for detection of five different ECG arrhythmia types. 2-D CNN approach tested on ECG signals that were obtained from MIT-BIH database.”] implemented using one or more processors [see pg. 3 first paragraph under III. Results…. “All experiments were carried out with Intel I7 8300 CPU.”], comprising:
generating a two-dimensional image based on vectorcardiography ("VCG") data, wherein the VCG data is recorded directly or is based on electrocardiogram ("ECG") data measured from a subject [see pg. 2 left column first paragraph under A. Database and segmentation… “ECG signals were taken from MIT-BIH arrhythmia database [21]. The database contains different beat types, which are obtained from 48 records of 47 volunteers.” And pg. 2 right column first paragraph under B. Image Formation… “Despite traditional methods, ECG signals were examined with 2-D image formation in this study. After beat segmentation, each heartbeat was converted into 2-D images.”];
applying the two-dimensional image as input across a machine learning model to generate output, wherein the machine learning model is configured for use in processing two-dimensional images [see pg. 3 first paragraph…. “Deep learning is a part of artificial neural network structure that has differences from conventional machine learning techniques [25]. It includes more than three layers which has many hidden layers. CNN architecture was used in this study which is one of the popular deep learning architectures. It was selected due to success of 2-D data classification [26]. 2-D ECG images directly used as an input that no need removing noise or extracting features. Proposed CNN architecture includes two convolution layer, two pooling layer and a fully connected layer.” And pg. 3 first paragraph under III results… “2-D CNN model was used for differentiate five different arrhythmia types. ECG signals were converted image formation after separating their ECG beats. For training and testing phases of CNN model Keras and TensorFlow libraries were implemented to the model.”]; and
determining a health condition of the subject based on the output [see pg. 3 first paragraph under III results… “2-D CNN model was used for differentiate five different arrhythmia types.”].
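As an illustrative sketch only (not Izci's implementation; the 64x64 image size, function names, and nearest-sample column mapping are assumptions), the image-formation step quoted above, converting a segmented 1-D heartbeat into a 2-D image, might look like:

```python
# Illustrative sketch only, not Izci's implementation: rasterize a segmented
# 1-D heartbeat into a 2-D binary image. The 64x64 size and the nearest-sample
# column mapping are assumptions for illustration.
import math

def beat_to_image(beat, height=64, width=64):
    """Rasterize a 1-D beat (list of amplitudes) into a height x width binary image."""
    lo, hi = min(beat), max(beat)
    span = (hi - lo) or 1.0  # avoid division by zero for a flat beat
    img = [[0] * width for _ in range(height)]
    for col in range(width):
        # Map each image column to the nearest beat sample, then set one pixel
        # at the row corresponding to that sample's amplitude.
        sample = beat[min(int(col * len(beat) / width), len(beat) - 1)]
        row = int((sample - lo) / span * (height - 1))
        img[height - 1 - row][col] = 1  # flip so larger amplitudes sit higher
    return img

# A synthetic single-cycle "beat" stands in for a segmented ECG beat.
beat = [math.sin(2 * math.pi * t / 100) for t in range(100)]
image = beat_to_image(beat)
```

Each column of the resulting image holds exactly one set pixel, tracing the beat's waveform as a curve, which is the kind of 2-D representation a 2-D CNN can consume directly.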
Claim Rejections - 35 USC § 103
The following is a quotation of pre-AIA 35 U.S.C. 103(a) which forms the basis for all obviousness rejections set forth in this Office action:
(a) A patent may not be obtained though the invention is not identically disclosed or described as set forth in section 102, if the differences between the subject matter sought to be patented and the prior art are such that the subject matter as a whole would have been obvious at the time the invention was made to a person having ordinary skill in the art to which said subject matter pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under pre-AIA 35 U.S.C. 103(a) are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 2, 9-12 and 15 is/are rejected under pre-AIA 35 U.S.C. 103(a) as being unpatentable over Izci in view of Villongco et al (US 20190332729) hereafter known as Villongco.
Independent claim:
Regarding claim 10:
Izci discloses a device comprising a processor [see pg. 3 first paragraph under III. Results…. “All experiments were carried out with Intel I7 8300 CPU.”] wherein the processor causes the device to:
generate a two-dimensional image based on vectorcardiography ("VCG") data, wherein the VCG data is either measured directly or is based on electrocardiogram ("ECG") data measured from a subject [see pg. 2 left column first paragraph under A. Database and segmentation… “ECG signals were taken from MIT-BIH arrhythmia database [21]. The database contains different beat types, which are obtained from 48 records of 47 volunteers.” And pg. 2 right column first paragraph under B. Image Formation… “Despite traditional methods, ECG signals were examined with 2-D image formation in this study. After beat segmentation, each heartbeat was converted into 2-D images.”];
apply the multi-layered two-dimensional image as input across a machine learning model to generate output, wherein the machine learning model is configured for use in processing two-dimensional images [see pg. 3 first paragraph…. “Deep learning is a part of artificial neural network structure that has differences from conventional machine learning techniques [25]. It includes more than three layers which has many hidden layers. CNN architecture was used in this study which is one of the popular deep learning architectures. It was selected due to success of 2-D data classification [26]. 2-D ECG images directly used as an input that no need removing noise or extracting features. Proposed CNN architecture includes two convolution layer, two pooling layer and a fully connected layer.” And pg. 3 first paragraph under III results… “2-D CNN model was used for differentiate five different arrhythmia types. ECG signals were converted image formation after separating their ECG beats. For training and testing phases of CNN model Keras and TensorFlow libraries were implemented to the model.”]; and
determine a health condition of the subject based on the output [see pg. 3 first paragraph under III results… “2-D CNN model was used for differentiate five different arrhythmia types.”].
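As context for the quoted architecture (two convolution layers, two pooling layers, and a fully connected layer), the two building blocks can be sketched in miniature. This is a generic illustration, not Izci's Keras/TensorFlow model; the kernel values and input are arbitrary examples:

```python
# Generic sketch of the two building blocks a 2-D CNN stacks, in pure Python:
# a "valid" 2-D convolution and a non-overlapping 2x2 max-pooling layer.
# Kernel values and the input image below are arbitrary examples.
def conv2d(image, kernel):
    """Valid 2-D convolution (really cross-correlation, as in most CNN libraries)."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image) - kh + 1, len(image[0]) - kw + 1
    return [[sum(image[i + a][j + b] * kernel[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(w)] for i in range(h)]

def maxpool2x2(image):
    """Non-overlapping 2x2 max pooling, halving each spatial dimension."""
    return [[max(image[i][j], image[i][j + 1],
                 image[i + 1][j], image[i + 1][j + 1])
             for j in range(0, len(image[0]) - 1, 2)]
            for i in range(0, len(image) - 1, 2)]

img = [[1, 0, 2, 1], [0, 1, 3, 0], [2, 1, 0, 1], [1, 0, 1, 2]]
feat = conv2d(img, [[1, 0], [0, 1]])   # 3x3 feature map
pooled = maxpool2x2(img)               # 2x2 pooled map
```

In a full CNN these two operations are stacked (conv, pool, conv, pool) and the final feature map is flattened into a fully connected classification layer.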
However, while Izci discloses a processor that uses neural networks, Izci is silent as to the structure of the device as a whole, including the processing system, and therefore it is not explicitly clear whether Izci recites memory with instructions. Therefore, Izci fails to disclose “memory, wherein the memory stores instructions that, in response to execution of the instructions by the processor”.
Villongco discloses in the analogous art of ECG and/or VCG cardiovascular diagnostics [see para 59… “The measurements can be represented via a cardiogram such as an electrocardiogram (“ECG”) and a vectorcardiogram (“VCG”), an electroencephalogram (“EEG”), and so on. In some embodiments, a machine learning based on modeled output (“MLMO”) system is provided to generate a classifier by modeling electromagnetic output of the electromagnetic source for a variety of source configurations and using machine learning to train a classifier using derived electromagnetic data that is derived from the modeled electromagnetic output as training data.”] that a known diagnostic device design includes a processing system with both a processor and a memory with instructions (i.e. “memory, wherein the memory stores instructions that, in response to execution of the instructions by the processor”) along with an output device in the form of a display [see para 88… “The computing systems (e.g., network nodes or collections of network nodes) on which the MLMO system and the other described systems may be implemented may include a central processing unit, input devices, output devices (e.g., display devices and speakers), storage devices (e.g., memory and disk drives)” and “The computer-readable storage media are tangible storage means that do not include a transitory, propagating signal. Examples of computer-readable storage media include memory such as primary memory, cache memory, and secondary memory (e.g., DVD) and other storage. 
The computer-readable storage media may have recorded on them or may be encoded with computer-executable instructions or logic that implements the MLMO system and the other described systems.”] and an input device in the form of leads configured to provide ECG measurements of physiological parameters, which are connected to and worn via a smart watch [see para 77… “The electromagnetic fields can be measured by various measuring devices (e.g., electrocardiograph and electroencephalograph) using, for example, one or more (e.g., 12) leads connected to electrodes attached to or adjacent to (e.g., via a smart watch device) a patient's body”].
Since Izci is silent as to all the details of the device, including the entire processing system, and Villongco discloses a known device system for ECG/VCG diagnostics, it would have been obvious to one having ordinary skill in the art at the time the invention was made to modify Izci’s device to include memory with instructions, along with a display and ECG leads worn via a smart watch, similarly to that disclosed by Villongco, as this is a known ECG/VCG device design.
Dependent claims
Regarding claims 2, 9, and 15:
Izci discloses the invention substantially as claimed including all the limitations of claim 1 as outlined above.
However, Izci is silent as to the structure of the whole device. Therefore, Izci fails to disclose “wherein the ECG data comprises multiple waveforms corresponding to multiple ECG leads” as recited by claim 2, “wherein the ECG data comprises single lead data obtained from a wearable device worn by the subject” as recited by claim 9, or “At least one non-transitory computer-readable medium comprising instructions that, in response to execution of the instructions by one or more processors, cause the one or more processors to perform the method of claim 1” as recited by claim 15.
Villongco discloses in the analogous art of ECG and/or VCG cardiovascular diagnostics [see para 59…. “The measurements can be represented via a cardiogram such as an electrocardiogram (“ECG”) and a vectorcardiogram (“VCG”), an electroencephalogram (“EEG”), and so on. In some embodiments, a machine learning based on modeled output (“MLMO”) system is provided to generate a classifier by modeling electromagnetic output of the electromagnetic source for a variety of source configurations and using machine learning to train a classifier using derived electromagnetic data that is derived from the modeled electromagnetic output as training data.”] that a known diagnostic device design includes a processing system with both a processor and a memory with instructions (i.e., “at least one non-transitory computer-readable medium comprising instructions that, in response to execution of the instructions by one or more processors, cause the one or more processors to perform the method of claim 1”) with an output device in the form of a display [see para 88… “The computing systems (e.g., network nodes or collections of network nodes) on which the MLMO system and the other described systems may be implemented may include a central processing unit, input devices, output devices (e.g., display devices and speakers), storage devices (e.g., memory and disk drives)” and “The computer-readable storage media are tangible storage means that do not include a transitory, propagating signal. Examples of computer-readable storage media include memory such as primary memory, cache memory, and secondary memory (e.g., DVD) and other storage. The computer-readable storage media may have recorded on them or may be encoded with computer-executable instructions or logic that implements the MLMO system and the other described systems.”] and an input device in the form of leads configured to provide ECG measurements of physiological parameters, which are connected to and worn via a smart watch (i.e.,
“wherein the ECG data comprises multiple waveforms corresponding to multiple ECG leads” and “wherein the ECG data comprises single lead data obtained from a wearable device worn by the subject”) [see para 77… “The electromagnetic fields can be measured by various measuring devices (e.g., electrocardiograph and electroencephalograph) using, for example, one or more (e.g., 12) leads connected to electrodes attached to or adjacent to (e.g., via a smart watch device) a patient's body”].
Since Izci is silent as to all the details of the device, including the entire processing system, and Villongco discloses a known device system for ECG/VCG diagnostics, it would have been obvious to one having ordinary skill in the art at the time the invention was made to modify Izci’s system to include memory with instructions, a display, and leads configured to provide ECG measurements of physiological parameters, connected to and worn via a smart watch, similarly to that disclosed by Villongco, as this is a known ECG/VCG device design.
Regarding claims 11-12:
See the rejection of claim 10 above, which recites a device with a smart watch (i.e., a wearable device worn by the subject) and a plurality of leads for ECG measurements (i.e., ECG data comprising multiple waveforms corresponding to multiple ECG leads).
Claims 3-4, 6-8 and 13 are rejected under pre-AIA 35 U.S.C. 103(a) as being unpatentable over Izci in view of Villongco as applied to claims 1-2 and 10-12 above, and further in view of Olson et al (US 5803084) hereafter known as Olson.
Regarding claims 3-4 and 6-8:
Izci in view of Villongco discloses the invention substantially as claimed including all the limitations of claims 1-2 as outlined above.
However, Izci in view of Villongco fails to disclose “converting each of the multiple waveforms into a respective single representative beat” as recited in claim 3, “converting the multiple representative beats into three VCG beats, wherein each VCG beat corresponds to a heart vector in one dimension of three-dimensional ("3D") space.” as recited by claim 4, “determining three VCG projections, each VCG projection representing a respective one of the three VCG beats on a spatial plane corresponding to a respective dimension of the 3D space.” as recited by claim 6.
Olson discloses in the analogous art of ECG and/or VCG diagnostics [see Col. 1 lines 5-15… “The present invention relates to devices and methods for displaying the electrical signals from the heart for analysis of heart malfunctions. The present invention comprises a three-dimensional (3-D) cardiographic display which displays at intermittent time intervals electrocardiograph (ECG) heart signals as a series of vectors on a single display and in a single 3-D system which represents each of the three bodily planes, namely, the frontal, the transverse, and the sagittal planes.”] that collecting a mean vector (i.e., single representative beat) provides an accurate representation of how the heart is functioning [see Col. 2 lines 64-67 and Col. 3 lines 1-30… “The resultant or mean vector of all these vectors is the resultant vector which is measured by the external electrodes and is called the QRS vector. As can be appreciated, other mean vectors are created over the other intervals in the ECG cycle in much the same manner are termed appropriately, namely, the mean T-vector and the mean P-vector.”] and converting the beats into three VCG beats, wherein each beat corresponds to a heart vector in one dimension of 3D space, with each VCG projection representing a respective one of the three VCG beats on a spatial plane corresponding to a respective dimension of the 3D space, as a way to help a physician visualize cardiac conditions [see Fig. 2A-2B elements 17-19 (i.e., three beats) and Col. 8 lines 20-42… “Although the 3-D vector display 10 is believed to be far superior than the other displays, by combining the 3-D vector display 10 with these other displays on a single screen, it is believed that most, if not all, known heart conditions can be readily observed. For example, by also projecting the results or terminal points of the vectors 12 simultaneously onto each of the three respective planes (frontal, transverse and sagittal) of the 3-D vector display thereby forming 2-D vector cardiographic projections 17, 18 and 19 on the same screen as shown in FIGS. 2a and 2b, it is much easier for a physician to visualize conditions that may be hidden on the 3-D display 10 without rotating or expanding the display 10.”].
Since Izci in view of Villongco discloses one way to determine cardiac health (i.e., use a CNN) and Olson discloses another way of determining cardiac health conditions (i.e., take the mean and display beats in 3D space), it would have been obvious to one having ordinary skill in the art at the time the invention was filed to modify Izci in view of Villongco’s processor and display to take the mean of multiple waveforms and convert the beats into three VCG beats, wherein each beat corresponds to a heart vector in one dimension of 3D space, with each VCG projection representing a respective one of the three VCG beats on a spatial plane corresponding to a respective dimension of the 3D space on the display, because the combination of two independent ways of determining cardiac information would lead one of ordinary skill in the art to expect a more accurate determination of cardiac health.
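The Olson-style processing discussed above, averaging aligned beats into a single representative beat and projecting 3-D vector samples onto the frontal, transverse, and sagittal planes, can be sketched as follows. This is a generic illustration, not Olson's implementation, and the pairing of x/y/z axes to anatomical planes is an assumption:

```python
# Generic sketch (not Olson's implementation): average time-aligned beats into
# one representative beat, then project 3-D VCG samples onto three orthogonal
# planes by dropping one coordinate. The axis-to-plane pairing is an assumption.
def mean_beat(beats):
    """Average several time-aligned beats (equal-length lists) pointwise."""
    n = len(beats)
    return [sum(vals) / n for vals in zip(*beats)]

def project_vcg(vcg_xyz):
    """Split 3-D VCG samples [(x, y, z), ...] into three 2-D plane projections."""
    frontal = [(x, y) for x, y, z in vcg_xyz]      # drop z
    transverse = [(x, z) for x, y, z in vcg_xyz]   # drop y
    sagittal = [(y, z) for x, y, z in vcg_xyz]     # drop x
    return frontal, transverse, sagittal

beats = [[0.0, 1.0, 0.0], [0.0, 3.0, 0.0]]
rep = mean_beat(beats)                              # [0.0, 2.0, 0.0]
planes = project_vcg([(1.0, 2.0, 3.0), (4.0, 5.0, 6.0)])
```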
Regarding claims 7-8:
Izci in view of Villongco in view of Olson discloses the invention substantially as claimed including all the limitations of claims 1-4 and 6 as outlined above. Also, Izci in view of Villongco in view of Olson discloses using a convolutional neural network (CNN).
However, Izci in view of Villongco in view of Olson fails to disclose all the details of how the CNN trains and classifies. Thus, Izci in view of Villongco in view of Olson fails to disclose: “encoding the three VCG projections into three corresponding layers of the two-dimensional image.” as recited by claim 7 or “wherein the three corresponding layers comprise red, green, and blue” as recited by claim 8.
Villongco further discloses that a known way for a CNN to train and classify is to use three corresponding layers comprising red, green, and blue [see para 90… “The convolutional neural network may be one-dimensional in the sense that it inputs an image that is a single row of pixels with each pixel having a red, green, and blue (“RGB”) value. The MLMO system sets the values of the pixels based on the voltages of a VCG of the training data. The image has the same number of pixels as vectors of a VCG of the training data. The MLMO system sets the red, green, and blue values of a pixel of the image to the x, y, and z values of the corresponding vector of the VCG. For example, if a cycle of a VCG is 1 second long, and the VCG has a vector for each millisecond, then the image is 1 by 1000 pixels. The one-dimensional convolutional neural network (“1D CNN”) trainer 310 learns the weights of activation functions for the convolutional neural network using the training data 301.”].
It would have been obvious to one having ordinary skill in the art at the time the invention was made to modify Izci in view of Villongco in view of Olson to train and classify using three corresponding layers comprising red, green, and blue, similarly to that disclosed by Villongco (i.e., thereby reciting claims 7-8), because this is a known way to train a CNN in the analogous field of ECG and/or VCG cardiovascular diagnostics.
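The encoding Villongco's para 90 describes (a one-row image whose red, green, and blue pixel values carry the x, y, and z of each VCG vector) can be sketched as follows. The 8-bit scaling and the clamping range are assumptions for illustration, since the quoted passage does not specify them:

```python
# Sketch of the RGB encoding described in Villongco's para 90: a 1 x N pixel
# row whose (r, g, b) values carry the (x, y, z) of each VCG vector.
# The 8-bit scaling and the [-1, 1] clamping range are assumptions.
def vcg_to_rgb_row(vcg_xyz, lo=-1.0, hi=1.0):
    """Encode 3-D VCG vectors as one row of (r, g, b) pixels in 0..255."""
    def scale(v):
        v = min(max(v, lo), hi)                    # clamp to expected range
        return int(round((v - lo) / (hi - lo) * 255))
    return [(scale(x), scale(y), scale(z)) for x, y, z in vcg_xyz]

# Two example "vectors": one spanning the range, one mid-scale.
row = vcg_to_rgb_row([(0.0, -1.0, 1.0), (0.5, 0.5, 0.5)])
```

For a 1-second VCG sampled every millisecond, the resulting row would be 1 by 1000 pixels, matching the example in the quoted paragraph.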
Regarding claim 13:
Izci in view of Villongco discloses the invention substantially as claimed including all the limitations of claims 10-12 as outlined above.
However, Izci in view of Villongco fails to disclose:
“further comprising instructions to:
convert each of the multiple waveforms into a respective single representative beat;
convert the multiple representative beats into a plurality of VCG beats, wherein each VCG beat corresponds to a heart vector in one dimension of multi-dimensional space; and
determine a plurality of VCG projections, each VCG projection representing a respective one of the plurality of VCG beats on a spatial plane corresponding to a respective dimension of the multi-dimensional space” as recited by claim 13.
Olson discloses in the analogous art of ECG and/or VCG diagnostics [see Col. 1 lines 5-15… “The present invention relates to devices and methods for displaying the electrical signals from the heart for analysis of heart malfunctions. The present invention comprises a three-dimensional (3-D) cardiographic display which displays at intermittent time intervals electrocardiograph (ECG) heart signals as a series of vectors on a single display and in a single 3-D system which represents each of the three bodily planes, namely, the frontal, the transverse, and the sagittal planes.”] that collecting a mean vector (i.e., single representative beat) provides an accurate representation of how the heart is functioning [see Col. 2 lines 64-67 and Col. 3 lines 1-30… “The resultant or mean vector of all these vectors is the resultant vector which is measured by the external electrodes and is called the QRS vector. As can be appreciated, other mean vectors are created over the other intervals in the ECG cycle in much the same manner are termed appropriately, namely, the mean T-vector and the mean P-vector.”] and converting the beats into three VCG beats, wherein each beat corresponds to a heart vector in one dimension of 3D space, with each VCG projection representing a respective one of the three VCG beats on a spatial plane corresponding to a respective dimension of the 3D space, as a way to help a physician visualize cardiac conditions [see Fig. 2A-2B elements 17-19 (i.e., three beats) and Col. 8 lines 20-42… “Although the 3-D vector display 10 is believed to be far superior than the other displays, by combining the 3-D vector display 10 with these other displays on a single screen, it is believed that most, if not all, known heart conditions can be readily observed. For example, by also projecting the results or terminal points of the vectors 12 simultaneously onto each of the three respective planes (frontal, transverse and sagittal) of the 3-D vector display thereby forming 2-D vector cardiographic projections 17, 18 and 19 on the same screen as shown in FIGS. 2a and 2b, it is much easier for a physician to visualize conditions that may be hidden on the 3-D display 10 without rotating or expanding the display 10.”].
Since Izci in view of Villongco discloses one way to determine cardiac health (i.e., use a CNN) and Olson discloses another way of determining cardiac health conditions (i.e., take the mean and display beats in 3D space), it would have been obvious to one having ordinary skill in the art at the time the invention was filed to modify Izci in view of Villongco’s processor, memory, and display to take the mean of multiple waveforms and convert the beats into three VCG beats, wherein each beat corresponds to a heart vector in one dimension of 3D space, with each VCG projection representing a respective one of the three VCG beats on a spatial plane corresponding to a respective dimension of the 3D space on the display, because the combination of two independent ways of determining cardiac information would lead one of ordinary skill in the art to expect the accumulation of greater data, leading to the expectation of a more accurate determination of cardiac health.
Claims 5 and 14 are rejected under pre-AIA 35 U.S.C. 103(a) as being unpatentable over Izci in view of Villongco in view of Olson as applied to claims 1-4, 10 and 12-13 above, and further in view of Mestha et al (US 20130345569) hereafter known as Mestha.
Regarding claim 5:
Izci in view of Villongco in view of Olson discloses the invention substantially as claimed including all the limitations of claims 1-4 as outlined above.
However, Izci in view of Villongco in view of Olson fails to disclose “upsampling the three VCG beats” as recited by claim 5.
Mestha discloses in the analogous art of ECG and/or VCG cardiovascular diagnostics [see para 18… “Since pulse signals from video images correlate with PPG and ECG peaks, the teachings hereof are directed to detecting such episodes by measuring peak-to-peak intervals from the blood volume (also called cardiac volumetric) signals extracted from time-series signals generated from video images of the subject.”] that upsampling the ECG-related data will increase the accuracy of the data [see para 48… “To increase the accuracy of peak-to-peak interval, the time-series signal can also be pre-upsampled to a standard sampling frequency such as, for instance, 256 Hz.”].
It would have been obvious to one having ordinary skill in the art at the time the invention was made to modify Izci in view of Villongco in view of Olson’s VCG beats (i.e., ECG-related data) by upsampling them, similarly to that disclosed by Mestha, because this would increase the accuracy of Izci in view of Villongco in view of Olson’s data.
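The pre-upsampling Mestha describes (resampling a time-series signal toward a standard rate such as 256 Hz) can be sketched with simple linear interpolation. The integer-factor interface below is an assumption for illustration, not Mestha's method:

```python
# Illustrative sketch only: upsample a sampled beat by an integer factor using
# linear interpolation, in the spirit of Mestha's pre-upsampling toward a
# standard rate (e.g., 256 Hz). The factor-based interface is an assumption.
def upsample_linear(signal, factor):
    """Upsample a list of samples by an integer factor via linear interpolation."""
    out = []
    for i in range(len(signal) - 1):
        a, b = signal[i], signal[i + 1]
        for k in range(factor):
            # Insert `factor` evenly spaced points between consecutive samples.
            out.append(a + (b - a) * k / factor)
    out.append(signal[-1])  # keep the final original sample
    return out

up = upsample_linear([0.0, 1.0, 0.0], 2)   # [0.0, 0.5, 1.0, 0.5, 0.0]
```

Denser sampling of the beat gives finer resolution of peak locations, which is the accuracy benefit the quoted passage attributes to pre-upsampling.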
Regarding claim 14:
Izci in view of Villongco in view of Olson discloses the invention substantially as claimed including all the limitations of claims 10 and 12-13 as outlined above.
However, Izci in view of Villongco in view of Olson fails to disclose “further comprising instructions to upsample the plurality of VCG beats” as recited by claim 14.
Mestha discloses in the analogous art of ECG and/or VCG cardiovascular diagnostics [see para 18… “Since pulse signals from video images correlate with PPG and ECG peaks, the teachings hereof are directed to detecting such episodes by measuring peak-to-peak intervals from the blood volume (also called cardiac volumetric) signals extracted from time-series signals generated from video images of the subject.”] that upsampling the ECG-related data will increase the accuracy of the data [see para 48… “To increase the accuracy of peak-to-peak interval, the time-series signal can also be pre-upsampled to a standard sampling frequency such as, for instance, 256 Hz.”].
It would have been obvious to one having ordinary skill in the art at the time the invention was made to modify Izci in view of Villongco in view of Olson’s VCG beats (i.e., ECG-related data) by upsampling them, similarly to that disclosed by Mestha, because this would increase the accuracy of Izci in view of Villongco in view of Olson’s data.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SEBASTIAN X LUKJAN whose telephone number is (571)270-7305. The examiner can normally be reached Monday - Friday 9:30AM-6PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, NIKETA PATEL can be reached at 571-272-4156. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
SEBASTIAN X LUKJAN
/SXL/Examiner, Art Unit 3792
/NIKETA PATEL/Supervisory Patent Examiner, Art Unit 3792