Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Status of Claims
The present Office Action is pursuant to Applicant’s communication filed on 09-08-2023. This application claims the benefit of provisional application No. 63/414,196, filed 10/07/2022.
Claim Objections
Claim(s) 15-20 are objected to as follows:
Claim 15’s limitation “The system of claim 1”: the Office interprets “claim 1” as being a typo in place of “claim 14”;
Claim 16’s limitation “The system of claim 1”: the Office interprets “claim 1” as being a typo in place of “claim 14”;
Claim 17’s limitation “The system of claim 1”: the Office interprets “claim 1… reporti” as being a typo in place of “claim 14”; additionally “reporti” is interpreted as “reporting”;
Claim 18’s limitation “The system of claim 11”: the Office interprets “claim 11” as being a typo in place of “claim 14”;
Claim 19’s limitation “The system of claim 12”: the Office interprets “claim 12” as being a typo in place of “claim 18”;
Claim 20’s limitation “The system of claim 12”: the Office interprets “claim 12” as being a typo in place of “claim 18”.
Claim objections must be corrected and will not be held in abeyance.
Patentability Summary
Independent claims 1 and 14 and dependent claims 2-13 and 15-20 are directed to a technical solution to a technical problem associated with analyzing patient data; identifying features of the data indicative of at least one health outcome of a set of health outcomes corresponding to a patient classification of the patient; predicting the health outcome based on first, second, and/or third feature sets extracted from video data, audio data, and semantic text data, employing a model generated by a machine-learning engine; and reporting the predicted health outcome.
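Purely for illustration, the pipeline summarized above may be sketched as follows; every function name, feature name, and threshold below is a hypothetical assumption, not drawn from Applicant’s disclosure or the cited art:

```python
# Hypothetical sketch of the summarized pipeline; all identifiers and
# values here are illustrative assumptions only.

def extract_streams(video_stream):
    # Extract video data, audio data, and semantic text data from the stream.
    return video_stream["video"], video_stream["audio"], video_stream["text"]

def identify_features(data, indicative_keys):
    # Retain only the features flagged (e.g., by an ML engine) as indicative
    # of a health outcome for the patient's classification.
    return {k: v for k, v in data.items() if k in indicative_keys}

def predict_outcome(feature_sets, threshold=1.0):
    # Toy stand-in for a trained machine-learning model: sum feature values
    # across the first, second, and third feature sets.
    score = sum(v for fs in feature_sets for v in fs.values())
    return "at-risk" if score > threshold else "baseline"

def report_outcome(record, outcome):
    # Report the predicted health outcome, e.g., into a digital medical record.
    record.setdefault("predicted_outcomes", []).append(outcome)
    return record
```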
Thus, based on the aforementioned summary, the combination of limitations recited in the aforementioned claims is patent eligible.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
Claims 1-20, including independent claims 1 and 14, are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for pre-AIA applications the inventor(s), at the time the application was filed, had possession of the claimed invention. The rejection is based upon MPEP 2161.01, the relevant portion of which is reproduced here:
I. DETERMINING WHETHER THERE IS ADEQUATE WRITTEN DESCRIPTION FOR A COMPUTER-IMPLEMENTED FUNCTIONAL CLAIM LIMITATION
When examining computer-implemented functional claims, examiners should determine whether the specification discloses the computer and the algorithm (e.g., the necessary steps and/or flowcharts) that perform the claimed function in sufficient detail such that one of ordinary skill in the art can reasonably conclude that the inventor invented the claimed subject matter. Specifically, if one skilled in the art would know how to program the disclosed computer to perform the necessary steps described in the specification to achieve the claimed function and the inventor was in possession of that knowledge, the written description requirement would be satisfied. Id. If the specification does not provide a disclosure of the computer and algorithm in sufficient detail to demonstrate to one of ordinary skill in the art that the inventor possessed the invention including how to program the disclosed computer to perform the claimed function, a rejection under 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph, for lack of written description must be made. For more information regarding the written description requirement, see MPEP § 2161.01- § 2163.07(b). The specification does not indicate that the Applicant had possession of the invention. See also LizardTech, Inc. v. Earth Res. Mapping, Inc., 424 F.3d 1336, 1343-46 (Fed.Cir. 2005) and MPEP § 2161.01- § 2163.07(b). Specifically, the following limitation(s) element(s) are at issue:
“analyzing … to identify … video features … audio features … semantic text features…”; [claim(s) 1, 14]
“predicting … health outcome of … patient based on … first, second and/or third feature sets… generated by [a] … machine-learning engine”; [claim(s) 1, 14]
As currently written, Applicant’s claims read as a generic invention capable of performing said element(s). While Applicant discusses these elements at a high level of generality (see disclosure citations), the disclosure refers to these limitations only in general terms without providing specific technical details; such functional language amounts to a wish for a final result rather than a description of how that result is achieved. Disclosure of function alone does not satisfy the written description requirement; it amounts to little more than a wish for possession (see Eli Lilly, 119 F.3d at 1568, 43 USPQ2d at 1406 (written description requirement not satisfied by merely providing “a result that one might achieve if one made that invention”); In re Wilder, 736 F.2d 1516, 1521, 222 USPQ 369, 372-73 (Fed. Cir. 1984)). Not only has Applicant failed to provide support for the genus of said element(s), but the specific examples provided by Applicant illustrate the aforementioned steps only at a high level of generality or abstraction and do not adequately disclose the specific algorithm and/or sequence of solution steps required to demonstrate possession of the aforesaid element(s).
The Office notes that one of skill in the art may be able to make and use an invention that comprises the aforementioned steps. However, this finding supports the enablement requirement of § 112(a) rather than the written description requirement. Relatedly, “predictability or lack thereof in the art refers to the ability of one skilled in the art to extrapolate the disclosed or known results to the claimed invention1”, a determination which likewise supports the enablement requirement rather than the written description requirement2; it is even “possible for a specification to enable the practice of invention as broadly as it is claimed, and still not describe that invention”3. It should also be noted that “conclusive evidence of a claim's enablement is not equally conclusive of that claim's satisfactory written description” and that “[t]he ‘written description’ requirement implements the principle that a patent must describe the technology that is sought to be patented; the requirement serves both to satisfy the inventor’s obligation to disclose the technologic knowledge upon which the patent is based, and to demonstrate that the patentee was in possession of the invention that is claimed.” Capon v. Eshhar, 418 F.3d 1349, 1357, 76 USPQ2d 1078, 1084 (Fed. Cir. 2005). Further, the written description requirement promotes the progress of the useful arts by ensuring that patentees adequately describe their inventions in their patent specifications in exchange for the right to exclude others from practicing the invention for the duration of the patent’s term. Compliance with the written description requirement is a question of fact which must be resolved on a case-by-case basis. Vas-Cath, Inc. v. Mahurkar, 935 F.2d at 1563, 19 USPQ2d at 1116 (Fed. Cir. 1991)4.
The Applicant is duly reminded that, as stated in the MPEP regarding “determining whether there is adequate written description for a computer-implemented functional claim limitation”, if “the specification does not provide a disclosure of the computer and algorithm in sufficient detail to demonstrate to one of skill in the art that the inventor possessed the invention, including how to program the disclosed computer to perform the claimed functions, a rejection under 35 U.S.C. 112, first paragraph for lack of written description must be made”5. Consequently, independent claims 1 and 14 and dependent claims 2-13 and 15-20 are rejected as lacking sufficient detail such that one of ordinary skill in the art can reasonably conclude that the Applicant invented the claimed subject matter.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Roots (US 2022/0310215) in view of Muzammel6.
Regarding claims 1 and 14, Roots discloses: A method for predicting a health outcome of a patient (claim 1); A system for predicting a health outcome of a patient (claim 14), the system comprising: a video camera configured to capture a video stream of a patient; a processor configured to receive the video stream of the patient; and computer readable memory encoded with instructions that, when executed by the processor, cause the system to perform the method comprising [¶62]:
extracting video data, audio data, and semantic text data from a video stream configured to capture the patient (i.e., ingesting video, audio data); [¶36: “capturing audio and video data of a patient”]
Regarding [a], Roots does not explicitly disclose, but Muzammel discloses, semantic text data (i.e., extracting speech transcriptions associated with a patient based on speech recognition7); [Page 5].
Roots discloses:
analyzing the video data to identify a first feature set of video features identified by a computer-implemented machine-learning engine as being indicative of at least one health outcome of a set of health outcomes corresponding to a patient classification of the patient (i.e., wherein video features identified include a digital biomarker, eye direction corresponding to gaze); [¶50]
Roots discloses:
analyzing the audio data to identify a second feature set of audio features identified by the computer-implemented machine-learning engine as being indicative of at least one health outcome of the set of health outcomes corresponding to the patient classification of the patient (i.e., audio features identified include pitch8); [¶36]
Regarding [d]-[e], Roots does not explicitly disclose, but Muzammel discloses:
analyzing the semantic text data to identify a third feature set of semantic text features identified by the computer-implemented machine-learning engine as being indicative of at least one health outcome of the set of health outcomes corresponding to the patient classification of the patient (i.e., wherein identified features include audio transcriptions comprising spoken words9, which may include emotional content); [Page 5, section 4.1.2.]
Regarding [e], Roots discloses: feature combinations of two features (i.e., having combinations of audio and video biomarker features10). [¶50]
Regarding [e], Roots does not explicitly disclose, but Muzammel discloses:
predicting the predicted health outcome of the patient based on the first, second, and/or third feature sets, wherein the predicted health outcome is predicted using a computer-implemented machine-learning model generated by the computer-implemented machine-learning engine (i.e., learning multimodal features corresponding to feature concatenation11 associated with adjustment of model coefficients associated with feature concatenation associated with multimodal features of mental ailments); [Page 3, 11, 15, 18: adaptively learning weights associated with aiding clinicians in accurate diagnosis and patient monitoring]
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have modified Roots, including mechanism(s) [a]-[f], as taught by Muzammel. One of ordinary skill would have been so motivated to employ said mechanism(s) to facilitate optimal strategies for fusing multimodal features for depression detection and assessment. [Pages 1-15]
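The “feature concatenation” (early fusion) relied upon above can be illustrated with a minimal sketch; the function name and inputs are assumptions for illustration only, not drawn from the cited art:

```python
def concatenate_modalities(video_feats, audio_feats, text_feats):
    # Early fusion by concatenation: per-modality feature vectors are joined
    # into one multimodal vector consumed by a single downstream model.
    return list(video_feats) + list(audio_feats) + list(text_feats)
```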
Roots discloses:
reporting the predicted health outcome (i.e., providing predictions to a provider via an EHR, consistent with Applicant Specification12); [FIG 1, ¶¶27, 28: a “medical professional may have access” to “user’s medical records” directed to a “treatment trajectory prediction” associated with facilitating the medical professional’s “diagnosis or treatment decision with relatively less time”]
Regarding claim(s) 2, 15, Roots-Muzammel as a combination discloses: The method of claim 1, Roots disclosing: wherein reporting the predicted health outcome includes: automatically reporting the predicted health outcome to a digital medical record associated with the patient (i.e., providing predictions to a provider via an EHR). [FIG 1, ¶¶27, 28: a “medical professional may have access” to “user’s medical records” directed to a “treatment trajectory prediction” associated with facilitating the medical professional’s “diagnosis or treatment decision with relatively less time”]
Regarding claim(s) 3, 16, Roots-Muzammel as a combination discloses: The method of claim 1, The system of claim 14, Roots disclosing: wherein reporting the predicted health outcome includes: automatically reporting the predicted health outcome to a medical care facility in which the patient is being cared for (i.e., providing predictions to a provider via an EHR). [FIG 1, ¶¶27, 28: a “medical professional may have access” to “user’s medical records” directed to a “treatment trajectory prediction” associated with facilitating the medical professional’s “diagnosis or treatment decision with relatively less time”]
Regarding claim(s) 4, 17, Roots-Muzammel as a combination discloses: The method of claim 1, The system of claim 14, Roots disclosing: wherein reporting the predicted health outcome includes: automatically reporting the predicted health outcome to a medical doctor who is caring for the patient (i.e., providing predictions to a provider via an EHR). [FIG 1, ¶¶27, 28: a “medical professional may have access” to “user’s medical records” directed to a “treatment trajectory prediction” associated with facilitating the medical professional’s “diagnosis or treatment decision with relatively less time”]
Regarding claim(s) 5, 18, Roots-Muzammel as a combination discloses: The method of claim 1, The system of claim 14, wherein the computer-implemented machine-learning model has been trained to predict the predicted health care outcome, wherein training of the computer-implemented machine-learning model includes:
Roots disclosing:
extracting training video data, training audio data, and training semantic text data from a plurality of training video streams of a corresponding plurality of training patients (i.e., employing a patient reading a script or answering relatively easy questions to compile data); [¶36]
Regarding [a], Roots does not explicitly disclose, but Muzammel discloses, semantic text data (i.e., extracting speech transcriptions associated with a patient based on speech recognition); [Page 5].
Roots discloses:
analyzing the training video data to identify a first training feature set of video features (i.e., wherein video features identified include a digital biomarker, eye direction corresponding to gaze); [¶50]
Regarding [a]-[e], Roots discloses predicting health outcomes (i.e., comprising remission, response, nonresponse); [¶16]
Roots discloses:
analyzing the training audio data to identify a second training feature set of audio features (i.e., audio features identified include pitch13); [¶36]
Regarding [d], Roots does not explicitly disclose, but Muzammel discloses:
analyzing the training semantic text data to identify a third training feature set of semantic text features (i.e., wherein identified features include audio transcriptions comprising spoken words14, which may include emotional content); [Page 5, section 4.1.2.]
Regarding [e], Roots discloses: feature combinations of two features (i.e., having combinations of audio and video biomarker features15). [¶50]
Regarding [e], Roots does not explicitly disclose, but Muzammel discloses:
receiving a plurality of known training health outcomes corresponding to each of the plurality of training patients captured in the plurality of training video streams; and determining general model coefficients of the computer-implemented machine- learning model, such general model coefficients determined so as to improve a correlation between a plurality of known training health outcomes and a plurality of training patient health outcomes as determined by the computer-implemented machine-learning model (i.e., learning multimodal features corresponding to feature concatenation16 associated with adjustment of model coefficients associated with feature concatenation associated with multimodal features of mental ailments); [Page 3, 11, 15, 18: adaptively learning weights associated with aiding clinicians in accurate diagnosis and patient monitoring]
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have modified Roots, including mechanism(s) [a]-[e], as taught by Muzammel. One of ordinary skill would have been so motivated to employ said mechanism(s) to facilitate optimal strategies for fusing multimodal features for depression detection and assessment. [Pages 1-15]
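The recited determination of general model coefficients, so as to improve the correspondence between known training outcomes and model-predicted outcomes, can be sketched under the assumption of a simple linear model trained by stochastic gradient descent; this choice of model and training rule is an illustrative assumption, not one identified in the record:

```python
def fit_coefficients(features, outcomes, lr=0.05, epochs=500):
    # Adjust linear-model coefficients to reduce the squared error between
    # the known training outcomes and the model's predicted outcomes.
    w = [0.0] * len(features[0])
    for _ in range(epochs):
        for x, y in zip(features, outcomes):
            err = sum(wj * xj for wj, xj in zip(w, x)) - y
            for j, xj in enumerate(x):
                w[j] -= lr * err * xj
    return w
```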
Regarding claim(s) 6, 19, Roots-Muzammel as a combination discloses: The method of claim 2, Muzammel disclosing [a]: wherein training of the computer-implemented machine-learning model further includes: selecting model features from the first, second, and third feature sets, the model features selected as being indicative of the known training health outcomes corresponding to the plurality of training patients captured in the plurality of training video streams (i.e., learning multimodal features corresponding to feature concatenation17 associated with adjustment of model coefficients associated with feature concatenation associated with multimodal features of mental ailments). [Page 3, 11, 15, 18: adaptively learning weights associated with aiding clinicians in accurate diagnosis and patient monitoring]
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have modified Roots, including mechanism(s) [a], as taught by Muzammel. One of ordinary skill would have been so motivated to employ said mechanism(s) to facilitate optimal strategies for fusing multimodal features for depression detection and assessment. [Pages 1-15]
Regarding claim(s) 7, 20, Roots-Muzammel as a combination discloses: The method of claim 2, Roots disclosing: wherein the video stream of the patient is added to the plurality of training videos along with a known health outcome of the patient (i.e., collecting video data collected from a plurality of remote patients). [¶13: “utilizing data collected … plurality of remote patients”]
Regarding claim(s) 8, Roots-Muzammel as a combination discloses: The method of claim 1, Muzammel disclosing: wherein the first feature set includes metrics related to:
a number of times a first video feature occurs;
a frequency of occurrences of the first video feature (i.e., gaze18); [Page 3]
a time period between occurrences of the first video feature; and/or
a time period between occurrences of the first video feature and a second video feature;
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have modified Roots, including mechanism(s) [b], as taught by Muzammel. One of ordinary skill would have been so motivated to employ said mechanism(s) to facilitate optimal strategies for fusing multimodal features for depression detection and assessment. [Pages 1-15]
Regarding claim(s) 9, Roots-Muzammel as a combination discloses: The method of claim 1, Roots disclosing: wherein the second feature set includes metrics related to:
a number of times a first audio feature occurs;
a frequency of occurrences of the first audio feature;
a time period between occurrences of the first audio feature (i.e., an audio biomarker such as inter-word pause length); [¶50] and/or
a time period between occurrences of the first audio feature and a second audio feature;
Regarding claim(s) 10, Roots-Muzammel as a combination discloses: The method of claim 1, Muzammel disclosing: wherein the third feature set includes metrics related to:
a number of times a first semantic text feature occurs (i.e., the number of times a particular spoken word transcription of a word appears); [Page 5, section 4.1.2. related to a phrase “which car[r]ies significant information about the depressive state of the participant”]
a frequency of occurrences of the first semantic text feature;
a time period between occurrences of the first semantic text feature; and/or
a time period between occurrences of the first semantic text feature and a second semantic text feature;
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have modified Roots, including mechanism(s) [a], as taught by Muzammel. One of ordinary skill would have been so motivated to employ said mechanism(s) to facilitate optimal strategies for fusing multimodal features for depression detection and assessment. [Pages 1-15]
Regarding claim(s) 11, Roots-Muzammel as a combination discloses: The method of claim 1, Roots disclosing: further comprising: generating a fourth feature set that includes feature combinations of at least two of:
a video feature, an audio feature, and a semantic text feature (i.e., having combinations of audio and video biomarker features19). [¶50]
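Generating a fourth feature set from combinations of two features, as recited, can be illustrated with a minimal sketch; the use of pairwise products as the combination rule is an assumption chosen only for illustration:

```python
from itertools import combinations

def combine_features(features):
    # Form a fourth feature set from pairwise combinations of video, audio,
    # and semantic text features (here, as products of feature pairs).
    return {f"{a}*{b}": features[a] * features[b]
            for a, b in combinations(sorted(features), 2)}
```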
Regarding claim(s) 12, Roots-Muzammel as a combination discloses: The method of claim 8, Muzammel disclosing: wherein the fourth feature set includes metrics related to:
a number of times a feature combination occurs;
a frequency of occurrences of the feature combination (i.e., gaze20); [Page 3]
a time period between occurrences of the feature combination; and/or
a time period between occurrences of a first feature combination and a second feature combination;
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have modified Roots, including mechanism(s) [b], as taught by Muzammel. One of ordinary skill would have been so motivated to employ said mechanism(s) to facilitate optimal strategies for fusing multimodal features for depression detection and assessment. [Pages 1-15]
Regarding claim(s) 13, Roots-Muzammel as a combination discloses: The method of claim 1, Muzammel disclosing: wherein the set of alerting behaviors includes a mental state of the patient, the mental state is based on the first feature set, the second feature set, the third feature set, and a multidimensional mental-state model, wherein:
the multidimensional mental-state model includes a first dimension, a second dimension, and a third dimension (i.e., wherein states are characterized by multimodal features corresponding to feature concatenation21); [Page 4]
the first dimension corresponds to a first aspect of mental state (i.e., wherein states are characterized by multimodal features corresponding to feature concatenation22); [Page 4]
the second dimension corresponds to a second aspect of mental state (i.e., wherein states are characterized by multimodal features corresponding to feature concatenation23); [Page 4] and
the third dimension corresponds to a third aspect of mental state (i.e., wherein states are characterized by multimodal features corresponding to feature concatenation24); [Page 4]
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have modified Roots, including mechanism(s) [a]-[d], as taught by Muzammel. One of ordinary skill would have been so motivated to employ said mechanism(s) to facilitate optimal strategies for fusing multimodal features for depression detection and assessment. [Pages 1-15]
Conclusion
The prior art made of record25 and not relied upon is considered pertinent to applicant's disclosure: Poria.
Affective computing is an emerging interdisciplinary research field bringing together researchers and practitioners from various fields, ranging from artificial intelligence, natural language processing, to cognitive and social sciences. With the proliferation of videos posted online (e.g., on YouTube, Facebook, Twitter) for product reviews, movie reviews, political views, and more, affective computing research has increasingly evolved from conventional unimodal analysis to more complex forms of multimodal analysis. This is the primary motivation behind our first of its kind, comprehensive literature review of the diverse field of affective computing. Furthermore, existing literature surveys lack a detailed discussion of state of the art in multimodal affect analysis frameworks, which this review aims to address. Multimodality is defined by the presence of more than one modality or channel, e.g., visual, audio, text, gestures, and eye gaze. In this paper, we focus mainly on the use of audio, visual and text information for multimodal affect analysis, since around 90% of the relevant literature appears to cover these three modalities. Following an overview of different techniques for unimodal affect analysis, we outline existing methods for fusing information from different modalities. As part of this review, we carry out an extensive study of different categories of state-of-the-art fusion techniques, followed by a critical analysis of potential performance improvements with multimodal analysis compared to unimodal analysis. A comprehensive overview of these two complementary fields aims to form the building blocks for readers, to better understand this challenging and exciting research field.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL EZEWOKO whose telephone number is 571-272-7850. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Marc Jimenez, can be reached at 571-272-4530. The fax phone number for the organization where this application or proceeding is assigned is 571-273-7850.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MICHAEL I EZEWOKO/Primary Examiner, Art Unit 3682
1 As an example, regarding limitations i and ii: with regard to i, Applicant’s disclosure (¶59) does not provide implementation details for how the identification of features is performed beyond referring generally to feature extraction; similarly, the disclosure (¶¶44-46, 69, 125) does not describe how limitation ii is implemented beyond generalities of employing machine learning and determining model coefficients through linear regression.
2 MPEP 2164.03: “The “predictability or lack thereof” in the art refers to the ability of one skilled in the art to extrapolate the disclosed or known results to the claimed invention.”
3 Vas-Cath Inc. v. Mahurkar, 935 F.2d at 1561 (emphasis added): “One may wonder what purpose a separate “written description” requirement serves, when the second paragraph of § 112 expressly requires that the applicant conclude his specification “with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention” One explanation is historical: the “written description” requirement was a part of the patent statutes at a time before claims were required. A case in point is Evans v. Eaton, 20 U.S. (7 Wheat.) 356, 5 L.Ed. 472 (1822), in which the Supreme Court affirmed the circuit court’s decision that the plaintiff’s patent was “deficient” and that the plaintiffs could not recover infringement thereunder. The patent laws then in effect, namely the Patent Act of 1793, did not require claims, but did require, in its 3d section, that the patent applicant “deliver a written description of his invention, and of the manner of using, or process of compounding, the same, in such full, clear and exact terms, as to distinguish the same from all things before known, and to enable any person skilled in the art or science of which it is a branch, or with which it is most nearly connected, to make, compound and use the same” (emphasis in original).
4 MPEP 2163
5 MPEP 2161.01
6 See Form 892: Non-Patent Literature
7 Consistent with Applicant Specification, ¶62
8 Consistent with Applicant Specification ¶61
9 Consistent with Applicant Specification ¶56
10 ¶50:
video biomarkers comprising but not limited to “facial expressions, eye directions [gaze] … or expressions that correspond to particular behaviors”;
audio biomarkers comprising but not limited to “measures of pitch, intonation”
11 Page 4 discusses “fusing multimodal features”
12 ¶122: “reported in a digital medical record associated with the patient”
13 Consistent with Applicant Specification ¶61
14 Consistent with Applicant Specification ¶56
15 ¶50:
video biomarkers comprising but not limited to “facial expressions, eye directions [gaze] … or expressions that correspond to particular behaviors”;
audio biomarkers comprising but not limited to “measures of pitch, intonation”
16 Page 4 discusses “fusing multimodal features”
17 Page 4 discusses “fusing multimodal features”
18 Gaze corresponds to a “gaze aversion rate”, a frequency as depicted on page 3, “gaze and pupil dilation [44,45]”
19 ¶50:
video biomarkers comprising but not limited to “facial expressions, eye directions [gaze] … or expressions that correspond to particular behaviors”;
audio biomarkers comprising but not limited to “measures of pitch, intonation”
20 Gaze corresponds to a “gaze aversion rate”, a frequency as depicted on page 3, “gaze and pupil dilation [44,45]”
21 Page 4 discusses “fusing multimodal features”, wherein feature components comprise AUs, gaze, head pose
22 Page 4 discusses “fusing multimodal features”, wherein feature components comprise AUs, gaze, head pose
23 Page 4 discusses “fusing multimodal features”, wherein feature components comprise AUs, gaze, head pose
24 Page 4 discusses “fusing multimodal features”, wherein feature components comprise AUs, gaze, head pose
25 Please see Form 892 for complete listing