DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Response to Amendment
This is a response to Applicant’s amendment filed on 28 October 2025, wherein:
Claims 1-8 and 11-18 are amended.
Claims 10 and 20 are original.
Claims 9 and 19 are canceled.
Claims 1-8, 10-18, and 20 are pending.
Priority
Applicant’s claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged. Applicant has not complied with one or more conditions for receiving the benefit of an earlier filing date under 35 U.S.C. 120 as follows:
The later-filed application must be an application for a patent for an invention which is also disclosed in the prior application (the parent or original nonprovisional application or provisional application). The disclosure of the invention in the parent application and in the later-filed application must be sufficient to comply with the requirements of 35 U.S.C. 112(a) or the first paragraph of pre-AIA 35 U.S.C. 112, except for the best mode requirement. See Transco Products, Inc. v. Performance Contracting, Inc., 38 F.3d 551, 32 USPQ2d 1077 (Fed. Cir. 1994).
The disclosure of the prior-filed application, US Provisional Application No. 63/602,710, fails to provide adequate support or enablement in the manner provided by 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph for one or more claims of this application.
In particular, the disclosure of the prior-filed application fails to provide sufficient written description for “wherein the at least one device further comprises a motion sensor, wherein the motion sensor is configured for detecting at least one of a gesture and a movement, wherein the motion sensor is configured for generating the at least one response based on the detecting; detecting, using at least one sensor comprising at least one a pupilometer and a galvanic skin response (GSR) sensor, a physiological response of the at least one user for the at least one question; generating, using a processing device, at least one sensor data for the at least one question based on the detecting of the physiological response; analyzing, using the processing device, the at least one sensor data; determining, using the processing device, a validity of each of the at least one response based on the analyzing of the at least one sensor data; analyzing, using the processing device, the validity of each of the at least one response, wherein the validity indicates genuineness of the at least one response; analyzing, using the processing device, the at least one response using at least one algorithm; generating, using the processing device, at least one score for at least one metric based on the analyzing of the at least one response, and the analyzing of the validity of each of the at least one response, wherein the at least one metric comprises an Emotional Resilience and Motivation Quotient (ERMQ) metric, wherein the at least one score comprises an ERMQ score, wherein the at least one score comprises ERMQ score, wherein the ERMQ score ranges from 0 to 100, wherein the at least one score for the at least one metric quantifies a resilience capacity of the at least one user; generating, using the processing device, at least one resilience profile for the at least one user based on the at least one score for the at least one metric” in claims 1 and 11, “analyzing, using the 
processing device, the at least one response and the at least one resilience profile using at least one machine learning model, wherein the at least one machine learning model comprises at least one gradient-boosting decision tree model, wherein the at least one gradient-boosting decision tree model is trained on aggregated assessment data to generate personalize recommendation trailored to the at least one resilience profile of the at least one user; generating, using the processing device, at least one recommendation for the at least one user based on the analyzing of the at least one response and the at least one resilience profile using the at least one machine learning model, wherein the at least one recommendation comprises a personalized guidance for the at least one user” in claims 2 and 12, “wherein a training process associated with the at least one machine learning model evolves the at least one machine learning model to learn non-linear relationships and interactions between assessment attributes and optimal recommendations” in claims 3 and 13, “retrieving, using the storage device, at least one of a plurality of historical assessment data associated with a time duration after elapsing of the time duration; and performing, using the processing device, an incremental training of the at least one machine learning model using at least one of the plurality of historical assessment data, wherein the analyzing of the at least one response and the at least one resilience profile using the at least one machine learning model is further based on the performing of the incremental training of the at least one machine learning model” in claims 4 and 14, “retrieving, using the storage device, a plurality of responses for a plurality of prompts associated with a plurality of users; performing, using the processing device, a statistical modeling on the plurality of responses for determining a plurality of psychological skill attributes using a factor analysis; 
performing, using the processing device, a regression modeling on the plurality of psychological skill attributes for determining a weight for each of the plurality of psychological skill attributes; and generating, using the processing device, the at least one algorithm based on the performing of the statistical modeling and the performing of the regression modeling” in claims 5 and 15, “wherein the analyzing of the at least one response using the at least one algorithm comprises: evaluating a competency of the at least one user against each of the plurality of psychological skill attributes based on the at least one response; scoring each of the plurality of psychological skill attributes based on the evaluating; and computing a weighted average score for the plurality of psychological skill attributes based on the weight of each of the plurality of psychological skill attributes and the scoring, wherein the generating of the at least one score for the at least one metric is further based on the computing” in claims 6 and 16, “analyzing, using the processing device, the at least one data; determining, using the processing device, a context associated with the assessing the at least one user; modifying, using the processing device, the weight associated with at least one of the plurality of psychological skill attributes based on the context; and generating, using the processing device, a modified weight for at least one of the plurality of psychological skill attributes based on the modifying, wherein the computing of the weighted average score for the plurality of psychological skill attributes is further based on the modified weight of at least one of the plurality of psychological skill attributes” in claims 7 and 17, “analyzing, using the processing device, the at least one data; determining, using the processing device, a context associated with the assessing of the at least one user; and generating, using the processing device, the at least one prompt 
information for the at least one prompt based on the determining of the context” in claims 8 and 18, and “wherein the analyzing of the at least one response further comprises analyzing the at least one response using at least one behavioral model and at least one natural language processing (NLP) model, wherein the at least one behavioral model and the at least one NLP model are separately trained on a plurality of training responses, wherein the analyzing of the at least one response using the at least one behavioral model and the at least one NLP model comprises: obtaining at least one first output from the at least one behavioral model by inputting the at least one response to the at least one behavioral model; obtaining at least one second output from the at least one NLP model by inputting the at least one response to the at least one NLP model; and combining the at least one first output and the at least one second output, wherein the generating of the at least one score for the at least one metric is further based on the combining” in claims 10 and 20 to show one of ordinary skill in the art that Applicant had possession of the claimed invention. Claims may lack written description when the claims define the invention in functional language specifying a desired result but the specification does not sufficiently describe how the function is performed or the result is achieved. For software, this can occur when the algorithm or steps/procedure for performing the computer function are not explained at all or are not explained in sufficient detail (simply restating the function recited in the claim is not necessarily sufficient). In other words, the algorithm or steps/procedure taken to perform the function must be described with sufficient detail so that one of ordinary skill in the art would understand how the inventor intended the function to be performed. 
It is not enough that one skilled in the art could write a program to achieve the claimed function because the specification must explain how the inventor intends to achieve the claimed function to satisfy the written description requirement. See MPEP 2161.01(I). In particular, the specification of the prior-filed application, at best, merely recites language similar to the claims without providing any substantive description of the claimed limitations identified above, for the same reasons that the instant specification also fails, as identified in the rejections of the claims under 35 U.S.C. 112(a) below for the same claim limitations.
Thus, claims 1-8, 10-18, and 20 do not gain the benefit of priority to US Provisional Application No. 63/602,710. Therefore, claims 1-8, 10-18, and 20 have an effective filing date of 11 July 2024.
Information Disclosure Statement
The information disclosure statement filed 11 July 2024 fails to comply with the provisions of 37 CFR §§ 1.97, 1.98 and MPEP § 609 because the list of references contains one or more non-compliances with the format requirements. According to 37 CFR 1.98(b)(5):
"Each publication listed in an information disclosure statement must be identified by publisher, author (if any), title, relevant pages of the publication, date, and place of publication."
Regarding the IDS submission of 11 July 2024:
In particular, one or more references lack identification of the relevant pages of the publication. Such identification ensures that the Office is informed of the specific portion to be considered, especially for voluminous works, and that it has received all identified pages. Moreover, in the case of voluminous works such as books and websites, failure to cite relevant pages or webpages presents a boundless search. Specific references to particular contents within these works, by page number or similar indices, are suggested.
The IDS submission cumulatively amounts to 35 non-patent literature documents, including entire books, laws, and judicial cases. This is clearly voluminous. The lack of explicit page numbers in the numerous documents that set forth subject matter relevant to the claimed invention presents a boundless search.
While the Applicant is charged with a duty to disclose pertinent documents and information pertaining to the patentability of the claimed invention, MPEP 2004(13) states:
It is desirable to avoid submission of long lists of documents if it can be avoided. Eliminate clearly irrelevant and marginally pertinent cumulative information. If a long list is submitted, highlight those documents which have been specifically brought to applicant's attention and/or are known to be of most significance. See Penn Yan Boats, Inc. v. Sea Lark Boats, Inc., 359 F. Supp. 948, 175 USPQ 260 (S.D. Fla. 1972), aff'd, 479 F.2d 1338, 178 USPQ 577 (5th Cir. 1973), cert. denied, 414 U.S. 874 (1974). But cf. Molins PLC v. Textron, Inc., 48 F.3d 1172, 33 USPQ2d 1823 (Fed. Cir. 1995).
The submission presents both a long list of documents and lengthy documents, such that, in totality, the submission sets forth a voluminous burden.
The examiner acknowledges that 37 CFR 1.97 and 1.98 do not require that the information be material; rather, they allow for submission of information regardless of its pertinence to the claimed invention. The examiner also acknowledges that there is no requirement to explain the materiality of submitted references; however, the cloaking of a clearly relevant reference by inclusion in a long list of citations may not comply with Applicant's duty of disclosure. See Penn Yan Boats, Inc. v. Sea Lark Boats, Inc., 359 F. Supp. 948, aff'd, 479 F.2d 1338.
The information disclosure statement filed 11 July 2024 fails to comply with 37 CFR 1.98(a)(2), which requires a legible copy of each cited foreign patent document; each non-patent literature publication or that portion which caused it to be listed; and all other information or that portion which caused it to be listed. It has been placed in the application file, but the information referred to therein has not been considered. In particular, no copy was provided for non-patent literature document cite nos. 11 and 27. Additionally, non-patent literature document cite nos. 8 and 9 include illegible sections.
The listing of references in the specification is not a proper information disclosure statement. 37 CFR 1.98(b) requires a list of all patents, publications, or other information submitted for consideration by the Office, and MPEP § 609.04(a) states, "the list may not be incorporated into the specification but must be submitted in a separate paper." Therefore, unless the references have been cited by the examiner on form PTO-892, they have not been considered.
Specification
The disclosure is objected to because of the following informalities:
The amended specification includes multiple amendments that are improperly marked. 37 CFR 1.121 requires all amendments to be appropriately marked. The text of any deleted subject matter must be shown by being placed within double brackets if strike-through cannot be easily perceived (e.g., deletion of the number "4" must be shown as [[4]]). As an alternative to using double brackets, however, extra portions of text may be included before and after the text being deleted, all in strike-through, followed by the extra text, underlined, with the desired change (e.g., "number 14" in strike-through followed by "number 1" underlined). See MPEP 714. It is particularly noted that Applicant appears to be attempting to add or remove singular elements in some of these amendments without following these examples, which makes these amendments particularly difficult to perceive. Continuing to amend in this manner will result in such amendments not being entered.
Appropriate correction is required.
The use of at least the terms “Windows”, “Mac OS”, “Unix”, “Linux”, and “Android”, each of which is a trade name or a mark used in commerce, has been noted in this application. Each term should be accompanied by generic terminology; furthermore, each term should be capitalized wherever it appears or, where appropriate, include a proper symbol indicating use in commerce, such as ™, SM, or ®, following the term. It is noted that Applicant has amended these terms to include incorrect symbols.
Although the use of trade names and marks used in commerce (i.e., trademarks, service marks, certification marks, and collective marks) is permissible in patent applications, the proprietary nature of the marks should be respected and every effort made to prevent their use in any manner which might adversely affect their validity as commercial marks.
The lengthy specification has not been checked to the extent necessary to determine the presence of all possible minor errors. Applicant’s cooperation is requested in correcting any errors of which applicant may become aware in the specification.
Claim Objections
Claims 1-8, 10-18, and 20 are objected to because of the following informalities:
At least amended claims 1, 5-7, 11, and 15-17 include multiple amendments that are improperly marked. 37 CFR 1.121 requires all amendments to be appropriately marked. The text of any deleted subject matter must be shown by being placed within double brackets if strike-through cannot be easily perceived (e.g., deletion of the number "4" must be shown as [[4]]). As an alternative to using double brackets, however, extra portions of text may be included before and after the text being deleted, all in strike-through, followed by the extra text, underlined, with the desired change (e.g., "number 14" in strike-through followed by "number 1" underlined). See MPEP 714. It is particularly noted that Applicant appears to be attempting to add or remove singular elements in some of these amendments without following these examples, which makes these amendments particularly difficult to perceive.
Claims 1 and 11 are each missing the term “an” preceding “answer option” in “a selection of answer option”.
Dependent claims 2-8, 10, 12-18, and 20 inherit the deficiencies of their respective parent claims, and are thus objected to under the same rationale.
Appropriate correction is required.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:
“the at least one device is configured for presenting the at least one prompt to at least one user based on the at least one prompt information” in claims 1 and 11.
“the motion sensor is configured for detecting at least one of a gesture and a movement” in claims 1 and 11.
“the motion sensor is configured for generating the at least one response based on the detecting” in claims 1 and 11.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 112
The text of those sections of Title 35, U.S. Code 112(b) not included in this action can be found in a prior Office action.
Claims 1-8, 10-18, and 20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim limitation “the motion sensor is configured for detecting at least one of a gesture and a movement” in claims 1 and 11 invokes 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. In particular, the disclosure, at best, merely recites that the function is performed in results-based language without providing a description of the steps, calculations, or formulas for performing the claimed functionality. For instance, the only mention of a motion sensor is found in para. 191 of the specification, which generically recites it as a variant of “at least one sensor” along with other variants “an image sensor, a microphone, a eye tracking sensor, etc.” without any description of what a motion sensor is. Furthermore, the disclosure is silent regarding a motion sensor “detecting at least one of a gesture and a movement”. While the specification does recite that a gesture and a movement may be detected, they are only recited to be detected by the generically recited “at least one sensor”, not a motion sensor specifically. Therefore, the claim is indefinite and is rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph. Dependent claims 2-8, 10, 12-18, and 20 inherit the deficiencies of their respective parent claims, and are thus rejected under the same rationale.
Claim limitation “the motion sensor is configured for generating the at least one response based on the detecting” in claims 1 and 11 invokes 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. As this is interpreted to be a computer-implemented 35 USC 112(f) claim limitation, the specification must disclose an algorithm for performing the claimed specific computer function, or else the claim is indefinite under 35 USC 112(b). See MPEP 2181(II)(B). In particular, the disclosure, at best, merely recites that the function is performed in results-based language without providing a description of the steps, calculations, or formulas for performing the claimed functionality. For example, at least para. 191 of the specification recites, in results-based language, that “the at least one device may be configured for generating the at least one response based on the detecting”, not the motion sensor. Therefore, the claim is indefinite and is rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph. Dependent claims 2-8, 10, 12-18, and 20 inherit the deficiencies of their respective parent claims, and are thus rejected under the same rationale.
Applicant may:
(a) Amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph;
(b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(c) Amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)).
If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either:
(a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(b) Stating on the record what the corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed function. For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.
Regarding claims 1 and 11, it is unclear how “each of the at least one question” and “at least one of the at least one question” further limit the claim. This language is grammatically incorrect. One of ordinary skill would not understand what “each” and “at least one”, respectively, refer to since “each” and “at least one” are supposed to precede a plural term, not the singular “at least one question”, particularly in the context of the respective claimed limitations which recite “a plurality of answer options for each of the at least one question” and “a selection of answer option from the plurality of answer options for at least one of the at least one question”. Thus, one of ordinary skill in the art would not be apprised of the metes and bounds of the patent protection sought. For the purposes of compact prosecution, this language is construed as redundant such that both recite just “the at least one question”. Dependent claims 2-8, 10, 12-18, and 20 inherit the deficiencies of their respective parent claims, and are thus rejected under the same rationale.
Further regarding claims 1 and 11, it is unclear how the “at least the one sensor comprises at least one a pupilometer and a galvanic skin response (GSR) sensor” detects “a physiological response of the at least one user for the at least one question”. Not only is “at least the one sensor comprises at least one a pupilometer and a galvanic skin response (GSR) sensor” grammatically incorrect, it is further unclear how a pupilometer is incorporated to implement the function such that the process is performed. Regarding the grammatical error, it is unclear whether this language means “at least one of a pupilometer and a GSR sensor” or “at least one pupilometer and a GSR”. For the purposes of compact prosecution, the limitation is construed as the former. Regarding the pupilometer, the disclosure is silent regarding how a pupilometer is incorporated into the structural elements to implement the claimed process, particularly with respect to tracking physiological responses to any questions. Thus, one of ordinary skill in the art would not be apprised of the metes and bounds of the patent protection sought. Dependent claims 2-8, 10, 12-18, and 20 inherit the deficiencies of their respective parent claims, and are thus rejected under the same rationale.
Further regarding claims 1 and 11, it is unclear how “each of the at least one response” further limits the claim. This language is grammatically incorrect. One of ordinary skill would not understand what “each” refers to since “each” is supposed to precede a plural term, not the singular “at least one response”. Thus, one of ordinary skill in the art would not be apprised of the metes and bounds of the patent protection sought. For the purposes of compact prosecution, this language is construed as redundant such that it recites just “the at least one response”. Dependent claims 2-8, 10, 12-18, and 20 inherit the deficiencies of their respective parent claims, and are thus rejected under the same rationale.
Further regarding claims 1 and 11, it is unclear what constitutes “at least one score for at least one metric”. The limitation later recites “wherein the at least one metric comprises an Emotional Resilience and Motivation Quotient (ERMQ) metric, wherein the at least one score comprises an ERMQ score, wherein the at least one score comprises ERMQ score”. For instance, “wherein the at least one score comprises an ERMQ score” and “wherein the at least one score comprises ERMQ score” are nearly identical causing it to be unclear how each further limits the claim or whether this is a redundant typo. Additionally, it is unclear whether an ERMQ metric is different from an ERMQ score or if they are the same thing. Thus, one of ordinary skill in the art would not be apprised of the metes and bounds of the patent protection sought. For the purposes of compact prosecution, an ERMQ metric is construed as the same as an ERMQ score and “wherein the at least one score comprises an ERMQ score, wherein the at least one score comprises ERMQ score” is construed as a redundant typo. Dependent claims 2-8, 10, 12-18, and 20 inherit the deficiencies of their respective parent claims, and are thus rejected under the same rationale.
Regarding claims 2 and 12, it is unclear what constitutes “wherein the at least one gradient-boosting decision tree model is trained on aggregated assessment data to generate personalize recommendation trailored to the at least one resilience profile of the at least one user”. In particular, it is unclear what “generate personalize recommendation trailored to the at least one resilience profile” means. This language is grammatically incorrect. It is unclear whether the verb is “generate” or “personalize”. If the former, “personalize” should be “a personalized”. It is also unclear what “trailored” means. Thus, one of ordinary skill in the art would not be apprised of the metes and bounds of the patent protection sought. For the purposes of compact prosecution, “generate” is construed as the verb, “personalize” is construed as “personalized”, and “trailored” is construed as “tailored”. Dependent claims 3-8 and 13-18 inherit the deficiencies of their respective parent claims, and are thus rejected under the same rationale.
The text of those sections of Title 35, U.S. Code 112(a) not included in this action can be found in a prior Office action.
Claims 1-8, 10-18, and 20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Regarding claims 1 and 11, the disclosure further fails to provide sufficient written description for “wherein the at least one device further comprises a motion sensor, wherein the motion sensor is configured for detecting at least one of a gesture and a movement, wherein the motion sensor is configured for generating the at least one response based on the detecting; detecting, using at least one sensor comprising at least one a pupilometer and a galvanic skin response (GSR) sensor, a physiological response of the at least one user for the at least one question; generating, using a processing device, at least one sensor data for the at least one question based on the detecting of the physiological response; analyzing, using the processing device, the at least one sensor data; determining, using the processing device, a validity of each of the at least one response based on the analyzing of the at least one sensor data; and analyzing, using the processing device, the validity of each of the at least one response, wherein the validity indicates genuineness of the at least one response” to show one of ordinary skill in the art that Applicant had possession of the claimed invention. An applicant may show that an invention is complete by disclosure of sufficiently detailed, relevant identifying characteristics which provide evidence that the inventor was in possession of the claimed invention, i.e., complete or partial structure, other physical and/or chemical properties, functional characteristics when coupled with a known or disclosed correlation between function and structure, or some combination of such characteristics. Enzo Biochem, 323 F.3d at 964, 63 USPQ2d at 1613 (quoting the Written Description Guidelines, 66 Fed. Reg. at 1106, n.
49, stating that "if the art has established a strong correlation between structure and function, one skilled in the art would be able to predict with a reasonable degree of confidence the structure of the claimed invention from a recitation of its function."). "Thus, the written description requirement may be satisfied through disclosure of function and minimal structure when there is a well-established correlation between structure and function." Id. See MPEP 2163(II)(A)(3). Claims may also lack written description when the claims define the invention in functional language specifying a desired result but the specification does not sufficiently describe how the function is performed or the result is achieved. For software, this can occur when the algorithm or steps/procedure for performing the computer function are not explained at all or are not explained in sufficient detail (simply restating the function recited in the claim is not necessarily sufficient). In other words, the algorithm or steps/procedure taken to perform the function must be described with sufficient detail so that one of ordinary skill in the art would understand how the inventor intended the function to be performed. It is not enough that one skilled in the art could write a program to achieve the claimed function because the specification must explain how the inventor intends to achieve the claimed function to satisfy the written description requirement. See MPEP 2161.01(I). Regarding the detecting steps, the disclosure is silent regarding any meaningful description of how any sensor is integrated into the claimed system to perform the claimed functions. In fact, the specification merely recites unbounded lists of sensors intending to cover any conceivable sensor for detecting a physical response, a physiological response (claimed), an emotional response, and a neurological response of the at least one user without any indication towards a reduction to practice.
This is further evidenced with respect to the generating and analyzing steps, for which the specification, at best, merely recites similar language as the claim without providing any description of the steps, calculations, or formulas necessary to perform the claimed functionality. See, for example, at least Fig. 1, 8, and 10, as well as para. 40, 188, 191, 196, 202, 216, and 245 of the specification. For instance, para. 202 recites:
“the physical response may include increased heart rate, tense muscles, changes in breathing patterns, etc. Further, the physiological response may include pupil dilation, sweating, digestive changes, etc. Further, the emotional response may include excitement, anxiety, confidence, doubt, etc. Further, the neurological response may include increased activation in decision-making areas, changes in brain waves, processing speed changes, etc. Further, the at least one sensor may include a heart rate monitor, an Electromyography (EMG) sensor, a respiration rate monitor, a pupilometer (for measuring pupil dilation), a galvanic skin response (GSR) sensor, a digestive activity sensor, a facial expression analysis software, a voice stress analysis software, an Electroencephalography (EEG) headset, a Functional Magnetic Resonance Imaging (FMRI) sensor, a reaction time measurement device, etc. Further, at 804, the method 800 may include generating, using the processing device, at least one sensor data for the at least one question based on the detecting. Further, at 806, the method 800 may include analyzing, using the processing device, the at least one sensor data. Further, at 808, the method 800 may include determining, using the processing device, a validity of each of the at least one response based on the analyzing of the at least one sensor data. Further, the validity may indicate whether the at least one response is genuine or not. Further, the validity may include a positive validity and a negative validity. Further, at 810, the method 800 may include analyzing, using the processing device, the validity of each of the at least one response. Further, the generating of the at least one score for the at least one metric associated with the at least one psychological skill may be based on the analyzing of the validity of each of the at least one response.”
This clearly identifies that the disclosure is silent regarding how any sensor is integrated into the claimed system to perform the claimed function as well as silent regarding any meaningful description for the generating, analyzing, and determining steps with regard to data from any particular sensor especially since the claimed invention amounts to a software application. With particular respect to the identified 35 USC 112(f) limitations “wherein the motion sensor is configured for detecting at least one of a gesture and a movement” and “wherein the motion sensor is configured for generating the at least one response based on the detecting”, the disclosure is silent, at least as identified above, regarding the motion sensor performing these functions. Thus, these are also new matter. Dependent claims 2-8, 10, 12-18, and 20 inherit the deficiencies of their respective parent claims, and are thus rejected under the same rationale.
Regarding claims 1, 2, 6, 11, 12, and 16, the disclosure further fails to provide sufficient written description for “analyzing, using the processing device, the at least one response using at least one algorithm; generating, using the processing device, at least one score for at least one metric based on the analyzing of the at least one response, and the analyzing of the validity of each of the at least one response, wherein the at least one metric comprises an Emotional Resilience and Motivation Quotient (ERMQ) metric, wherein the at least one score comprises an ERMQ score, wherein the at least one score comprises ERMQ score, wherein the ERMQ score ranges from 0 to 100, wherein the at least one score for the at least one metric quantifies a resilience capacity of the at least one user; generating, using the processing device, at least one resilience profile for the at least one user based on the at least one score for the at least one metric” in claims 1 and 11, “analyzing, using the processing device, the at least one response and the at least one resilience profile using at least one machine learning model, wherein the at least one machine learning model comprises at least one gradient-boosting decision tree model, wherein the at least one gradient-boosting decision tree model is trained on aggregated assessment data to generate personalize recommendation trailored to the at least one resilience profile of the at least one user; generating, using the processing device, at least one recommendation for building the competency of the at least one user in the at least one psychological skill based on the analyzing of the at least one response and the at least one resilience profile using the at least one machine learning model, wherein the at least one recommendation comprises a personalized guidance for the at least one user” in claims 2 and 12, “wherein the analyzing of the at least one response using the at least one algorithm comprises: evaluating a competency 
of the at least one user against each of the plurality of psychological skill attributes based on the at least one response; scoring each of the plurality of psychological skill attributes based on the evaluating; and computing a weighted average score for the plurality of psychological skill attributes based on the weight of each of the plurality of psychological skill attributes and the scoring, wherein the generating of the at least one score for the at least one metric is further based on the computing” in claims 6 and 16 to show one of ordinary skill in the art that Applicant had possession of the claimed invention. Claims may lack written description when the claims define the invention in functional language specifying a desired result but the specification does not sufficiently describe how the function is performed or the result is achieved. For software, this can occur when the algorithm or steps/procedure for performing the computer function are not explained at all or are not explained in sufficient detail (simply restating the function recited in the claim is not necessarily sufficient). In other words, the algorithm or steps/procedure taken to perform the function must be described with sufficient detail so that one of ordinary skill in the art would understand how the inventor intended the function to be performed. It is not enough that one skilled in the art could write a program to achieve the claimed function because the specification must explain how the inventor intends to achieve the claimed function to satisfy the written description requirement. See MPEP 2161.01(I). The specification merely recites similar language as the claim without providing sufficient description of the steps, calculations, or formulas necessary to perform the claimed functionality. See, for example, at least para. 38, 44, 45, 70-81, 83, 87, 153, 158-167, 190, 192, 193, 197-201, 207, 209, 210, 213, and 221 of the specification.
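By way of illustration only (the attribute names, scores, and weights below are hypothetical and appear nowhere in Applicant's disclosure), the claimed scoring and weighted-average steps, taken on their own terms, are satisfied by nothing more specific than the following routine calculation:

```python
# Examiner's illustration only. All attribute names, scores, and
# weights are hypothetical and appear nowhere in the disclosure.
# The claimed "computing a weighted average score" step, as recited,
# describes nothing beyond this generic computation.

def weighted_average_score(scores, weights):
    """Weighted average of per-attribute scores."""
    total_weight = sum(weights[attr] for attr in scores)
    return sum(scores[attr] * weights[attr] for attr in scores) / total_weight

# Hypothetical per-attribute scores (the "scoring" step) and weights.
scores = {"optimism": 80.0, "adaptability": 60.0, "grit": 70.0}
weights = {"optimism": 0.2, "adaptability": 0.5, "grit": 0.3}

ermq_score = weighted_average_score(scores, weights)  # 67.0
```

This illustrates that the claim language amounts to a bare mathematical operation, while the specification supplies neither the attributes, the weights, nor any formula beyond the claim's own words.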
Much of the specification reads like a compilation of advertisement articles as opposed to a written description of the claimed invention and of the manner and process of making and using the same. Furthermore, the specification merely recites that “proprietary” algorithms, metrics, scale, and weighting coefficients are used along with generic disclosures of the mere use of machine learning models without any meaningful descriptions of the machine learning models themselves. See, for example, at least para. 44, 63, 75, 76, 78, 81, 83, 190, and 225 of the specification. Applicant is reminded that one cannot receive a patent for a trade secret (e.g., undisclosed “proprietary algorithms”) as this fails the written description requirement of 35 USC 112(a). Dependent claims 2-8, 10, 12-18, and 20 inherit the deficiencies of their respective parent claims, and are thus rejected under the same rationale.
Regarding claims 3-5 and 13-15, the disclosure further fails to provide sufficient written description for “wherein a training process associated with the at least one machine learning model evolves the at least one machine learning model to learn non-linear relationships and interactions between assessment attributes and optimal recommendations” in claims 3 and 13, “retrieving, using the storage device, at least one of a plurality of historical assessment data associated with a time duration after elapsing of the time duration; and performing, using the processing device, an incremental training of the at least one machine learning model using at least one of the plurality of historical assessment data, wherein the analyzing of the at least one response and the at least one resilience profile using the at least one machine learning model is further based on the performing of the incremental training of the at least one machine learning model” in claims 4 and 14, and “retrieving, using the storage device, a plurality of responses for a plurality of prompts associated with a plurality of users; performing, using the processing device, a statistical modeling on the plurality of responses for determining a plurality of psychological skill attributes using a factor analysis; performing, using the processing device, a regression modeling on the plurality of psychological skill attributes for determining a weight for each of the plurality of psychological skill attributes; and generating, using the processing device, the at least one algorithm based on the performing of the statistical modeling and the performing of the regression modeling” in claims 5 and 15 to show one of ordinary skill in the art that Applicant had possession of the claimed invention.
Claims may lack written description when the claims define the invention in functional language specifying a desired result but the specification does not sufficiently describe how the function is performed or the result is achieved. For software, this can occur when the algorithm or steps/procedure for performing the computer function are not explained at all or are not explained in sufficient detail (simply restating the function recited in the claim is not necessarily sufficient). In other words, the algorithm or steps/procedure taken to perform the function must be described with sufficient detail so that one of ordinary skill in the art would understand how the inventor intended the function to be performed. It is not enough that one skilled in the art could write a program to achieve the claimed function because the specification must explain how the inventor intends to achieve the claimed function to satisfy the written description requirement. See MPEP 2161.01(I). These two claimed variants reflect the two main data sources for training a machine learning model: (1) historical data of the user and (2) population data. However, the specification merely recites similar language as the claim without providing sufficient description of the steps, calculations, or formulas necessary to perform the claimed training functionality itself. See, for example, at least para. 45, 76, 87, 197, 198, 203, 211, 212, and 223 of the specification. Dependent claims 6, 7, 16, and 17 inherit the deficiencies of their respective parent claims, and are thus rejected under the same rationale.
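By way of illustration only (every name and value below is hypothetical and appears nowhere in the disclosure), merely reciting “incremental training” on retrieved historical assessment data, without more, is satisfied by something as trivial as the online update sketched here, which makes choices of model form, update rule, and prediction that the specification never supplies:

```python
# Examiner's illustration only; all names and values are hypothetical.
# A trivial "incrementally trained model": a running mean score per
# attribute, updated one historical record at a time. The disclosure
# provides no algorithmic detail beyond the claim's own words.

class RunningScoreModel:
    """Stand-in for any incrementally trained model."""

    def __init__(self):
        self.totals = {}
        self.counts = {}

    def partial_fit(self, assessment):
        # One incremental training step on a single historical record.
        for attr, score in assessment.items():
            self.totals[attr] = self.totals.get(attr, 0.0) + score
            self.counts[attr] = self.counts.get(attr, 0) + 1

    def predict(self, attr):
        return self.totals[attr] / self.counts[attr]

model = RunningScoreModel()
for record in [{"grit": 60.0}, {"grit": 80.0}]:  # hypothetical historical data
    model.partial_fit(record)

prediction = model.predict("grit")  # 70.0
```

The point is not that Applicant intended this particular scheme, but that the claim language reads on any such scheme, which is precisely the written description deficiency identified above.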
Regarding claims 7, 8, 17, and 18, the disclosure further fails to provide sufficient written description for “analyzing, using the processing device, the at least one data; determining, using the processing device, a context associated with the assessing of the at least one user; modifying, using the processing device, the weight associated with at least one of the plurality of psychological skill attributes based on the context; and generating, using the processing device, a modified weight for at least one of the plurality of psychological skill attributes based on the modifying, wherein the computing of the weighted average score for the plurality of psychological skill attributes is further based on the modified weight of at least one of the plurality of psychological skill attributes” in claims 7 and 17, and “analyzing, using the processing device, the at least one data; determining, using the processing device, a context associated with the assessing of the at least one user; and generating, using the processing device, the at least one prompt information for the at least one prompt based on the determining of the context” in claims 8 and 18 to show one of ordinary skill in the art that Applicant had possession of the claimed invention. Claims may lack written description when the claims define the invention in functional language specifying a desired result but the specification does not sufficiently describe how the function is performed or the result is achieved. For software, this can occur when the algorithm or steps/procedure for performing the computer function are not explained at all or are not explained in sufficient detail (simply restating the function recited in the claim is not necessarily sufficient).
In other words, the algorithm or steps/procedure taken to perform the function must be described with sufficient detail so that one of ordinary skill in the art would understand how the inventor intended the function to be performed. It is not enough that one skilled in the art could write a program to achieve the claimed function because the specification must explain how the inventor intends to achieve the claimed function to satisfy the written description requirement. See MPEP 2161.01(I). The specification merely recites similar language as the claim without providing sufficient description of the steps, calculations, or formulas necessary to perform the claimed functionality. See, for example, at least para. 200, 201, 214, and 215 of the specification.
Regarding claims 10 and 20, the disclosure further fails to provide sufficient written description for “wherein the analyzing of the at least one response further comprises analyzing the at least one response using at least one behavioral model and at least one natural language processing (NLP) model, wherein the at least one behavioral model and the at least one NLP model are separately trained on a plurality of training responses, wherein the analyzing of the at least one response using the at least one behavioral model and the at least one NLP model comprises: obtaining at least one first output from the at least one behavioral model by inputting the at least one response to the at least one behavioral model; obtaining at least one second output from the at least one NLP model by inputting the at least one response to the at least one NLP model; and combining the at least one first output and the at least one second output, wherein the generating of the at least one score for the at least one metric is further based on the combining” to show one of ordinary skill in the art that Applicant had possession of the claimed invention. Claims may lack written description when the claims define the invention in functional language specifying a desired result but the specification does not sufficiently describe how the function is performed or the result is achieved. For software, this can occur when the algorithm or steps/procedure for performing the computer function are not explained at all or are not explained in sufficient detail (simply restating the function recited in the claim is not necessarily sufficient).
In other words, the algorithm or steps/procedure taken to perform the function must be described with sufficient detail so that one of ordinary skill in the art would understand how the inventor intended the function to be performed. It is not enough that one skilled in the art could write a program to achieve the claimed function because the specification must explain how the inventor intends to achieve the claimed function to satisfy the written description requirement. See MPEP 2161.01(I). The specification merely recites similar language as the claim without providing sufficient description of the steps, calculations, or formulas necessary to perform the claimed functionality. See, for example, at least para. 203, 204, and 217 of the specification, which merely recite that the functions are performed in results-based language. For instance, para. 203 recites that “the at least one behavior model may include Big Five Personality Model, Cognitive Behavioral Therapy (CBT) Models, Transactional Analysis (TA), Emotional Intelligence (EI) model, etc. Further, the at least one behavioral model may include a machine learning behavioral model. Further, the at least one behavioral model may include a psychological behavioral model, a physical behavioral model, a neurological behavioral model, an emotional behavioral model, etc. Further, the behavioral model comprises at least one machine learning model for the at least one user that identifies an anomaly in a behavior (a physical behavior, a psychological behavior, a neurological behavior, an emotional behavior, etc.) based on past behavioral patterns of the at least one user. Further, the at least one behavioral model may be trained using one or more behavioral characteristics of the at least one user. Further, the one or more behavioral characteristics may include a breathing rate, a heart rate, a pupil dilation, a sweating, a gesture, a movement, an expression, a facial expression, etc.
Further, the at least one machine learning model may include a Bayesian hierarchical regression model for identifying the anomaly in the behavior.” This identifies that the disclosure is silent regarding any meaningful description for the performance of the claimed functionality beyond reciting that the functions are performed in results-based language. In particular, the disclosure is silent regarding what constitutes a “Big Five Personality Model, Cognitive Behavioral Therapy (CBT) Models, Transactional Analysis (TA), Emotional Intelligence (EI) model, etc.”, let alone any analysis of any behavioral characteristic or any “psychological behavioral model, a physical behavioral model, a neurological behavioral model, an emotional behavioral model, etc.”, and the disclosure provides no meaningful description of implementing a machine learning model to perform such analysis.
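By way of illustration only (the functions and values below are hypothetical stand-ins appearing nowhere in the disclosure), the claimed step of “combining the at least one first output and the at least one second output” is satisfied by any trivial fusion rule, such as the simple average sketched here, and the disclosure describes no actual combination scheme:

```python
# Examiner's illustration only; all functions and values are
# hypothetical stand-ins. "Combining" two model outputs, as claimed,
# reads on any trivial fusion rule, e.g., a simple average.

def behavioral_model(response):
    # Stand-in for any behavioral model producing a 0-100 score.
    return 64.0

def nlp_model(response):
    # Stand-in for any NLP model producing a 0-100 score.
    return 72.0

def combine(first_output, second_output):
    # A trivial "combining" step: average the two outputs.
    return (first_output + second_output) / 2.0

response = "I stay calm under pressure."
combined = combine(behavioral_model(response), nlp_model(response))  # 68.0
```

This underscores that the claims recite the desired result of combining model outputs without describing how either model is constructed, trained, or combined.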
Claim Rejections - 35 USC § 101
The text of those sections of Title 35, U.S. Code 101 not included in this action can be found in a prior Office action.
Claims 1-8, 10-18, and 20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without including additional elements that are sufficient to amount to significantly more than the judicial exception itself.
Step 1
The instant claims are directed to a method and a product which fall under at least one of the four statutory categories (STEP 1: YES).
Step 2A, Prong 1
Independent claim 1 recites:
A method for facilitating assessing users, the method comprising:
transmitting, using a communication device, at least one prompt information of at least one prompt to at least one device, wherein the at least one device comprises at least one output device, wherein the at least one device is configured for presenting the at least one prompt to at least one user based on the at least one prompt information, wherein the at least one prompt information comprises at least one questionnaire, wherein the at least one questionnaire comprises at least one question and a plurality of answer options for each of the at least one question;
receiving, using the communication device, at least one response of the at least one user for the at least one prompt from the at least one device, wherein the at least one response comprises a selection of answer option from the plurality of answer options for at least one of the at least one question, wherein the at least one device further comprises a motion sensor, wherein the motion sensor is configured for detecting at least one of a gesture and a movement, wherein the motion sensor is configured for generating the at least one response based on the detecting;
detecting, using at least one sensor comprising at least one a pupilometer and a galvanic skin response (GSR) sensor, a physiological response of the at least one user for the at least one question;
generating, using a processing device, at least one sensor data for the at least one question based on the detecting of the physiological response;
analyzing, using the processing device, the at least one sensor data;
determining, using the processing device, a validity of each of the at least one response based on the analyzing of the at least one sensor data;
analyzing, using the processing device, the validity of each of the at least one response, wherein the validity indicates genuineness of the at least one response;
analyzing, using the processing device, the at least one response using at least one algorithm;
generating, using the processing device, at least one score for at least one metric based on the analyzing of the at least one response, and the analyzing of the validity of each of the at least one response, wherein the at least one metric comprises an Emotional Resilience and Motivation Quotient (ERMQ) metric, wherein the at least one score comprises an ERMQ score, wherein the at least one score comprises ERMQ score, wherein the ERMQ score ranges from 0 to 100, wherein the at least one score for the at least one metric quantifies a resilience capacity of the at least one user;
generating, using the processing device, at least one resilience profile for the at least one user based on the at least one score for the at least one metric;
transmitting, using the communication device, the at least one resilience profile to the at least one device; and
storing, using a storage device, at least one assessment data comprising the at least one response and the at least one score for the at least one metric, and the at least one resilience profile.
Independent claim 11 recites:
A system for facilitating assessing users, the system comprising:
a communication device configured for:
transmitting at least one prompt information of at least one prompt to at least one device, wherein the at least one device comprises at least one output device, wherein the at least one device is configured for presenting the at least one prompt to at least one user based on the at least one prompt information, wherein the at least one prompt information comprises at least one questionnaire, wherein the at least one questionnaire comprises at least one question and a plurality of answer options for each of the at least one question;
receiving at least one response of the at least one user for the at least one prompt from the at least one device, wherein the at least one response comprises a selection of answer option from the plurality of answer options for at least one of the at least one question, wherein the at least one device further comprises a motion sensor, wherein the motion sensor is configured for detecting at least one of a gesture and a movement, wherein the motion sensor is configured for generating the at least one response based on the detecting; and
transmitting at least one profile to the at least one device;
at least one sensor comprising at least one a pupilometer and a galvanic skin response (GSR) sensor is configured for detecting a physiological response of the at least one user for the at least one question;
a processing device communicatively coupled with the communication device, wherein the processing device is configured for:
generating at least one sensor data for the at least one question based on the detecting of the physiological response;
analyzing the at least one sensor data;
determining a validity of each of the at least one response based on the analyzing of the at least one sensor data;
analyzing the validity of each of the at least one response, wherein the validity indicates genuineness of the at least one response;
analyzing the at least one response using at least one algorithm;
generating at least one score for at least one metric based on the analyzing of the at least one response, and the analyzing of the validity of each of the at least one response, wherein the at least one metric comprises an Emotional Resilience and Motivation Quotient (ERMQ) metric, wherein the at least one score comprises an ERMQ score, wherein the at least one score comprises ERMQ score, wherein the ERMQ score ranges from 0 to 100, wherein the at least one score for the at least one metric quantifies a resilience capacity of the at least one user; and
generating the at least one resilience profile for the at least one user based on the at least one score for the at least one metric; and
a storage device communicatively coupled with the processing device, wherein the storage device is configured for storing at least one assessment data comprising the at least one response and the at least one score for the at least one metric, and the at least one resilience profile.
All of the foregoing underlined elements amount to the abstract idea grouping of a certain method of organizing human activity because they amount to managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions) by merely collecting information, analyzing the information, and outputting the results of the collection and analysis. This also evidences that these elements amount to the abstract idea grouping of mental processes because the claims, under their broadest reasonable interpretation, cover performance of the limitations in the mind (including observations, evaluations, judgments, and opinions) but for the recitation of generic computer components. See MPEP 2106.04(a)(2)(III)(C) - A Claim That Requires a Computer May Still Recite a Mental Process. Lastly, the analyzing, generating, performing, evaluating, scoring, and computing steps amount to the abstract idea grouping of mathematical concepts because they recite mathematical relationships and mathematical calculations as defined in MPEP 2106.04(a)(2)(I), which recites that a “mathematical relationship is a relationship between variables or numbers [that] may be expressed in words or using mathematical symbols” such as “organizing information and manipulating information through mathematical correlations” and that a “claim that recites a mathematical calculation, when the claim is given its broadest reasonable interpretation in light of the specification, will be considered as falling within the ‘mathematical concepts’ grouping” because a “mathematical calculation is a mathematical operation (such as multiplication) or an act of calculating using mathematical methods to determine a variable or number, e.g., performing an arithmetic operation such as exponentiation. There is no particular word or set of words that indicates a claim recites a mathematical calculation.
That is, a claim does not have to recite the word ‘calculating’ in order to be considered a mathematical calculation. For example, a step of ‘determining’ a variable or number using mathematical methods or ‘performing’ a mathematical operation may also be considered mathematical calculations when the broadest reasonable interpretation of the claim in light of the specification encompasses a mathematical calculation."
The dependent claims amount to merely further defining the judicial exception.
Therefore, the claims recite a judicial exception. (STEP 2A, PRONG 1: YES).
Step 2A, Prong 2
The judicial exception is not integrated into a practical application because the independent and dependent claims do not include additional elements that are sufficient to integrate the exception into a practical application under the considerations set forth in MPEP 2106.04(d). The elements of the claims above that are not underlined constitute additional elements.
The following additional elements, both individually and as a whole, merely generally link the judicial exception to a particular technological environment or field of use: transmitting and receiving using a communication device (claims 1 and 11), at least one device (claims 1 and 11), a motion sensor (claims 1 and 11), at least one sensor comprising at least one a pupilometer and a galvanic skin response (GSR) sensor (claims 1 and 11), a processing device (claims 1 and 11), a storage device (claims 1 and 11), at least one sensor (claims 9 and 19), and a system (claim 11). Although some of the claims recite computer components for performing at least some of the recited functions, these elements are recited at a high level of generality for performing their basic computer functions (i.e., collecting, processing, transmitting/receiving, storing, outputting data). This is evidenced by the lack of significant structure in the figures (i.e., Fig. 1, 9-12, 15, and 20 merely illustrate elements as non-descript black boxes and stock icons while Fig. 2-8, 13, 14, and 16-19 illustrate the claimed invention as purely software) and the generic nature in which any structural items are described in the specification. See, for example, at least para. 31-34, 39-43, 47, 48, 66, and 244-254 of the specification, which merely provide stock descriptions of generic computer hardware and software components in any generic arrangement and illustrate that the claimed invention is merely using a software application to cause a computer to implement the judicial exception. For instance, para. 48 explicitly identifies that the focus of the claimed invention is entirely on collecting information, analyzing the collected information, and outputting the results of the collection and analysis. Thus, the components are merely an attempt to link the abstract idea to a particular technological environment, but do not result in an improvement to the technology or computer functions employed. 
With respect to the communication device and the storage device, the courts have recognized that mere receiving or transmitting data over a network and mere storing and retrieving information in memory, respectively, are insignificant extra-solution activity. The claims do not recite any specific rules with specific characteristics that improve the functionality of the computer system. For instance, the claimed mere use of machine learning and natural language processing (NLP), are not in and of themselves, specific rules let alone specific rules with specific characteristics that improve the functionality of the computer system. In the event that the machine learning and NLP limitations are considered additional elements, they do not improve computer functionality as they merely invoke the use of a computer or other machinery in its ordinary capacity to process information. Similarly, the motion sensor and the at least one sensor comprising at least one a pupilometer and a GSR sensor, as recited and organized, merely add insignificant extra-solution activity to the judicial exception (e.g., mere extra-solution data gathering in conjunction with a law of nature or abstract idea). None of the hardware offer a meaningful limitation beyond generally linking the performance of the steps to a particular technological environment, that is, implementation via computers. Again, this is evidenced by the manner in which these elements are disclosed in the drawings and specification as identified above. It should be noted that because the courts have made it clear that mere physicality or tangibility of an additional element or elements is not a relevant consideration in the eligibility analysis, the physical nature of the additional elements does not affect this analysis. See MPEP 2106.05(I) for more information on this point, including explanations from judicial decisions including Alice Corp. Pty. Ltd. v. CLS Bank Int'l, 573 U.S. 208, 224-26 (2014). 
Additionally, the claims do not apply or use a judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition nor do they apply or use a judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment. For instance, the independent claims recite that the method and system are nebulously for “facilitating assessing users” while the disclosure briefly discusses a questionnaire towards emotional resilience and motivation. Thus, the claims and disclosure are silent regarding any treatment, let alone any actual treatment for any disease or medical condition. Accordingly, based on all of the considered factors, these additional elements do not integrate the abstract idea into a practical application. Therefore, the claims are directed to the judicial exception. (STEP 2A, PRONG 2: NO).
Step 2B
The independent and dependent claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception under the considerations set forth in MPEP 2106.05. As identified in Step 2A, Prong 2, above, the claimed product and the process it performs do not require the use of a particular machine, nor do they result in the transformation of an article. Although the claims recite components (identified in Step 2A, Prong 2) for performing at least some of the recited functions, these elements are recited at a high level of generality in a conventional arrangement for performing their basic computer functions (i.e., collecting, processing, transmitting/receiving, storing, outputting data). BASCOM Global Internet Servs. v. AT&T Mobility LLC (827 F.3d 1341, 1350-51, 119 USPQ2d 1236, 1243-44 (Fed. Cir. 2016)), Electric Power Group, LLC v. Alstom S.A. (830 F.3d 1350, 1354, 119 USPQ2d 1739, 1742 (Fed. Cir. 2016)). This is at least evidenced by the manner in which this is disclosed, which indicates that Applicant believes the additional elements are sufficiently well-known that the specification does not need to describe the particulars of such additional elements to satisfy 35 USC 112(a), as identified in Step 2A, Prong 2, above. Thus, the computer components are merely an attempt to link the abstract idea to a particular technological environment, but do not result in an improvement to the technology or computer functions employed. This is evidenced by the drawings and the specification as identified in Step 2A, Prong 2, above. With respect to the communication device and the storage device, the courts have recognized that receiving or transmitting data over a network and storing and retrieving information in memory, respectively, are well-understood, routine, and conventional functions when they are claimed in a merely generic manner (which they are in the instant claims, as well as disclosed) and are insignificant extra-solution activity. 
The claims do not recite any specific rules with specific characteristics that improve the functionality of the computer system. Thus, the focus of the claimed invention is on the analysis of the collected data, which is itself at best merely an improvement within the abstract idea. See pg. 2-3 in SAP America Inc. v. InvestPic, LLC (890 F.3d 1016, 126 USPQ2d 1638 (Fed. Cir. 2018)), which proffered “[w]e may assume that the techniques claimed are groundbreaking, innovative, or even brilliant, but that is not enough for eligibility. Nor is it enough for subject-matter eligibility that claimed techniques be novel and nonobvious in light of prior art, passing muster under 35 U.S.C. §§ 102 and 103. The claims here are ineligible because their innovation is an innovation in ineligible subject matter. Their subject is nothing but a series of mathematical calculations based on selected information and the presentation of the results of those calculations.” Furthermore, the steps are merely recited to be performed by, or using, the elements while the specification makes clear that the computerized system itself is ancillary to the claimed invention as identified above. This further identifies that none of the hardware offers a meaningful limitation beyond, at best, generally linking the performance of the steps to a particular technological environment, that is, implementation via computers. Viewed as a whole, these additional claim elements do not provide a meaningful limitation to transform the abstract idea into a patent-eligible application of the abstract idea such that the claims amount to significantly more than the abstract idea itself (STEP 2B: NO).
Therefore, the claims are rejected under 35 USC 101 as being directed to non-statutory subject matter.
Claim Rejections - 35 USC § 103
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-8, 10-18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Angelopoulos et al. (US 2019/0189025, hereinafter referred to as Angelopoulos) in view of Samec et al. (US 2017/0323485, hereinafter referred to as Samec).
Regarding claims 1 and 11, Angelopoulos teaches a method (claim 1) and a system (claim 11) for facilitating assessing psychological skills of users, the method comprising:
transmitting, using a communication device, at least one prompt information of at least one prompt to at least one device (Angelopoulos, para. 17, “Client systems 114 enable users to interact with server systems 110 to receive interventions and provide responses thereto.”), wherein the at least one device comprises at least one output device, wherein the at least one device is configured for presenting the at least one prompt to at least one user based on the at least one prompt information, wherein the at least one prompt information comprises at least one questionnaire, wherein the at least one questionnaire comprises at least one question and a plurality of answer options for each of the at least one question (Angelopoulos, para. 40, "User response for the explicit preference score may be measured based on any suitable feedback from the user with respect to an intervention, such as ratings and questionnaires (e.g., inquiring about the types of insights liked, the behaviors interested in being changed, the manner of being notified, etc.");
receiving, using the communication device, at least one response of at least one user for the at least one prompt from the at least one device, wherein the at least one response comprises a selection of answer option from the plurality of answer options for at least one of the at least one question (Angelopoulos, para. 17, “Client systems 114 enable users to interact with server systems 110 to receive interventions and provide responses thereto.”), wherein the at least one device further comprises a motion sensor, wherein the motion sensor is configured for detecting at least one of a gesture and a movement, wherein the motion sensor is configured for generating the at least one response based on the detecting (Angelopoulos, para. 23 and 25 include multiple data sources that are reasonably construed as a motion sensor configured for detecting at least one of a gesture and a movement, including wearable devices with one or more sensors to measure various physiological conditions (exercising, resting, etc.), portable computing device (e.g., smartphone, tablet, etc.), image capturing device or camera to capture images of the individual (e.g., facial expressions, etc.), geospatial measurements (e.g., accelerometers, GPS sensors, etc.); para. 24, “movement analysis”);
detecting, using at least one sensor, a physiological response of the at least one user for the at least one question (Angelopoulos, para. 23, "The data sources may include: wearable devices with one or more sensors to measure various physiological conditions of the individual (e.g., pulse or heart rate, activities, distance traveled, blood pressure, body temperature, speech slurring, a time a person is sitting or otherwise inactive, etc.); a portable computing device (e.g., smartphone, tablet, etc.) providing various information (e.g., preferences, personal information, personal or other contacts, schedule of events or appointments, communications with other individuals, speech and/or speech slurring, etc.) pertaining to the individual; image capture device or camera to capture images of the individual (e.g., facial expressions, etc.); social media or other network sites providing social or other information pertaining to the individual (e.g., personal preferences, social or other contacts, postings by the individual, etc.).");
generating, using a processing device, at least one sensor data for the at least one question based on the detecting of the physiological response (Angelopoulos, para. 23, "The data sources may include: wearable devices with one or more sensors to measure various physiological conditions of the individual (e.g., pulse or heart rate, activities, distance traveled, blood pressure, body temperature, speech slurring, a time a person is sitting or otherwise inactive, etc.); a portable computing device (e.g., smartphone, tablet, etc.) providing various information (e.g., preferences, personal information, personal or other contacts, schedule of events or appointments, communications with other individuals, speech and/or speech slurring, etc.) pertaining to the individual; image capture device or camera to capture images of the individual (e.g., facial expressions, etc.); social media or other network sites providing social or other information pertaining to the individual (e.g., personal preferences, social or other contacts, postings by the individual, etc.).");
analyzing, using the processing device, the at least one sensor data (Angelopoulos, para. 23, "The information from the data sources is provided to behavior change module 120 for processing.");
determining, using the processing device, a validity of each of the at least one response based on the analyzing of the at least one sensor data (Angelopoulos, para. 27, "Engines 205,215,225 provide to behavior change engine 130 real-time information pertaining to the emotion, activity and context of the individual. The behavior change engine determines an intervention type, confidence scores or probabilities");
analyzing, using the processing device, the validity of each of the at least one response, wherein the validity indicates genuineness of the at least one response (Angelopoulos, para. 27, "Engines 205, 215, 225 provide to behavior change engine 130 real-time information pertaining to the emotion, activity and context of the individual. The behavior change engine determines an intervention type, confidence scores or probabilities");
analyzing, using the processing device, the at least one response using at least one algorithm (Angelopoulos, para. 23, “The provided information may reflect a response by the individual… The information from the data sources is provided to behavior change module 120 for processing.” Behavior change module, as a computerized element, includes at least one algorithm for analyzing the at least one response.);
generating, using the processing device, at least one score for at least one metric based on the analyzing of the at least one response, and the analyzing of the validity of each of the at least one response, wherein the at least one metric comprises an Emotional Resilience and Motivation Quotient (ERMQ) metric, wherein the at least one score comprises an ERMQ score, wherein the at least one score comprises ERMQ score, wherein the at least one score ranges from 0 to 100, wherein the at least one score for the at least one metric quantifies a resilience capacity of the at least one user (Angelopoulos, para. 24, “Behavior change module 120 includes a mental state detection engine 205, an activity detection engine 215, a context detection engine 225, and behavior change engine 130. The mental state detection engine analyzes information from data sources 285 and determines emotions of the individual over time. The mental state detection engine 255 may use any technique to estimate or determine what emotion(s) the individual is currently experiencing.” Para. 28, “The behavior change engine 130 includes a behavioral pattern learning engine 235, a behavioral goal evaluation engine 245, and an intervention engine 255. The behavioral pattern learning engine 235 in this embodiment analyzes the emotion, activity, and context of the individual; generates an initial user behavior profile 350; and subsequently updates that user behavior profile as described below (FIG. 3B).” Para. 29, “The behavioral goal evaluation engine analyzes the emotion, activity, and context of the individual and updates a performance profile of the individual pertaining to a measurement of performance over time (e.g., maintaining activities to induce behavior modification to achieve a desired health or life goal, etc.). The performance profile may be compared to goals for the induced behavior modification to indicate a status of the individual with respect to those goals. 
For example, the behavioral goal evaluation engine may indicate trends of the individual with respect to the goals (e.g., progress, regress, sustained, etc.).” At least para. 38 and 39 discuss scoring in which the at least one metric quantifies a competency of the at least one user in the at least one psychological skill.);
generating, using the processing device, at least one resilience profile for the at least one user based on the at least one score for the at least one metric (Angelopoulos, para. 28, “generates an initial user behavior profile 350;… The user behavior profile basically learns how the user is feeling or functioning, and represents this in the user behavior profile based on responses by the individual to interventions.” Para. 29, “The behavioral goal evaluation engine analyzes the emotion, activity, and context of the individual and updates a performance profile of the individual pertaining to a measurement of performance over time (e.g., maintaining activities to induce behavior modification to achieve a desired health or life goal, etc.).”);
transmitting, using the communication device, the at least one resilience profile to the at least one device (Angelopoulos, para. 18, “The client systems 114 may present a graphical user (e.g., GUI, etc.) or other interface (e.g., command line prompts, menu screens, etc.)… may provide reports including analysis and behavior change results (e.g., progress towards attaining a goal, etc.).”); and
storing, using a storage device, at least one assessment data comprising the at least one response and the at least one score for the at least one metric, and the at least one resilience profile (Angelopoulos, para. 17, “A database system 118 may store various information for the analysis (e.g., behavior profiles, population data, artifacts, intervention information, etc.). The database system 118 may be implemented by any conventional or other database or storage unit, may be local to or remote from server systems 110 and client systems 114,”).
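As an illustrative note on the mapped scoring limitation: the claim recites an ERMQ score ranging from 0 to 100 derived from the responses and from the validity determination. A minimal sketch of such a computation follows; the specific weighting scheme (Likert 1-5 answers rescaled to 0-100, with invalid responses discarded) is an assumption for illustration only and is not drawn from Applicant's specification or from Angelopoulos.

```python
def ermq_score(responses, validity):
    """Combine answer-option values with per-response validity flags
    into a single 0-100 score (illustrative formula only)."""
    if not responses:
        return 0.0
    # Keep only responses whose sensor-based validity check passed.
    valid = [r for r, ok in zip(responses, validity) if ok]
    if not valid:
        return 0.0
    # Responses are assumed to be Likert values 1-5; rescale to 0-100.
    mean = sum(valid) / len(valid)
    return round((mean - 1) / 4 * 100, 1)

print(ermq_score([4, 5, 2, 3], [True, True, False, True]))  # 75.0
```

Any concrete formula of this kind would, under the rationale above, still amount to a mathematical calculation within the abstract idea grouping.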
Angelopoulos does not explicitly teach the at least one sensor comprising at least one a pupilometer and a galvanic skin response (GSR) sensor.
However, in an analogous art, Samec teaches the at least one sensor comprising at least one a pupilometer and a galvanic skin response (GSR) sensor (Samec, para. 341, “The user sensors may then detect whether an expected involuntary physiological response (e.g., pupil dilation or sweating)”).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention for the at least one sensor used to detect a physiological response of the at least one user for the at least one question to comprise a pupilometer and/or a GSR sensor as taught by Samec because “the accuracy of assessment and prediction of user states and conditions may increase due to… the variety of the data,… and the ability to collect multiple types of data simultaneously (thereby allowing different data to be cross-referenced, e.g., using time stamps and/or location stamps applied to all of the data), among other benefits, may increase the accuracy of any health analysis performed using user data… and may reveal relationships between health conditions or treatments and various measured variables that are otherwise not readily apparent.” See Samec at para. 221.
Regarding claims 2 and 12, Angelopoulos teaches the method of claim 1 and the system of claim 11 further comprising:
analyzing, using the processing device, the at least one response and the at least one resilience profile using at least one machine learning model, wherein the at least one machine learning model comprises at least one gradient-boosting decision tree model, wherein the at least one gradient-boosting decision tree model is trained on aggregated assessment data to generate personalized recommendations tailored to the at least one resilience profile of the at least one user (Angelopoulos, para. 14, “Present invention embodiments employ a repository of data/knowledge of populations, cohorts, and individuals. Analysis of this data results in insights that are used to determine which individual is selected to induce behavior to achieve a desired health or life goal, when is the optimal time for an intervention for an individual, and how is the individual influenced for an optimal outcome.” Para. 22, “feedback loop is utilized for continuous machine learning of the interventions that are successful for changing behavior to achieve a desired health or life goal at flow 236. These learning models are described in more detail below and may be implemented using, for example, k-nearest neighbor, learned decision trees, matrix factorization, neural networks, and/or Bayesian classifiers techniques. In addition, the learning models may be implemented by a Watson system (developed by International Business Machines Corporation) that uses machine learning functionalities and algorithms to learn about the interventions and derive heuristic information governing the intervention selection.” Para. 41 and 43, “The learning models may receive inputs, such as the positive and negative preferences, etc., and be trained to provide an explicit preference score as output based on an initial training set. The learning models may dynamically be updated based on new preference inputs.”);
generating, using the processing device, at least one recommendation for the at least one user based on the analyzing of the at least one response and the at least one resilience profile using the at least one machine learning model (Angelopoulos, para. 22, “feedback loop is utilized for continuous machine learning of the interventions that are successful for changing behavior to achieve a desired health or life goal at flow 236.”); and
transmitting, using the communication device, the at least one recommendation to the at least one device (Angelopoulos, para. 17, “Scheduler module 116 schedules transmission of an intervention based on information from behavior change module 120.”).
Regarding claims 3 and 13, Angelopoulos teaches the method of claim 2 and the system of claim 12, wherein a training process associated with the at least one machine learning model evolves the at least one machine learning model to learn non-linear relationships and interactions between assessment attributes and optimal recommendations (Angelopoulos, para. 51, “The behavior and decision models may initially be simplistic, and subsequently become more complex or evolve (e.g., based on learning, feedback, etc.) over time.”).
Angelopoulos does not explicitly teach wherein the at least one machine learning model is an ensemble of at least 100 decision trees, wherein a maximum decision tree depth for the at least one machine learning model is at least 15, wherein the ensemble of at least 100 decision trees is trained using grid search hyperparameter optimization.
However, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention for the ensemble of decision trees in the at least one machine learning model of Angelopoulos to comprise at least 100 decision trees with a maximum decision tree depth of at least 15, since it has been held to be within the general skill of a worker in the art to select a minimum number of decision trees and a maximum decision tree depth on the basis of their suitability for the intended use, such a selection being a matter of obvious design choice.
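As an illustrative note on why these hyperparameters are routine: the claimed configuration (an ensemble of at least 100 trees, maximum depth of at least 15, tuned by grid search) corresponds to ordinary hyperparameter selection in standard machine learning libraries. A minimal sketch using scikit-learn's GradientBoostingClassifier and GridSearchCV follows; the data, parameter values, and grid are assumptions for illustration, not Applicant's or Angelopoulos's actual implementation.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV

# Synthetic stand-in for aggregated assessment data (illustrative only).
X, y = make_classification(n_samples=120, n_features=10, random_state=0)

# Grid search over ensemble size and maximum tree depth -- the two
# hyperparameters recited in the claim (grid values are illustrative).
grid = GridSearchCV(
    GradientBoostingClassifier(random_state=0),
    param_grid={"n_estimators": [100, 150], "max_depth": [15, 20]},
    cv=3,
)
grid.fit(X, y)
print(grid.best_params_)  # best hyperparameters found by the grid search
```

Every candidate in this grid already satisfies the claimed ranges, which reflects that the recited thresholds are selections among conventional library defaults and options.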
Regarding claims 4 and 14, Angelopoulos teaches the method of claim 2 and the system of claim 12 further comprising:
retrieving, using the storage device, at least one of a plurality of historical assessment data associated with a time duration after elapsing of the time duration (Angelopoulos, Fig. 2A, Personal Knowledge and Data 260; para. 14, “The interventions may be dynamically changed based on a context of the user, and the effectiveness of the intervention may be determined or updated from the immediate behavioral response to the intervention and from historical behaviors of the user.”); and
performing, using the processing device, an incremental training of the at least one machine learning model using at least one of the plurality of historical assessment data, wherein the analyzing of the at least one response and the at least one resilience profile using the at least one machine learning model is further based on the performing of the incremental training of the at least one machine learning model (Angelopoulos, para. 14, “A feedback loop is employed for continuous learning and optimization of successful and unsuccessful interventions.”).
Regarding claims 5 and 15, Angelopoulos teaches the method of claim 1 and the system of claim 11 further comprising:
retrieving, using the storage device, a plurality of responses for a plurality of prompts associated with a plurality of users (Angelopoulos, Fig. 2A, Population Knowledge and Data 250);
performing, using the processing device, a statistical modeling on the plurality of responses for determining a plurality of psychological skill attributes using a factor analysis (Angelopoulos, para. 38, “A user behavior profile may be initialized for a user based on behavior profiles of other similar users (e.g., using clustering, collaborative-based filtering, etc.). For example, similarity metrics (e.g. Euclidean distance, cosine similarity, etc.) may be determined to compare a target user against all other users (e.g., from population repository 265) based on demographic data (e.g. age, gender, occupation, education level, medical conditions, etc.).”);
performing, using the processing device, a regression modeling on the plurality of psychological skill attributes for determining a weight for each of the plurality of psychological skill attributes (Angelopoulos, para. 38, “A user behavior profile may be initialized for a user based on behavior profiles of other similar users (e.g., using clustering, collaborative-based filtering, etc.). For example, similarity metrics (e.g. Euclidean distance, cosine similarity, etc.) may be determined to compare a target user against all other users (e.g., from population repository 265) based on demographic data (e.g. age, gender, occupation, education level, medical conditions, etc.).”); and
generating, using the processing device, the at least one algorithm based on the performing of the statistical modeling and the performing of the regression modeling (Angelopoulos, para. 41, “By way of example, an explicit preference score may be determined by adding positive preferences (e.g., thumbs up, high star rating, etc.) and subtracting negative preferences (e.g., thumbs down, low star rating, etc.) toward an intervention purpose. The positive and negative preferences may be combined in a weighted manner (e.g., assigning any desired weights to the preferences). In addition, learning models may be employed to determine the explicit preference score (e.g., k-nearest neighbor, learned decision trees, matrix factorization, neural networks, etc.). The learning models may receive inputs, such as the positive and negative preferences, etc., and be trained to provide an explicit preference score as output based on an initial training set. The learning models may dynamically be updated based on new preference inputs.” Para. 43, “By way of example, an implicit preference score may be determined by adding occurrences of positive indicators (e.g., goal progress, sufficient viewing duration, fast response to intervention, etc.) and subtracting occurrences of negative indicators (e.g., goal regression, minimal viewing duration, non-responsive to interventions, etc.) toward an intervention purpose. The positive and negative indicators may be combined in a weighted manner (e.g., assigning any desired weights to the indicators, etc.). In addition, learning models may be employed to determine the implicit preference score (e.g., k-nearest neighbor, learned decision trees, matrix factorization, neural networks, etc.). The learning models may receive inputs, such as the positive and negative indicators, etc., and be trained to provide an implicit preference score as output based on an initial training set. 
The learning models may dynamically be updated based on new indicator inputs.“).
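As an illustrative note on the claimed pipeline of claims 5 and 15 (statistical modeling of pooled responses via factor analysis to derive attributes, then regression modeling to derive a weight per attribute, then generating the scoring algorithm): a minimal sketch follows. The data, number of factors, and criterion variable are assumptions for illustration only, not Applicant's disclosed implementation.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
# Synthetic stand-in: 300 users' Likert (1-5) answers to 20 questions.
responses = rng.integers(1, 6, size=(300, 20)).astype(float)
# Stand-in criterion score for the regression step.
outcome = responses.mean(axis=1) + rng.normal(0, 0.1, 300)

# Step 1: factor analysis reduces responses to latent skill attributes.
fa = FactorAnalysis(n_components=4, random_state=0)
attributes = fa.fit_transform(responses)

# Step 2: regression against the criterion yields a weight per attribute.
reg = LinearRegression().fit(attributes, outcome)
weights = reg.coef_

# Step 3: the generated "algorithm" scores a user as a weighted sum.
def score(user_attributes):
    return float(user_attributes @ weights + reg.intercept_)
```

Under the rationale above, each of these steps is organizing and manipulating information through mathematical correlations, consistent with the mathematical concepts grouping.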
Regarding claims 6 and 16, Angelopoulos teaches the method of claim 5 and the system of claim 15, wherein the analyzing of the at least one response using the at least one algorithm comprises:
evaluating a competency of the at least one user against each of the plurality of psychological skill attributes based on the at least one response (Angelopoulos, para. 41, “By way of example, an explicit preference score may be determined by adding positive preferences (e.g., thumbs up, high star rating, etc.) and subtracting negative preferences (e.g., thumbs down, low star rating, etc.) toward an intervention purpose. The positive and negative preferences may be combined in a weighted manner (e.g., assigning any desired weights to the preferences). In addition, learning models may be employed to determine the explicit preference score (e.g., k-nearest neighbor, learned decision trees, matrix factorization, neural networks, etc.). The learning models may receive inputs, such as the positive and negative preferences, etc., and be trained to provide an explicit preference score as output based on an initial training set. The learning models may dynamically be updated based on new preference inputs.” Para. 43, “By way of example, an implicit preference score may be determined by adding occurrences of positive indicators (e.g., goal progress, sufficient viewing duration, fast response to intervention, etc.) and subtracting occurrences of negative indicators (e.g., goal regression, minimal viewing duration, non-responsive to interventions, etc.) toward an intervention purpose. The positive and negative indicators may be combined in a weighted manner (e.g., assigning any desired weights to the indicators, etc.). In addition, learning models may be employed to determine the implicit preference score (e.g., k-nearest neighbor, learned decision trees, matrix factorization, neural networks, etc.). The learning models may receive inputs, such as the positive and negative indicators, etc., and be trained to provide an implicit preference score as output based on an initial training set. The learning models may dynamically be updated based on new indicator inputs.“);
scoring each of the plurality of psychological skill attributes based on the evaluating (Angelopoulos, para. 41, “By way of example, an explicit preference score may be determined by adding positive preferences (e.g., thumbs up, high star rating, etc.) and subtracting negative preferences (e.g., thumbs down, low star rating, etc.) toward an intervention purpose. The positive and negative preferences may be combined in a weighted manner (e.g., assigning any desired weights to the preferences). In addition, learning models may be employed to determine the explicit preference score (e.g., k-nearest neighbor, learned decision trees, matrix factorization, neural networks, etc.). The learning models may receive inputs, such as the positive and negative preferences, etc., and be trained to provide an explicit preference score as output based on an initial training set. The learning models may dynamically be updated based on new preference inputs.” Para. 43, “By way of example, an implicit preference score may be determined by adding occurrences of positive indicators (e.g., goal progress, sufficient viewing duration, fast response to intervention, etc.) and subtracting occurrences of negative indicators (e.g., goal regression, minimal viewing duration, non-responsive to interventions, etc.) toward an intervention purpose. The positive and negative indicators may be combined in a weighted manner (e.g., assigning any desired weights to the indicators, etc.). In addition, learning models may be employed to determine the implicit preference score (e.g., k-nearest neighbor, learned decision trees, matrix factorization, neural networks, etc.). The learning models may receive inputs, such as the positive and negative indicators, etc., and be trained to provide an implicit preference score as output based on an initial training set. The learning models may dynamically be updated based on new indicator inputs.”); and
computing a weighted average score for the plurality of psychological skill attributes based on the weight of each of the plurality of psychological skill attributes and the scoring, wherein the generating of the at least one score for the at least one metric is further based on the computing (Angelopoulos, para. 41, “By way of example, an explicit preference score may be determined by adding positive preferences (e.g., thumbs up, high star rating, etc.) and subtracting negative preferences (e.g., thumbs down, low star rating, etc.) toward an intervention purpose. The positive and negative preferences may be combined in a weighted manner (e.g., assigning any desired weights to the preferences). In addition, learning models may be employed to determine the explicit preference score (e.g., k-nearest neighbor, learned decision trees, matrix factorization, neural networks, etc.). The learning models may receive inputs, such as the positive and negative preferences, etc., and be trained to provide an explicit preference score as output based on an initial training set. The learning models may dynamically be updated based on new preference inputs.” Para. 43, “By way of example, an implicit preference score may be determined by adding occurrences of positive indicators (e.g., goal progress, sufficient viewing duration, fast response to intervention, etc.) and subtracting occurrences of negative indicators (e.g., goal regression, minimal viewing duration, non-responsive to interventions, etc.) toward an intervention purpose. The positive and negative indicators may be combined in a weighted manner (e.g., assigning any desired weights to the indicators, etc.). In addition, learning models may be employed to determine the implicit preference score (e.g., k-nearest neighbor, learned decision trees, matrix factorization, neural networks, etc.). The learning models may receive inputs, such as the positive and negative indicators, etc., and be trained to provide an implicit preference score as output based on an initial training set. The learning models may dynamically be updated based on new indicator inputs.”).
Regarding claims 7 and 17, Angelopoulos teaches the method of claim 6 and the system of claim 16 further comprising:
obtaining, using the processing device, at least one data associated with the at least one user (Angelopoulos, para. 26, “Context detection engine 225 analyzes information from data sources 285”);
analyzing, using the processing device, the at least one data (Angelopoulos, para. 26, “Context detection engine 225 analyzes information from data sources 285”);
determining, using the processing device, a context associated with the assessing of the at least one user (Angelopoulos, para. 26, “Context detection engine 225… determines contexts of the individual over time.”);
modifying, using the processing device, the weight associated with at least one of the plurality of psychological skill attributes based on the context (Angelopoulos, para. 29, “The behavioral goal evaluation engine analyzes the emotion, activity, and context of the individual and updates a performance profile of the individual pertaining to a measurement of performance over time (e.g., maintaining activities to induce behavior modification to achieve a desired health or life goal, etc.).”); and
generating, using the processing device, a modified weight for at least one of the plurality of psychological skill attributes based on the modifying, wherein the computing of the weighted average score for the plurality of psychological skill attributes is further based on the modified weight of at least one of the plurality of psychological skill attributes (Angelopoulos, para. 29, “The behavioral goal evaluation engine analyzes the emotion, activity, and context of the individual and updates a performance profile of the individual pertaining to a measurement of performance over time (e.g., maintaining activities to induce behavior modification to achieve a desired health or life goal, etc.).”).
Regarding claims 8 and 18, Angelopoulos teaches the method of claim 1 and the system of claim 11 further comprising:
obtaining, using the processing device, at least one data associated with the at least one user (Angelopoulos, para. 26, “Context detection engine 225 analyzes information from data sources 285”);
analyzing, using the processing device, the at least one data (Angelopoulos, para. 26, “Context detection engine 225 analyzes information from data sources 285”);
determining, using the processing device, a context associated with the assessing of the at least one user (Angelopoulos, para. 26, “Context detection engine 225… determines contexts of the individual over time.”); and
generating, using the processing device, the at least one prompt information for the at least one prompt based on the determining of the context (Angelopoulos, para. 30, “The intervention engine analyzes the emotion, activity and context of the individual, the behavior profile, and the performance profile to determine an appropriate and personalized intervention for the individual”).
Regarding claims 10 and 20, Angelopoulos teaches the method of claim 1 and the system of claim 11, wherein the analyzing of the at least one response further comprises analyzing the at least one response using at least one behavioral model and at least one natural language processing (NLP) model, wherein the at least one behavioral model and the at least one NLP model are separately trained on a plurality of training responses (Angelopoulos, para. 24, “natural language processing (NLP) techniques may analyze textual information from the individual to determine the sentiment or mood of textual information of the individual, such as IBM's Watson Message Sentiment services.”), wherein the analyzing of the at least one response using the at least one behavioral model and the at least one NLP model comprises:
obtaining at least one first output from the at least one behavioral model by inputting the at least one response to the at least one behavioral model (Angelopoulos, para. 24, “The mental state detection engine analyzes information from data sources 285 and determines emotions of the individual over time. The mental state detection engine 255 may use any technique to estimate or determine what emotion(s) the individual is currently experiencing.”);
obtaining at least one second output from the at least one NLP model by inputting the at least one response to the at least one NLP model (Angelopoulos, para. 24, “natural language processing (NLP) techniques may analyze textual information from the individual to determine the sentiment or mood of textual information of the individual, such as IBM's Watson Message Sentiment services.”); and
combining the at least one first output and the at least one second output, wherein the generating of the at least one score for the at least one metric is further based on the combining (Angelopoulos, para. 24, “individual's emotion may be… refined from textual information.” Refining from textual information explicitly identifies that the NLP output is combined with the behavioral model output.).
Response to Arguments
Applicant's arguments with respect to the specification objections have been fully considered but they are not persuasive. While the amendments obviate some of the objections, they do not obviate others. Additionally, the amendments introduce new objections. Furthermore, Applicant is reminded that the lengthy specification has not been checked to the extent necessary to determine the presence of all possible minor errors. Applicant’s cooperation is requested in correcting any errors of which applicant may become aware in the specification.
Applicant's arguments with respect to the rejection of the claims under 35 USC 112(b) have been fully considered. The amendments obviate the rejection. Thus, this rejection has been withdrawn. However, Applicant is directed to the rejections above which address the amendments.
Applicant's arguments with respect to the rejections of the claims under 35 USC 112(a) have been fully considered but they are not persuasive. Applicant asserts that claims 1 and 11 have been amended to overcome the rejections.
Examiner is not persuaded. Applicant is directed to the rejections above which have been updated to address the amendments to the claims.
Applicant also asserts that para. 44, 45, 75, 76, 78, 81, and 190 of the specification provide support for the amendments.
Examiner is not persuaded. At least para. 75, 76, 78, 81, and 190 of the specification are explicitly identified as insufficient in the rejections.
In pg. 23, Applicant further asserts that the specification does not merely recite the algorithm but provides a process through which the algorithm is developed, namely an iterative process of statistical modeling on response data from over 800 subjects in which factor analysis is used to determine six core underlying resilience attributes (self-efficacy, coping skills, adaptability, perseverance, purpose, and relationships) for generating the ERMQ score. Here, Applicant points to para. 45 of the specification.
Examiner is not persuaded. This is merely a conclusory statement made without substantive support, and is not persuasive. Further, para. 45 merely recites similar conclusory language without any meaningful description to satisfy 35 USC 112(a).
In pg. 23-25, Applicant asserts that claims 1 and 11 do not specify a desired result but recite a procedure and that the specification describes the corresponding workflow for amended claims 1 and 11 in procedural terms, and thus satisfy the written description requirement of 35 USC 112(a).
Examiner is not persuaded. Applicant ignores that the claims are rejected for specific limitations, and further misconstrues the identification that the disclosure merely recites, in results-based language, that the claimed functions recited in those specific limitations are performed as an assertion that the claims specify a desired result. The identification that the disclosure merely recites that a function is performed in results-based language is an identification that the disclosure does not describe in any meaningful terms how the function is performed, as required by 35 USC 112(a). As identified in the rejections, much of the claimed functions involve the implementation of algorithms. At least MPEP 2161.01(I) identifies that the algorithm or steps/procedure taken to perform the function must be described with sufficient detail so that one of ordinary skill in the art would understand how the inventor intended the function to be performed. It is not enough that one skilled in the art could write a program to achieve the claimed function because the specification must explain how the inventor intends to achieve the claimed function to satisfy the written description requirement. As the disclosure fails to provide this, Applicant has also failed to satisfy the written description requirement of 35 USC 112(a).
In pg. 25-26, Applicant asserts that claims 2 and 12 are amended to overcome the rejection and that para. 83, 87, 164, and 221 provide support.
Examiner is not persuaded. Applicant is directed to the rejection which has been updated to address the amendments to the claims. Furthermore, para. 83, 87, 164, and 221 are explicitly identified as insufficient.
In pg. 27, Applicant asserts that claims 2 and 12 describe analyzing the response and the profile using at least one machine-learning model that comprises a gradient-boosting decision tree model, which is trained on aggregated assessment data to generate personalized recommendations tailored to a user's resilience profile, and generating a recommendation for the user based on that analysis, where the recommendation includes personalized guidance. Applicant further asserts that these limitations set out the model class, training corpus, tailoring objective, inference inputs/outputs, and the dependence of recommendation generation on the ML analysis.
Examiner is not persuaded. In particular, claims 2 and 12 merely recite that this is performed; they do not describe how it is performed. Furthermore, these limitations are silent regarding any meaningful description setting out the model class, training corpus, tailoring objective, inference inputs/outputs, and the dependence of recommendation generation on the ML analysis.
In pg. 27-29, Applicant repeats arguments regarding the claims reciting a procedure, but directed towards claims 2, 6, 12, and 16.
Examiner is not persuaded. Applicant is directed to the response above which addresses these arguments. It is noted that, contrary to Applicant’s assertions, the specification merely recites that functions are performed, it does not describe the functions themselves.
In pg. 30-33, Applicant asserts that claims 4, 5, 14, and 15 also recite a procedure instead of a desired result.
Examiner is not persuaded. Applicant is directed to the response above which addresses such arguments. It is noted that, contrary to Applicant’s assertions, the specification merely recites that functions are performed, it does not describe the functions themselves.
In pg. 33-36, Applicant asserts that claims 7, 8, 17, and 18 also recite a procedure instead of a desired result.
Examiner is not persuaded. Applicant is directed to the response above which addresses such arguments. It is noted that, contrary to Applicant’s assertions, the specification merely recites that functions are performed, it does not describe the functions themselves.
In pg. 36-38, Applicant asserts that claims 10 and 20 also recite a procedure instead of a desired result.
Examiner is not persuaded. Applicant is directed to the response above which addresses such arguments. It is noted that, contrary to Applicant’s assertions, the specification merely recites that functions are performed, it does not describe the functions themselves.
Applicant's arguments with respect to the rejection of the claims under 35 USC 101 have been fully considered but they are not persuasive. In pg. 38-49, Applicant asserts that claims 1 and 11 have been amended to overcome the rejections and that the amendments are supported by para. 44, 45, 75, 76, 78, 81, 190, 191, and 202 of the specification.
Examiner is not persuaded. Applicant is directed to the rejections above, which have been updated to address the amendments to the claims. It is further noted that many of these specification paragraphs are identified in the rejections under 35 USC 112(a) as providing insufficient description. This is noteworthy because it evidences the generic nature of any additional elements and the absence of any specific rules.
In pg. 49-50, Applicant recites from amended claims 1 and 11 and asserts that they are similar to the claims in Enfish and Core Wireless.
Examiner is not persuaded. In particular, no aspect of the pending claims is similar to those found patent eligible in Enfish and Core Wireless. As identified in the rejection, the claims are directed to merely collecting information, analyzing the collected information, and outputting the results of the collection and analysis, which is wholly encompassed in the judicial exception. In contrast, the claims in both Enfish and Core Wireless were found patent eligible for improving the functioning of the computer itself.
In pg. 50-52, Applicant again recites from amended claims 1 and 11 and asserts that these limitations recite a non-conventional ordered combination that amounts to significantly more than the judicial exception.
Examiner is not persuaded. Just as above, Applicant’s arguments merely act to strengthen the findings that the claims are directed to a judicial exception without significantly more.
In pg. 53-54, Applicant asserts that limitations of amended claims 1 and 11 are not well understood, routine, and conventional.
Examiner is not persuaded. This is merely a conclusory statement made without substantive support, and is not persuasive. In contrast, Applicant is directed to the rejection of the claims, which has been updated to address the amendments to the claims and which identifies that the claims, at best, merely provide an innovation encompassed within the judicial exception itself, and thus remain directed to patent ineligible subject matter. See SAP America, Inc. v. InvestPic, LLC, 890 F.3d 1016, 126 USPQ2d 1638 (Fed. Cir. 2018).
Applicant's arguments with respect to the rejection of the claims under 35 USC 102 have been fully considered but they are not persuasive. In pg. 54-59, Applicant asserts that the claims have been amended to overcome the rejections.
Examiner is not persuaded. Applicant is directed to the updated rejections under 35 USC 103 which address the amendments to the claims.
Applicant's arguments with respect to the rejection of claims 3 and 13 under 35 USC 103 have been fully considered but they are not persuasive. In pg. 59-61, Applicant asserts that claims 3 and 13 have been amended to overcome the rejections.
Examiner is not persuaded. Applicant is directed to the updated rejections which address the amendments to the claims.
The rejections stand.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DANIEL LANE whose telephone number is (303)297-4311. The examiner can normally be reached Monday - Friday 8:00 - 4:30 MT.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Xuan Thai can be reached at (571) 272-7147. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DANIEL LANE/Examiner, Art Unit 3715
/XUAN M THAI/Supervisory Patent Examiner, Art Unit 3715