DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Response to Amendment
This is a response to Applicant’s amendment filed on 15 September 2025, wherein:
Claims 1-6 and 8-20 are amended.
Claim 7 is canceled.
Claims 1-6 and 8-20 are pending.
Specification
The disclosure is objected to because of the following informalities:
The specification remains replete with grammatical and idiomatic errors.
Appropriate correction is required.
The lengthy specification has not been checked to the extent necessary to determine the presence of all possible minor errors. Applicant’s cooperation is requested in correcting any errors of which applicant may become aware in the specification.
Claim Objections
Claims 1-6 and 8-20 are objected to because of the following informalities:
Claims 1, 19, and 20 each recite “in in” instead of “is in” in the limitation “second moving image that indicates that the user in in the state different from the mindful state”. Appropriate correction is required.
Dependent claims 2-6 and 8-18 inherit the deficiencies of their respective parent claims, and are thus objected to under the same rationale.
Claim Interpretation
The text of those sections of Title 35, U.S. Code 112(f) not included in this action can be found in a prior Office action.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:
“at least one imaging device configured to: capture a plurality of moving images of the user at a plurality of time periods; capture a first moving image of the plurality of moving images at a first time period in which the user is in a mindful state,… and capture a second moving image of the plurality of moving images at a second time period in which the user is in a state different from the mindful state” in claim 1.
“the at least one imaging device is further configured to capture a third moving image at a third time period in which a state of the user is transitioned between the mindful state and the state different from the mindful state” in claim 8.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 112
The text of those sections of Title 35, U.S. Code 112(b) not included in this action can be found in a prior Office action.
Claims 1-6 and 8-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim limitations “at least one imaging device configured to: capture a plurality of moving images of the user at a plurality of time periods; capture a first moving image of the plurality of moving images at a first time period in which the user is in a mindful state,… and capture a second moving image of the plurality of moving images at a second time period in which the user is in a state different from the mindful state” in claim 1 and “the at least one imaging device is further configured to capture a third moving image at a third time period in which a state of the user is transitioned between the mindful state and the state different from the mindful state” in claim 8 invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. The disclosure is silent regarding any explicit recitation of any imaging device configured to capture a plurality of moving images. Furthermore, the imaging device itself is not described in any manner. The closest language, and the only mention of “an imaging device” (the disclosure is silent regarding “at least one imaging device”), is found in para. 19, which recites that the “measurement apparatus 12 may include an imaging device. By analyzing an image obtained from the imaging device, for example, a complexion, a facial expression, movement, a voice (breathing sound), and the like of the student A may be analyzed, and an analysis results may be used to detect a mindful state.” Yet, the disclosure is silent regarding any meaningful description of the steps, calculations, or algorithms necessary to perform the claimed functionality. Therefore, the claims are indefinite and are rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph.
Dependent claims 2-6 and 8-18 inherit the deficiencies of their respective parent claims, and are thus rejected under the same rationale.
Applicant may:
(a) Amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph;
(b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(c) Amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)).
If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either:
(a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(b) Stating on the record what the corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed function. For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.
Claim 1 recites the limitation "the first moving image, that indicates the user is in the mindful state" in lines 18-19 of the claim. There is insufficient antecedent basis for this limitation in the claim. In particular, the only language preceding this limitation that recites anything resembling “indicating that the user is in the mindful state” is “determine whether the user is in the mindful state based on the acquired biological information”. The preceding language regarding the first moving image is merely identifying that it is captured “at a first time period in which the user is in a mindful state”. It is not limited to the moment that the user is in the mindful state. Thus, there is no antecedent basis for the first moving image indicating that the user is in the mindful state. Dependent claims 2-6 and 8-18 inherit the deficiencies of their respective parent claims, and are thus rejected under the same rationale.
Claim 1 recites the limitation "the second moving image, that indicates the user in in the state different from the mindful state" in lines 21-23 of the claim. There is insufficient antecedent basis for this limitation in the claim. In particular, the only language preceding this limitation that recites anything resembling “indicating that the user is in the state different from the mindful state” is “determine whether the user is in the mindful state based on the acquired biological information”. The preceding language regarding the second moving image is merely identifying that it is captured “at a second time period in which the user is in a state different from the mindful state”. It is not limited to the moment that the user is in the state different from the mindful state. Thus, there is no antecedent basis for the second moving image indicating that the user is in the state different from the mindful state. Dependent claims 2-6 and 8-18 inherit the deficiencies of their respective parent claims, and are thus rejected under the same rationale.
Claim 8 recites the limitation "the third moving image that indicates the transition between the mindful state and the state different from the mindful state" in lines 13-15 of the claim. There is insufficient antecedent basis for this limitation in the claim. In particular, the only language preceding this limitation that recites anything resembling “indicating the transition between the mindful state and the state different from the mindful state” is “determine, based on the biological information, a transition of the state of the user between the mindful state and the state different from the mindful state”. The preceding language regarding the third moving image is merely identifying that it is captured “at a third time period in which a state of the user is transitioned between the mindful state and the state different from the mindful state”. In other words, there is nothing that links the third moving image to the function of indicating that the user is transitioned between the mindful state and the state different from the mindful state. Thus, there is no antecedent basis for the third moving image indicating that the user is transitioned between the mindful state and the state different from the mindful state.
Claims 19 and 20 each recite the limitation "the captured first moving image that indicates the user is in the mindful state" in lines 12-13 of claim 19 and lines 14-15 of claim 20. There is insufficient antecedent basis for this limitation in the claim. In particular, the only language preceding this limitation that recites anything resembling “indicating that the user is in the mindful state” is “determine whether the user is in the mindful state based on the acquired biological information”. The preceding language regarding the first moving image is merely identifying that it is captured “at a first time period in which the user is in a mindful state”. In other words, there is nothing that links the first moving image to the function of indicating that the user is in the mindful state. Thus, there is no antecedent basis for the first moving image indicating that the user is in the mindful state.
Claims 19 and 20 each recite the limitation "the captured second moving image that indicates that the user in in the state different from the mindful state" in lines 14-15 of claim 19 and lines 16-17 of claim 20. There is insufficient antecedent basis for this limitation in the claim. In particular, the only language preceding this limitation that recites anything resembling “indicating that the user is in the state different from the mindful state” is “determine whether the user is in the mindful state based on the acquired biological information”. The preceding language regarding the second moving image is merely identifying that it is captured “at a second time period in which the user is in a state different from the mindful state”. In other words, there is nothing that links the second moving image to the function of indicating that the user is in the state different from the mindful state. Thus, there is no antecedent basis for the second moving image indicating that the user is in the state different from the mindful state.
The text of those sections of Title 35, U.S. Code 112(a) not included in this action can be found in a prior Office action.
Claims 1-6 and 8-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Regarding claim 1, the originally filed disclosure is silent regarding “at least one imaging device”. Claims may lack written description when the claims define the invention in functional language specifying a desired result but the specification does not sufficiently describe how the function is performed or the result is achieved. In particular, the only mention of an imaging device is found in para. 19, which recites that the “measurement apparatus 12 may include an imaging device”, i.e., only one imaging device. This is distinct from the newly claimed “at least one imaging device”, which includes the embodiment of more than one imaging device. Therefore, this is new matter. Such a limitation lacks an adequate written description because an indefinite, unbounded limitation would cover all ways of performing a function and indicate that the inventor has not provided sufficient disclosure to show possession of the invention. See MPEP 2163.03(VI). Dependent claims 2-6 and 8-18 inherit the deficiencies of their respective parent claims, and are thus rejected under the same rationale.
Regarding claims 1 and 8, the disclosure fails to provide sufficient written description for “at least one imaging device configured to: capture a plurality of moving images of the user at a plurality of time periods; capture a first moving image of the plurality of moving images at a first time period in which the user is in a mindful state,… and capture a second moving image of the plurality of moving images at a second time period in which the user is in a state different from the mindful state” in claim 1 and “the at least one imaging device is further configured to capture a third moving image at a third time period in which a state of the user is transitioned between the mindful state and the state different from the mindful state” in claim 8. Claims may lack written description when the claims define the invention in functional language specifying a desired result but the specification does not sufficiently describe how the function is performed or the result is achieved. However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. The disclosure is silent regarding any explicit recitation of any imaging device configured to capture a plurality of moving images. Furthermore, the imaging device itself is not described in any manner. The closest language, and the only mention of “an imaging device” (the disclosure is silent regarding “at least one imaging device” as identified above), is found in para. 19, which recites that the “measurement apparatus 12 may include an imaging device. By analyzing an image obtained from the imaging device, for example, a complexion, a facial expression, movement, a voice (breathing sound), and the like of the student A may be analyzed, and an analysis results may be used to detect a mindful state.” Yet, the disclosure is silent regarding any meaningful description of the steps, calculations, or algorithms necessary to perform the claimed functionality. Such a limitation lacks an adequate written description because an indefinite, unbounded limitation would cover all ways of performing a function and indicate that the inventor has not provided sufficient disclosure to show possession of the invention. See MPEP 2163.03(VI). Dependent claims 2-6 and 8-18 inherit the deficiencies of their respective parent claims, and are thus rejected under the same rationale.
Regarding claims 1, 8, 19, and 20, the disclosure fails to provide sufficient written description for “determine whether the user is in the mindful state based on the acquired biological information” in claims 1, 19, and 20 and “determine, based on the biological information, a transition of the state of the user between the mindful state and the state different from the mindful state” in claim 8 to show one of ordinary skill in the art that Applicant had possession of the claimed invention. Claims lack written description when the claims define the invention in functional language specifying a desired result but the specification does not sufficiently describe how the function is performed or the result is achieved. For software, this can occur when the algorithm or steps/procedure for performing the computer function are not explained at all or are not explained in sufficient detail (simply restating the function recited in the claim is not necessarily sufficient). In other words, the algorithm or steps/procedure taken to perform the function must be described with sufficient detail so that one of ordinary skill in the art would understand how the inventor intended the function to be performed. It is not enough that one skilled in the art could write a program to achieve the claimed function because the specification must explain how the inventor intends to achieve the claimed function to satisfy the written description requirement. See MPEP 2161.01(I). See, for example, at least para. 38, 40, 41, and 47 of the specification. For instance, para. 40 recites “In a case where the measurement apparatus 12 is a sensor that measures brain waves or respiration, the analysis unit 63 detects a mindful state of the student A using information obtained from the sensor. In this case, the analysis unit 63 may include a learning model obtained through machine learning and be configured to detect a mindful state of the student A using the learning model and the information from the sensor.” A nondescript “learning model obtained through machine learning” is recited in para. 40, but is never identified or explained. Dependent claims 2-6 and 8-18 inherit the deficiencies of their respective parent claims, and are thus rejected under the same rationale.
Further regarding claims 1, 19, and 20, the disclosure fails to provide sufficient written description for “generate, based on the biological information, the first moving image, and the second moving image, image data that indicates information associated with each of the detected mindful state of the user and the state different from the mindful state of the user” to show one of ordinary skill in the art that Applicant had possession of the claimed invention. Claims may lack written description when the claims define the invention in functional language specifying a desired result but the specification does not sufficiently describe how the function is performed or the result is achieved. For software, this can occur when the algorithm or steps/procedure for performing the computer function are not explained at all or are not explained in sufficient detail (simply restating the function recited in the claim is not necessarily sufficient). In other words, the algorithm or steps/procedure taken to perform the function must be described with sufficient detail so that one of ordinary skill in the art would understand how the inventor intended the function to be performed. It is not enough that one skilled in the art could write a program to achieve the claimed function because the specification must explain how the inventor intends to achieve the claimed function to satisfy the written description requirement. See MPEP 2161.01(I). See, for example, at least para. 42 and 48, which recite similar language as the claims without any meaningful description of the steps, calculations, or algorithms necessary to perform the claimed functionality.
Dependent claims 2-6 and 8-18 inherit the deficiencies of their respective parent claims, and are thus rejected under the same rationale.
Claim Rejections - 35 USC § 101
The text of those sections of Title 35, U.S. Code 101 not included in this action can be found in a prior Office action.
Claims 1-6 and 8-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without including additional elements that are sufficient to amount to significantly more than the judicial exception itself.
Step 1
The claims are directed to an apparatus, a method, and a product, which fall under at least one of the four statutory categories (STEP 1: YES).
Step 2A, Prong 1
Independent claim 1 recites:
An information processing apparatus, comprising:
at least one sensor configured to detect biological information associated with a user;
at least one imaging device configured to:
capture a plurality of moving images of the user at a plurality of time periods;
capture a first moving image of the plurality of moving images at a first time period in which the user is in a mindful state, wherein the plurality of time periods includes the first time period; and
capture a second moving image of the plurality of moving images at a second time period in which the user is in a state different from the mindful state, wherein the plurality of time periods includes the second time period; and
at least one processor configured to:
acquire, from the at least one sensor, the biological information associated with the user;
determine whether the user is in the mindful state based on the acquired biological information;
in a case where the user is in the mindful state, acquire the first moving image, that indicates the user is in the mindful state, from the at least one imaging device; and
in a case where the user is in a state different from the mindful state, acquire the second moving image, that indicates that the user in in the state different from the mindful state, from the at least one imaging device;
generate, based on the biological information, the first moving image, and the second moving image, image data that indicates information associated with each of the detected mindful state of the user and the state different from the mindful state of the user; and
control, based on the generated image data, a display device to display a screen that includes each of the first moving image and the second moving image.
Independent claim 19 recites:
An information processing method, comprising:
detecting biological information associated with a user;
capturing a plurality of moving images of the user at a plurality of time periods;
capturing a first moving image of the plurality of moving images at a first time period in which the user is in a mindful state;
capturing a second moving image of the plurality of moving images at a second time period in which the user is in a state different from the mindful state, wherein the plurality of time periods includes the first time period and second time period;
determining whether the user is in the mindful state based on the biological information;
in a case where the user is in the mindful state, acquiring the captured first moving image that indicates the user is in the mindful state; and
in a case where the user is in a state different from the mindful state, acquiring the captured second moving image that indicates that the user in in the state different from the mindful state;
generating, based on the biological information, the first moving image, and the second moving image, image data that indicates information associated with each of the detected mindful state of the user and the state different from the mindful state of the user; and
controlling, based on the generated image data, a display device to display a screen that includes each of the first moving image and the second moving image.
Independent claim 20 recites:
A non-transitory computer-readable medium having stored thereon, computer-executable instructions which, when executed by a processor, cause the processor to execute operations, the operations comprising:
detecting biological information associated with a user;
capturing a plurality of moving images of the user at a plurality of time periods;
capturing a first moving image of the plurality of moving images at a first time period in which the user is in a mindful state;
capturing a second moving image of the plurality of moving images at a second time period in which the user is in a state different from the mindful state, wherein the plurality of time periods includes the first time period and the second time period;
determining whether the user is in the mindful state based on the biological information;
in a case where the user is in the mindful state, acquiring the captured first moving image that indicates the user is in the mindful state; and
in a case where the user is in a state different from the mindful state, acquiring the captured second moving image that indicates that the user in in the state different from the mindful state;
generating, based on the biological information, the first moving image, and the second moving image, image data that indicates information associated with each of the detected mindful state of the user and the state different from the mindful state of the user; and
controlling, based on the generated image data, a display device to display a screen that includes each of the first moving image and the second moving image.
All of the foregoing underlined elements identified above amount to the abstract idea grouping of a certain method of organizing human activity because they amount to managing personal behavior or interactions between people (including social activities, teaching, and following rules or instructions) by merely collecting information, analyzing the collected information, and outputting the results of the collection and analysis in the context of a biofeedback process. These elements are also interpreted as a series of steps that could reasonably be performed by mental processes with the aid of pen and paper because the claims, under their broadest reasonable interpretation, cover performance of the limitations in the mind (including observation, evaluation, judgment, opinion) but for the recitation of generic computer components. See MPEP 2106.04(a)(2)(III)(C) - A Claim That Requires a Computer May Still Recite a Mental Process. Even if humans would use a physical aid to help them complete the recited steps, the use of such physical aid does not negate the mental nature of these limitations. It is noted that “a screen” as claimed and disclosed is merely a visual output, not a physical element in the way that a smartphone touchscreen is a physical element.
The dependent claims amount to merely further defining the judicial exception.
Therefore, the claims recite a judicial exception. (STEP 2A, PRONG 1: YES).
Step 2A, Prong 2
This judicial exception is not integrated into a practical application because the independent and dependent claims do not include additional elements that are sufficient to integrate the exception into a practical application under the considerations set forth in MPEP 2106.04(d). The elements of the claims above that are not underlined constitute additional elements.
The following additional elements, both individually and as a whole, merely generally link the judicial exception to a particular technological environment or field of use: an information processing apparatus (claim 1), at least one sensor configured to detect biological information (claim 1), at least one imaging device (claim 1), at least one processor (claim 1), a display device (claims 1, 19, and 20), a non-transitory computer-readable medium (claim 20), and a processor (claim 20). This is evidenced by the manner in which these elements are disclosed in the drawings and the instant specification. For example, Fig. 1-3 merely illustrate these elements as nondescript black boxes, while Fig. 4-25 illustrate that the claimed invention is purely software. Similarly, para. 14-44 merely provide stock descriptions of generic computer hardware and software components in any generic arrangement and illustrate that the claimed invention is merely using a software application to cause a computer to implement the judicial exception. Thus, the computer components are merely an attempt to link the abstract idea to a particular technological environment, but do not result in an improvement to the technology or computer functions employed. The claims are silent regarding any specific rules with specific characteristics that improve the functionality of the computer system. None of the hardware elements offers a meaningful limitation beyond generally linking the performance of the steps to a particular technological environment, that is, implementation via computers. Again, this is evidenced by the manner in which these elements are disclosed in the drawings and specification as identified above. It should be noted that because the courts have made it clear that mere physicality or tangibility of an additional element or elements is not a relevant consideration in the eligibility analysis, the physical nature of the additional elements does not affect this analysis.
See MPEP 2106.05(I) for more information on this point, including explanations from judicial decisions including Alice Corp. Pty. Ltd. v. CLS Bank Int'l, 573 U.S. 208, 224-26 (2014). Similarly, the references in the claims to displaying a screen and related language amount merely to the presentation of information, not any physical screen, and thus do not amount to an additional element. Additionally, the claims do not apply or use a judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition, nor do they apply or use a judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment. For instance, the disclosure identifies that the claimed invention is for presenting a result of detecting a mindful state to a student who is learning mindfulness or to a teacher. See, for example, at least para. 4 of the specification. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. (STEP 2A, PRONG 2: NO).
Step 2B
The independent and dependent claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception under the considerations set forth in MPEP 2106.05. As identified in Step 2A, Prong 2, above, the claimed system and the process it performs do not require the use of a particular machine, nor do they result in the transformation of an article. Although the claims recite elements, identified above, for performing at least some of the recited functions, these elements are recited at a high level of generality in a conventional arrangement for performing their basic computer functions (i.e., collecting, receiving, processing, and outputting data). This is evidenced by the manner in which these elements are disclosed in the instant specification. For example, Fig. 1-3 merely illustrate these elements as nondescript black boxes or stock images, while Fig. 4-25 illustrate that the claimed invention is purely software. Similarly, para. 14-44 merely provide stock descriptions of generic computer hardware and software components in any generic arrangement and illustrate that the claimed invention is focused on a software application that merely causes a computer to implement the judicial exception. Thus, the computer components are merely an attempt to link the abstract idea to a particular technological environment, but do not result in an improvement to the technology or computer functions employed. The claims do not recite any specific rules with specific characteristics that improve the functionality of the computer system. Thus, the focus of the claimed invention is on the analysis of the collected data, which is itself at best merely an improvement within the abstract idea. See pg. 2-3 of SAP America, Inc. v. InvestPic, LLC, 890 F.3d 1016, 126 USPQ2d 1638 (Fed. Cir. 2018), which proffered, “[w]e may assume that the techniques claimed are groundbreaking, innovative, or even brilliant, but that is not enough for eligibility.
Nor is it enough for subject-matter eligibility that claimed techniques be novel and nonobvious in light of prior art, passing muster under 35 U.S.C. §§ 102 and 103. The claims here are ineligible because their innovation is an innovation in ineligible subject matter. Their subject is nothing but a series of mathematical calculations based on selected information and the presentation of the results of those calculations.” Furthermore, the steps are merely recited to be performed by, or using, the elements while the specification makes clear that the computerized system itself is ancillary to the claimed invention as identified above. This further identifies that none of the hardware offers a meaningful limitation beyond, at best, generally linking the performance of the steps to a particular technological environment, that is, implementation via computers. Viewed as a whole, these additional claim elements do not provide a meaningful limitation to transform the abstract idea into a patent-eligible application of the abstract idea such that the claims amount to significantly more than the abstract idea itself (STEP 2B: NO).
Therefore, the claims are rejected under 35 USC 101 as being directed to non-statutory subject matter.
Claim Rejections - 35 USC § 102
The text of those sections of Title 35, U.S. Code 102 not included in this action can be found in a prior Office action.
Claims 1-6 and 8-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Coleman (US 2015/0351655).
Regarding claims 1, 19, and 20, Coleman teaches an information processing apparatus (claim 1), an information processing method (claim 19), and a non-transitory computer-readable medium having stored thereon, computer-executable instructions which, when executed by a processor, cause the processor to execute operations (claim 20), the operations comprising:
detecting biological information associated with a user (Coleman ‘010, para. 49, “a wearable sensor 102 for collecting biological data, such as a commercially available consumer grade EEG headset with one or more electrodes for collecting brainwaves from the user.” Para. 50, “The one or more internal sensors 106, external sensors 104, or wearable sensors 102 may collect bio-signal or non-bio-signal data other than EEG data. For example, bio-signal data may include heart rate or blood pressure, while non-bio-signal data may include time GPS location, barometric pressure, acceleration forces, ambient light, sound, and other data. Bio-signal and non-bio-signal data can be captured by the internal sensors 106, external sensors 104, or both.” Coleman ‘655, Fig. 1, 1. Acquire - Sensor Data; para. 123, “Sensors for collecting bio-signal data include, for example, electroencephalogram sensors, galvanometer sensors, or electrocardiograph sensors. For example, a wearable sensor for collecting biological data, such as a commercially available consumer grade EEG headset with one or more electrodes for collecting brainwaves from the user.”);
capturing a plurality of moving images of the user at a plurality of time periods (Coleman ‘010, para. 186, “We want to timestamp every sample (e.g. … camera… etc.), each signal can be thought of as a series of samples at a discrete point in time where each sample has a corresponding timestamp.” Coleman ‘655, para. 345-347, “ABCN [Adaptive Brainstate Change Notification] neurofeedback, also called neurotherapy or neurobiofeedback, is a type of biofeedback that uses real time displays of electroencephalography (EEG) or hemoencephalography (HEG) to illustrate brain activity and teach self-regulation. EEG neurofeedback uses sensors that are placed on the scalp to measure brain waves, while HEG neurofeedback uses infrared (IR) sensors or functional magnetic resonance imaging (fMRI) to measure brain blood flow. FNIRS functional near-infrared spectroscopy is a form of neurofeedback (HEG) for the purpose of functional neuroimaging. Using fNIR, brain activity is measured through hemodynamic responses associated with neuron behavior. fNIR is a non-invasive imaging method involving the quantification of chromophore concentration resolved from the measurement of near infrared (NIR) light attenuation, temporal or phasic changes.”);
capturing a first moving image of the plurality of moving images at a first time period in which the user is in a mindful state (Coleman ‘010, para. 371, “output of a digital camera or video camera may be tagged with brain state information”. As identified above, the output of the camera is also timestamped. Coleman ‘655, para. 345-347, “ABCN [Adaptive Brainstate Change Notification] neurofeedback, also called neurotherapy or neurobiofeedback, is a type of biofeedback that uses real time displays of electroencephalography (EEG) or hemoencephalography (HEG) to illustrate brain activity and teach self-regulation. EEG neurofeedback uses sensors that are placed on the scalp to measure brain waves, while HEG neurofeedback uses infrared (IR) sensors or functional magnetic resonance imaging (fMRI) to measure brain blood flow. FNIRS functional near-infrared spectroscopy is a form of neurofeedback (HEG) for the purpose of functional neuroimaging. Using fNIR, brain activity is measured through hemodynamic responses associated with neuron behavior. fNIR is a non-invasive imaging method involving the quantification of chromophore concentration resolved from the measurement of near infrared (NIR) light attenuation, temporal or phasic changes.”);
capturing a second moving image of the plurality of moving images at a second time period in which the user is in a state different from the mindful state, wherein the plurality of time periods includes the first time period and the second time period (Coleman ‘010, para. 371, “output of a digital camera or video camera may be tagged with brain state information”. As identified above, the output of the camera is also timestamped. Coleman ‘655, para. 345-347, “ABCN [Adaptive Brainstate Change Notification] neurofeedback, also called neurotherapy or neurobiofeedback, is a type of biofeedback that uses real time displays of electroencephalography (EEG) or hemoencephalography (HEG) to illustrate brain activity and teach self-regulation. EEG neurofeedback uses sensors that are placed on the scalp to measure brain waves, while HEG neurofeedback uses infrared (IR) sensors or functional magnetic resonance imaging (fMRI) to measure brain blood flow. FNIRS functional near-infrared spectroscopy is a form of neurofeedback (HEG) for the purpose of functional neuroimaging. Using fNIR, brain activity is measured through hemodynamic responses associated with neuron behavior. fNIR is a non-invasive imaging method involving the quantification of chromophore concentration resolved from the measurement of near infrared (NIR) light attenuation, temporal or phasic changes.”);
determining whether the user is in the mindful state based on the biological information (Coleman ‘010, at least para. 250-263 describe this with respect to the Common Spatial Pattern (CSP) Pipeline example. Coleman ‘655, Fig. 1, 2. Analyze - feature extractor, 3. Interpret – Brain state estimator; Fig. 69, In-State Mindful Attention);
in a case where the user is in the mindful state, acquiring the captured first moving image that indicates the user is in the mindful state (Coleman ‘010, para. 371, “output of a digital camera or video camera may be tagged with brain state information”. As identified above, the output of the camera is also timestamped. Coleman ‘655, Fig. 69, In-Brainstate Notification); and
in a case where the user is in a state different from the mindful state, acquiring the captured second moving image that indicates that the user in in the state different from the mindful state (Coleman ‘010, para. 371, “output of a digital camera or video camera may be tagged with brain state information”. As identified above, the output of the camera is also timestamped. Coleman ‘655, Fig. 69, Off-Brainstate Notification);
generating, based on the biological information, the first moving image, and the second moving image, image data that indicates information associated with each of the detected mindful state of the user and the state different from the mindful state of the user (Coleman ‘010, para. 371, “output of a digital camera or video camera may be tagged with brain state information”. As identified above, the output of the camera is also timestamped. Coleman ‘655, Fig. 1, Present – Notification Rules, Notifications or Stimulus; para. 59, “Goal states may include: mindfulness, focused attention, open presence (open monitoring), positive emotions, and visualization, among others. Each of these goal states may be referred to as a meditative brain state, but the present invention is not limited only to meditative brain states.”); and
controlling, based on the generated image data, a display device to display a screen that includes each of the first moving image and the second moving image (Coleman ‘010, Fig. 17, Video Display 508; para. 52, “User effectors 110 are for providing feedback to the user. A "user effector" may be many manner of device or mechanism for having an effect on a user or triggering a user, for example to act on a message. For example, user effector 110 could be a… visual indication on a display or some other way of having an effect on the user.” Para. 53, “User effectors 110 can be used, for example, to provide real-time feedback on characteristics related to the user's current mental state for example. These user effectors may also be used to assist a user in achieving a particular mental state, such as, for example, a meditative state. The user effector 110 may be implemented for example using a graphical user interface designed to enable a user to interact with a meditation training application in an effective manner.” Para. 187, “The video feed is available to user A through the system platform, where video analysis can be done by cluster or grid computer. User Bis pointing the camera at user A, and thus the processed video contains features of facial expression of user A as well as features related to their body language. User A is using the cloud platform to estimate the user's bio state for the purpose of tracking the user's cognitive and affective state” Coleman ‘655, Fig. 1, Present – Notification Rules, Notifications or Stimulus; Fig. 69, In-Brainstate Notification, Off-Brainstate Notification; para. 59, “Goal states may include: mindfulness, focused attention, open presence (open monitoring), positive emotions, and visualization, among others. Each of these goal states may be referred to as a meditative brain state, but the present invention is not limited only to meditative brain states.”).
Regarding claim 2, Coleman ‘655 teaches the information processing apparatus according to claim 1, wherein the screen further includes a graph that indicates each of the first time period and the second time period (Coleman ‘010, para. 77, “The mobile app provides feedback to the user through visual screen graphics… feedback. The user receives feedback for each part of the user's session that is computed from the user's EEG signal. These computed results can be either aggregated across parts or across the entire session.” Coleman ‘655, para. 254, “The computer system may track the user's interaction with the practice/meditation and based on this the analyzer may calculate results based on the user's brain data. This may be displayed for example in a graph or results screen that may provide the user feedback or insight on a number of matters relevant to meditation related objectives. For example, the graph may include information that indicates whether the user did well or badly and at what points and this may be brought to the next exercise to improve the results.” Para. 276, “Examples of data views which may be provided by the system include: time vs. score line graph; percentage of time in target/other states pie chart”. It is further noted that the data displayed in the screen amounts to nonfunctional descriptive material that will not distinguish the claimed invention from the prior art in terms of patentability because the data does not functionally relate to the substrate. See MPEP 2111.05.).
Regarding claim 3, Coleman ‘655 teaches the information processing apparatus according to claim 1, wherein the screen further includes a display part that presents a percentage of the first time period in which the user is in the mindful state with respect to the plurality of time periods (Coleman ‘655, para. 276, “Examples of data views which may be provided by the system include:… percentage of time in target/other states pie chart”. It is further noted that the data displayed in the screen amounts to nonfunctional descriptive material that will not distinguish the claimed invention from the prior art in terms of patentability because the data does not functionally relate to the substrate. See MPEP 2111.05.).
Regarding claim 4, Coleman ‘655 teaches the information processing apparatus according to claim 1, wherein the screen further includes a display part that presents a percentage of the first time period in a first half and a second half of a specific period of the plurality of time periods (Coleman ‘655, para. 276, “Examples of data views which may be provided by the system include: time vs. score line graph; percentage of time in target/other states pie chart; bar graph comparing chunks of time (beg, mid, end) across one session”. It is further noted that the data displayed in the screen amounts to nonfunctional descriptive material that will not distinguish the claimed invention from the prior art in terms of patentability because the data does not functionally relate to the substrate. See MPEP 2111.05.).
Regarding claim 5, Coleman ‘655 teaches the information processing apparatus according to claim 1, wherein the screen further includes a display part that presents a number of transitions of a state of the user between the mindful state and the state that is different from the mindful state (Coleman ‘655, para. 276, “Examples of data views which may be provided by the system include: time vs. score line graph; percentage of time in target/other states pie chart”. It is further noted that the data displayed in the screen amounts to nonfunctional descriptive material that will not distinguish the claimed invention from the prior art in terms of patentability because the data does not functionally relate to the substrate. See MPEP 2111.05.).
Regarding claim 6, Coleman ‘655 teaches the information processing apparatus according to claim 1, wherein
the screen further includes a display part that presents a maximum duration of the first time period in which the user is in the mindful state among the plurality of time periods (Coleman ‘655, para. 276, “Examples of data views which may be provided by the system include: time vs. score line graph”. It is further noted that the data displayed in the screen amounts to nonfunctional descriptive material that will not distinguish the claimed invention from the prior art in terms of patentability because the data does not functionally relate to the substrate. See MPEP 2111.05.),
the plurality of time periods includes a plurality of first time periods and a plurality of second time periods (Coleman ‘655, para. 61, “During a brain state guidance exercise, the system may compare the user's current brain wave data against these statistical distributions to determine a busy-mind score, such as a score on a continuum from quiet-mind 0 to busy-mind 1, which may be an estimate of macrostate. The busy-mind score may be recalculated at an interval, such as every 1/10 of a second. The continuum may be quantized into a number of segments and the system may vary the UI element or scene for the brain state guidance exercise based on the quantization segment, thereby providing a real-time brain state guidance indication to the user. As the brain state guidance exercise continues, the system may repeatedly update the brain state guidance indication in this way.” Para. 290, “ABCN helps the trainee to engage in more of these meta-cognitive repetitions than they would have if left unsupported, yielding decreased time delay in noticing disengagement from exercise and increased time in selected brainstate (see FIG. 61).”),
the user is in the mindful state in each time period of the plurality of first time periods (Coleman ‘655, para. 61, “During a brain state guidance exercise, the system may compare the user's current brain wave data against these statistical distributions to determine a busy-mind score, such as a score on a continuum from quiet-mind 0 to busy-mind 1, which may be an estimate of macrostate. The busy-mind score may be recalculated at an interval, such as every 1/10 of a second. The continuum may be quantized into a number of segments and the system may vary the UI element or scene for the brain state guidance exercise based on the quantization segment, thereby providing a real-time brain state guidance indication to the user. As the brain state guidance exercise continues, the system may repeatedly update the brain state guidance indication in this way.” Para. 290, “ABCN helps the trainee to engage in more of these meta-cognitive repetitions than they would have if left unsupported, yielding decreased time delay in noticing disengagement from exercise and increased time in selected brainstate (see FIG. 61).”),
the user is in the state different from the mindful state in each time period of the plurality of second time periods (Coleman ‘655, para. 61, “During a brain state guidance exercise, the system may compare the user's current brain wave data against these statistical distributions to determine a busy-mind score, such as a score on a continuum from quiet-mind 0 to busy-mind 1, which may be an estimate of macrostate. The busy-mind score may be recalculated at an interval, such as every 1/10 of a second. The continuum may be quantized into a number of segments and the system may vary the UI element or scene for the brain state guidance exercise based on the quantization segment, thereby providing a real-time brain state guidance indication to the user. As the brain state guidance exercise continues, the system may repeatedly update the brain state guidance indication in this way.” Para. 290, “ABCN helps the trainee to engage in more of these meta-cognitive repetitions than they would have if left unsupported, yielding decreased time delay in noticing disengagement from exercise and increased time in selected brainstate (see FIG. 61).”),
the plurality of first time periods includes the first time period (Coleman ‘655, para. 61, “During a brain state guidance exercise, the system may compare the user's current brain wave data against these statistical distributions to determine a busy-mind score, such as a score on a continuum from quiet-mind 0 to busy-mind 1, which may be an estimate of macrostate. The busy-mind score may be recalculated at an interval, such as every 1/10 of a second. The continuum may be quantized into a number of segments and the system may vary the UI element or scene for the brain state guidance exercise based on the quantization segment, thereby providing a real-time brain state guidance indication to the user. As the brain state guidance exercise continues, the system may repeatedly update the brain state guidance indication in this way.” Para. 290, “ABCN helps the trainee to engage in more of these meta-cognitive repetitions than they would have if left unsupported, yielding decreased time delay in noticing disengagement from exercise and increased time in selected brainstate (see FIG. 61).”), and
the plurality of second time periods includes the second time period (Coleman ‘655, para. 61, “During a brain state guidance exercise, the system may compare the user's current brain wave data against these statistical distributions to determine a busy-mind score, such as a score on a continuum from quiet-mind 0 to busy-mind 1, which may be an estimate of macrostate. The busy-mind score may be recalculated at an interval, such as every 1/10 of a second. The continuum may be quantized into a number of segments and the system may vary the UI element or scene for the brain state guidance exercise based on the quantization segment, thereby providing a real-time brain state guidance indication to the user. As the brain state guidance exercise continues, the system may repeatedly update the brain state guidance indication in this way.” Para. 290, “ABCN helps the trainee to engage in more of these meta-cognitive repetitions than they would have if left unsupported, yielding decreased time delay in noticing disengagement from exercise and increased time in selected brainstate (see FIG. 61).”).
Regarding claim 8, Coleman ‘655 teaches the information processing apparatus according to claim 1, wherein
the at least one imaging device is further configured to capture a third moving image at a third time period in which a state of the user is transitioned between the mindful state and the state different from the mindful state (Coleman ‘655, para. 345-347, “ABCN [Adaptive Brainstate Change Notification] neurofeedback, also called neurotherapy or neurobiofeedback, is a type of biofeedback that uses real time displays of electroencephalography (EEG) or hemoencephalography (HEG) to illustrate brain activity and teach self-regulation. EEG neurofeedback uses sensors that are placed on the scalp to measure brain waves, while HEG neurofeedback uses infrared (IR) sensors or functional magnetic resonance imaging (fMRI) to measure brain blood flow. FNIRS functional near-infrared spectroscopy is a form of neurofeedback (HEG) for the purpose of functional neuroimaging. Using fNIR, brain activity is measured through hemodynamic responses associated with neuron behavior. fNIR is a non-invasive imaging method involving the quantification of chromophore concentration resolved from the measurement of near infrared (NIR) light attenuation, temporal or phasic changes.”),
the plurality of time periods includes the third time period (Coleman ‘655, para. 61, “During a brain state guidance exercise, the system may compare the user's current brain wave data against these statistical distributions to determine a busy-mind score, such as a score on a continuum from quiet-mind 0 to busy-mind 1, which may be an estimate of macrostate. The busy-mind score may be recalculated at an interval, such as every 1/10 of a second. The continuum may be quantized into a number of segments and the system may vary the UI element or scene for the brain state guidance exercise based on the quantization segment, thereby providing a real-time brain state guidance indication to the user. As the brain state guidance exercise continues, the system may repeatedly update the brain state guidance indication in this way.” Para. 290, “ABCN helps the trainee to engage in more of these meta-cognitive repetitions than they would have if left unsupported, yielding decreased time delay in noticing disengagement from exercise and increased time in selected brainstate (see FIG. 61).”),
the plurality of moving images includes the third moving image,
the at least one processor is further configured to:
determine, based on the biological information, a transition of the state of the user between the mindful state and the state different from the mindful state (Coleman ‘655, Fig. 1, 2. Analyze - feature extractor, 3. Interpret – Brain state estimator; Fig. 69, In-State Mindful Attention); and
acquire, based on the determination of the transition of the state of the user, the third moving image that indicates the transition between the mindful state and the state different from the mindful state (Coleman ‘655, Fig. 69, ABCN -- In-Brainstate Notification – Off-Brainstate Notification –(Adaptive EEG monitoring)— In-Brainstate Notification. As identified with earlier claim limitations, the monitoring and image acquisition is continuous throughout the session and thus includes transitions.), and
the screen further includes the third moving image (Coleman ‘655, Fig. 1, Present – Notification Rules, Notifications or Stimulus; Fig. 69, In-Brainstate Notification, Off-Brainstate Notification; para. 59, “Goal states may include: mindfulness, focused attention, open presence (open monitoring), positive emotions, and visualization, among others. Each of these goal states may be referred to as a meditative brain state, but the present invention is not limited only to meditative brain states.”).
Regarding claim 9, Coleman ‘655 teaches the information processing apparatus according to claim 1, wherein
the screen further includes a writeable user-input field (Coleman ‘655, para. 255, “As shown in FIGS. 37-39, the computer program may include a journal that allows the user to record insights regarding why meditation related objective were or were not met.”),
the writeable user-input field includes a user-input that indicates a memo associated with a learning of mindfulness (Coleman ‘655, para. 255, “As shown in FIGS. 37-39, the computer program may include a journal that allows the user to record insights regarding why meditation related objective were or were not met.”).
Regarding claim 10, Coleman ‘655 teaches the information processing apparatus according to claim 1, wherein the screen further includes a display part that presents a level of the user in learning of mindfulness (Coleman ‘655, para. 285, “This mode represents a more clear ‘game’ mode, where users have to accomplish certain goals to proceed through a linear series of ‘levels’.” It is further noted that the data displayed in the screen amounts to nonfunctional descriptive material that will not distinguish the claimed invention from the prior art in terms of patentability because the data does not functionally relate to the substrate. See MPEP 2111.05.).
Regarding claim 11, Coleman ‘655 teaches the information processing apparatus according to claim 1, wherein
the screen further includes a graph that indicates a percentage of a plurality of first time periods with respect to the plurality of time periods in time series (Coleman ‘655, para. 276, “Long-term data modes may also be provided by the system to allow users to see their session history in interesting ways. Examples of these modes may include: scrollable month-by-month views which show individual sessions on an absolute scale (showing where users' calibration was, and where their performance during the session fit in context of that calibration); Insights screen which provides information relevant to the user's life which can be gleaned from analysis of usage/performance data; and calendar-style view which enables users to view the following information by month or week (e.g. number of sessions; total performance; average performance; and time spent practicing).” It is further noted that the data displayed in the screen amounts to nonfunctional descriptive material that will not distinguish the claimed invention from the prior art in terms of patentability because the data does not functionally relate to the substrate. See MPEP 2111.05.),
the plurality of time periods includes the plurality of first time periods and a plurality of second time periods (Coleman ‘655, para. 61, “During a brain state guidance exercise, the system may compare the user's current brain wave data against these statistical distributions to determine a busy-mind score, such as a score on a continuum from quiet-mind 0 to busy-mind 1, which may be an estimate of macrostate. The busy-mind score may be recalculated at an interval, such as every 1/10 of a second. The continuum may be quantized into a number of segments and the system may vary the UI element or scene for the brain state guidance exercise based on the quantization segment, thereby providing a real-time brain state guidance indication to the user. As the brain state guidance exercise continues, the system may repeatedly update the brain state guidance indication in this way.” Para. 290, “ABCN helps the trainee to engage in more of these meta-cognitive repetitions than they would have if left unsupported, yielding decreased time delay in noticing disengagement from exercise and increased time in selected brainstate (see FIG. 61).”),
the user is in the mindful state in each time period of the plurality of first time periods (Coleman ‘655, para. 61 and para. 290, as quoted above),
the user is in the state different from the mindful state in each time period of the plurality of second time periods (Coleman ‘655, para. 61 and para. 290, as quoted above),
the plurality of first time periods includes the first time period (Coleman ‘655, para. 61 and para. 290, as quoted above), and
the plurality of second time periods includes the second time period (Coleman ‘655, para. 61 and para. 290, as quoted above).
Regarding claim 12, Coleman ‘655 teaches the information processing apparatus according to claim 1, wherein
the displayed screen corresponds to a screen for a teacher who teaches mindfulness to a plurality of users (Coleman ‘655, para. 265, “the computer program may enable the user to access a "teacher console" which could be part of the computer program or a separate computer system, resource or computer program, that enables a teach or instructor to view data of their students”),
the plurality of users includes the user (Coleman ‘655, para. 265, as quoted above),
the displayed screen presents a percentage of each time period of a plurality of first time periods of the plurality of users with respect to a plurality of periods (Coleman ‘655, para. 265, as quoted above; para. 276, “Examples of data views which may be provided by the system include:… percentage of time in target/other states pie chart”. It is further noted that the data displayed in the screen amounts to nonfunctional descriptive material that will not distinguish the claimed invention from the prior art in terms of patentability because the data does not functionally relate to the substrate. See MPEP 2111.05.),
the plurality of periods includes the plurality of first time periods (Coleman ‘655, para. 61 and para. 290, as quoted above),
each user of the plurality of users is in the mindful state in a respective time period of the plurality of first time periods (Coleman ‘655, para. 61 and para. 290, as quoted above), and
the plurality of first time periods includes the first time period (Coleman ‘655, para. 61 and para. 290, as quoted above).
Regarding claim 13, Coleman ‘655 teaches the information processing apparatus according to claim 1, wherein
the displayed screen corresponds to a screen for a teacher who teaches mindfulness to a plurality of users (Coleman ‘655, para. 265, as quoted above),
the plurality of users includes the user (Coleman ‘655, para. 265, as quoted above),
the displayed screen presents a number of transitions of a state of each user of the plurality of users between the mindful state and the state different from the mindful state (Coleman ‘655, para. 265, as quoted above; para. 276, “Examples of data views which may be provided by the system include: time vs. score line graph; percentage of time in target/other states pie chart”. The data displayed in the screen amounts to nonfunctional descriptive material, as noted above. See MPEP 2111.05.).
Regarding claim 14, Coleman ‘655 teaches the information processing apparatus according to claim 1, wherein
the displayed screen corresponds to a screen for a teacher who teaches mindfulness to a plurality of users (Coleman ‘655, para. 265, as quoted above),
the plurality of users includes the user (Coleman ‘655, para. 265, as quoted above),
the displayed screen presents a maximum duration of each first time period of a plurality of first time periods among a plurality of periods (Coleman ‘655, para. 265, as quoted above; para. 276, “Examples of data views which may be provided by the system include: time vs. score line graph; percentage of time in target/other states pie chart; bar graph comparing chunks of time (beg, mid, end) across one session”. The data displayed in the screen amounts to nonfunctional descriptive material, as noted above. See MPEP 2111.05.),
the plurality of periods includes the plurality of first time periods and a plurality of second time periods (Coleman ‘655, para. 61 and para. 290, as quoted above),
each user of the plurality of users is in the mindful state in a respective first time period of the plurality of first time periods (Coleman ‘655, para. 61 and para. 290, as quoted above),
each user of the plurality of users is in the state different from the mindful state in a respective second time period of the plurality of second time periods (Coleman ‘655, para. 61 and para. 290, as quoted above),
the plurality of first time periods includes the first time period (Coleman ‘655, para. 61 and para. 290, as quoted above), and
the plurality of second time periods includes the second time period (Coleman ‘655, para. 61 and para. 290, as quoted above).
Regarding claim 15, Coleman ‘655 teaches the information processing apparatus according to claim 1, wherein
the displayed screen corresponds to a screen for a teacher who teaches mindfulness to a plurality of users (Coleman ‘655, para. 265, as quoted above),
the plurality of users includes the user (Coleman ‘655, para. 265, as quoted above),
the displayed screen further includes a graph of a percentage of each time period of a plurality of first time periods of the plurality of users with respect to a plurality of periods (Coleman ‘655, para. 265 and para. 276, as quoted above. The data displayed in the screen amounts to nonfunctional descriptive material, as noted above. See MPEP 2111.05.),
the plurality of periods includes the plurality of first time periods and a plurality of second time periods (Coleman ‘655, para. 61 and para. 290, as quoted above),
each user of the plurality of users is in the mindful state in a respective first time period of the plurality of first time periods (Coleman ‘655, para. 61 and para. 290, as quoted above),
each user of the plurality of users is in the state different from the mindful state in a respective second time period of the plurality of second time periods (Coleman ‘655, para. 61 and para. 290, as quoted above),
the plurality of first time periods includes the first time period (Coleman ‘655, para. 61 and para. 290, as quoted above),
the plurality of second time periods includes the second time period (Coleman ‘655, para. 61 and para. 290, as quoted above), and
the graph is superimposed on the first moving image and the second moving image (Coleman ‘655, para. 265 and para. 276, as quoted above. The data displayed in the screen amounts to nonfunctional descriptive material, as noted above. See MPEP 2111.05.).
Regarding claim 16, Coleman ‘655 teaches the information processing apparatus according to claim 1, wherein
the displayed screen corresponds to a screen for a teacher who teaches mindfulness to a plurality of users (Coleman ‘655, para. 265, as quoted above),
the plurality of users includes the user (Coleman ‘655, para. 265, as quoted above),
the displayed screen further includes a graph of a number of transitions of a state of each user of the plurality of users between the mindful state and the state different from the mindful state (Coleman ‘655, para. 265 and para. 276, as quoted above. The data displayed in the screen amounts to nonfunctional descriptive material, as noted above. See MPEP 2111.05.), and
the graph is superimposed on the first moving image and the second moving image (Coleman ‘655, para. 265 and para. 276, as quoted above. The data displayed in the screen amounts to nonfunctional descriptive material, as noted above. See MPEP 2111.05.).
Regarding claim 17, Coleman ‘655 teaches the information processing apparatus according to claim 1, wherein
the displayed screen corresponds to a screen for a teacher who teaches mindfulness to a plurality of users (Coleman ‘655, para. 265, “the computer program may enable the user to access a "teacher console" which could be part of the computer program or a separate computer system, resource or computer program, that enables a teach or instructor to view data of their students”),
the plurality of users includes the user (Coleman ‘655, para. 265, “the computer program may enable the user to access a "teacher console" which could be part of the computer program or a separate computer system, resource or computer program, that enables a teach or instructor to view data of their students”),
the displayed screen further includes a graph of maximum duration of each time period of a plurality of first time periods among a plurality of periods (Coleman ‘655, para. 265, “the computer program may enable the user to access a "teacher console" which could be part of the computer program or a separate computer system, resource or computer program, that enables a teach or instructor to view data of their students”; para. 276, “Examples of data views which may be provided by the system include: time vs. score line graph; percentage of time in target/other states pie chart; bar graph comparing chunks of time (beg, mid, end) across one session”. It is further noted that the data displayed in the screen amounts to nonfunctional descriptive material that will not distinguish the claimed invention from the prior art in terms of patentability because the data does not functionally relate to the substrate. See MPEP 2111.05.),
the plurality of periods includes the plurality of first time periods and a plurality of second time periods (Coleman ‘655, para. 61, “During a brain state guidance exercise, the system may compare the user's current brain wave data against these statistical distributions to determine a busy-mind score, such as a score on a continuum from quiet-mind 0 to busy-mind 1, which may be an estimate of macrostate. The busy-mind score may be recalculated at an interval, such as every 1/10 of a second. The continuum may be quantized into a number of segments and the system may vary the UI element or scene for the brain state guidance exercise based on the quantization segment, thereby providing a real-time brain state guidance indication to the user. As the brain state guidance exercise continues, the system may repeatedly update the brain state guidance indication in this way.” Para. 290, “ABCN helps the trainee to engage in more of these meta-cognitive repetitions than they would have if left unsupported, yielding decreased time delay in noticing disengagement from exercise and increased time in selected brainstate (see FIG. 61).”),
each user of the plurality of users is in the mindful state in a respective first time period of the plurality of first time periods (Coleman ‘655, para. 61, “During a brain state guidance exercise, the system may compare the user's current brain wave data against these statistical distributions to determine a busy-mind score, such as a score on a continuum from quiet-mind 0 to busy-mind 1, which may be an estimate of macrostate. The busy-mind score may be recalculated at an interval, such as every 1/10 of a second. The continuum may be quantized into a number of segments and the system may vary the UI element or scene for the brain state guidance exercise based on the quantization segment, thereby providing a real-time brain state guidance indication to the user. As the brain state guidance exercise continues, the system may repeatedly update the brain state guidance indication in this way.” Para. 290, “ABCN helps the trainee to engage in more of these meta-cognitive repetitions than they would have if left unsupported, yielding decreased time delay in noticing disengagement from exercise and increased time in selected brainstate (see FIG. 61).”),
each user of the plurality of users is in the state different from the mindful state in a respective second time period of the plurality of second time periods (Coleman ‘655, para. 61, “During a brain state guidance exercise, the system may compare the user's current brain wave data against these statistical distributions to determine a busy-mind score, such as a score on a continuum from quiet-mind 0 to busy-mind 1, which may be an estimate of macrostate. The busy-mind score may be recalculated at an interval, such as every 1/10 of a second. The continuum may be quantized into a number of segments and the system may vary the UI element or scene for the brain state guidance exercise based on the quantization segment, thereby providing a real-time brain state guidance indication to the user. As the brain state guidance exercise continues, the system may repeatedly update the brain state guidance indication in this way.” Para. 290, “ABCN helps the trainee to engage in more of these meta-cognitive repetitions than they would have if left unsupported, yielding decreased time delay in noticing disengagement from exercise and increased time in selected brainstate (see FIG. 61).”),
the plurality of first time periods includes the first time period (Coleman ‘655, para. 61, “During a brain state guidance exercise, the system may compare the user's current brain wave data against these statistical distributions to determine a busy-mind score, such as a score on a continuum from quiet-mind 0 to busy-mind 1, which may be an estimate of macrostate. The busy-mind score may be recalculated at an interval, such as every 1/10 of a second. The continuum may be quantized into a number of segments and the system may vary the UI element or scene for the brain state guidance exercise based on the quantization segment, thereby providing a real-time brain state guidance indication to the user. As the brain state guidance exercise continues, the system may repeatedly update the brain state guidance indication in this way.” Para. 290, “ABCN helps the trainee to engage in more of these meta-cognitive repetitions than they would have if left unsupported, yielding decreased time delay in noticing disengagement from exercise and increased time in selected brainstate (see FIG. 61).”),
the plurality of second time periods includes the second time period (Coleman ‘655, para. 61, “During a brain state guidance exercise, the system may compare the user's current brain wave data against these statistical distributions to determine a busy-mind score, such as a score on a continuum from quiet-mind 0 to busy-mind 1, which may be an estimate of macrostate. The busy-mind score may be recalculated at an interval, such as every 1/10 of a second. The continuum may be quantized into a number of segments and the system may vary the UI element or scene for the brain state guidance exercise based on the quantization segment, thereby providing a real-time brain state guidance indication to the user. As the brain state guidance exercise continues, the system may repeatedly update the brain state guidance indication in this way.” Para. 290, “ABCN helps the trainee to engage in more of these meta-cognitive repetitions than they would have if left unsupported, yielding decreased time delay in noticing disengagement from exercise and increased time in selected brainstate (see FIG. 61).”), and
the graph is superimposed on the first moving image and the second moving image (Coleman ‘655, para. 265, “the computer program may enable the user to access a "teacher console" which could be part of the computer program or a separate computer system, resource or computer program, that enables a teach or instructor to view data of their students”; para. 276, “Examples of data views which may be provided by the system include: time vs. score line graph; percentage of time in target/other states pie chart”. It is further noted that the data displayed in the screen amounts to nonfunctional descriptive material that will not distinguish the claimed invention from the prior art in terms of patentability because the data does not functionally relate to the substrate. See MPEP 2111.05.).
Regarding claim 18, Coleman ‘655 teaches the information processing apparatus according to claim 1, wherein
the displayed screen corresponds to a screen for a teacher who teaches mindfulness to a plurality of users (Coleman ‘655, para. 265, “the computer program may enable the user to access a "teacher console" which could be part of the computer program or a separate computer system, resource or computer program, that enables a teach or instructor to view data of their students”),
the plurality of users includes the user (Coleman ‘655, para. 265, “the computer program may enable the user to access a "teacher console" which could be part of the computer program or a separate computer system, resource or computer program, that enables a teach or instructor to view data of their students”),
the displayed screen further includes a writeable user-input field (Coleman ‘655, para. 255, “As shown in FIGS. 37-39, the computer program may include a journal that allows the user to record insights regarding why meditation related objective were or were not met.”), and
the writeable user-input field includes a user-input by the teacher that indicates a memo associated with the teaching for the mindfulness (Coleman ‘655, para. 255, “As shown in FIGS. 37-39, the computer program may include a journal that allows the user to record insights regarding why meditation related objective were or were not met.”).
Response to Arguments
Applicant’s arguments with respect to the specification have been fully considered but are persuasive only in part. Applicant asserts that the title and specification have been amended to overcome the objections.
The amendment to the title obviates the objection to the title; that objection is therefore withdrawn. However, while the amendments correct grammatical and idiomatic errors in several paragraphs, they do not address the entire specification, which remains replete with such errors. The objection to the specification is therefore maintained.
Applicant’s arguments with respect to interpretation under 35 U.S.C. 112(f) have been fully considered but they are not persuasive. Applicant asserts that the interpretation should be withdrawn because the claims have been amended.
Examiner is not persuaded. Applicant is directed to the interpretations under 35 U.S.C. 112(f), which have been updated to address the amendments.
Applicant’s arguments with respect to the rejections of the claims under 35 U.S.C. 112(b) have been fully considered but they are not persuasive. Applicant asserts that the rejections should be withdrawn because the claims have been amended.
Examiner is not persuaded. Applicant is directed to the rejections which have been updated to address the amendments.
Applicant’s arguments with respect to the rejections of the claims under 35 U.S.C. 112(a) have been fully considered but they are not persuasive. Applicant asserts that the rejections should be withdrawn because the claims have been amended.
Examiner is not persuaded. Applicant is directed to the rejections which have been updated to address the amendments.
Applicant’s arguments with respect to the rejections of the claims under 35 U.S.C. 101 have been fully considered but they are not persuasive. Applicant asserts that the amended claims cannot be classified as a mental process or a certain method of organizing human activity.
Examiner is not persuaded. Applicant is directed to the rejection which has been updated to address the amendments.
On pages 26-27, Applicant also asserts that the claimed information processing apparatus causes an improvement in the determination of next learning steps for mindfulness by presenting a screen that collectively displays different moving images indicating temporal changes in the mindful states of the user.
Examiner is not persuaded. Applicant’s assertion only further confirms that the claimed invention is directed to a judicial exception without significantly more. The asserted improvement lies within the judicial exception itself, which the courts have repeatedly identified as merely an improvement within patent-ineligible subject matter and not a technological improvement (see, for example, at least the precedential decision in SAP America, Inc. v. InvestPic, LLC, 890 F.3d 1016, 126 USPQ2d 1638 (Fed. Cir. 2018)).
On pages 27-28, Applicant asserts that the features of amended claim 1 describe an unconventional activity using conventional elements.
Examiner is not persuaded. The claim merely describes collecting information, analyzing the collected information, and outputting the results of the analysis in a traditional feedback process, which is fully encompassed by the judicial exception.
Applicant’s arguments with respect to the rejections of the claims under 35 U.S.C. 102 have been fully considered but they are not persuasive. Applicant asserts that Coleman does not teach the amended claims.
Examiner is not persuaded. Applicant is directed to the rejections which have been updated to address the amendments.
The rejections stand.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DANIEL LANE whose telephone number is (303)297-4311. The examiner can normally be reached Monday - Friday 8:00 - 4:30 MT.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Xuan Thai can be reached at (571) 272-7147. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/D.L./Examiner, Art Unit 3715
/XUAN M THAI/Supervisory Patent Examiner, Art Unit 3715