Prosecution Insights
Last updated: April 19, 2026
Application No. 19/105,381

INFORMATION PROCESSING APPARATUS, SYSTEM, INFORMATION PROCESSING METHOD, AND NON-TRANSITORY COMPUTER-READABLE MEDIUM

Non-Final OA (§102, §103)

Filed: Feb 21, 2025
Examiner: MARTINEZ QUILES, IVELISSE
Art Unit: 2626
Tech Center: 2600 — Communications
Assignee: NEC Corporation
OA Round: 1 (Non-Final)
Grant Probability: 72% (Favorable)
OA Rounds: 1-2
To Grant: 2y 2m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 72% (303 granted / 421 resolved; +10.0% vs TC avg; above average)
Interview Lift: +27.0% (resolved cases with interview)
Avg Prosecution: 2y 2m (typical timeline); 23 applications currently pending
Total Applications: 444 (career history, across all art units)

Statute-Specific Performance

§101: 1.9% (-38.1% vs TC avg)
§103: 48.9% (+8.9% vs TC avg)
§102: 19.3% (-20.7% vs TC avg)
§112: 22.6% (-17.4% vs TC avg)

Tech Center averages are estimates. Based on career data from 421 resolved cases.

Office Action (§102, §103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-18 are pending. Claims 1, 3-6, 8-10, 12-16 and 18 are amended.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 02/21/2025 is being considered by the examiner.

Claim Objections

Claims 1-18 are objected to because of the following informalities:

Claim 1, lines 8-9, recite “generating state information relating to a state of the subject by use of the measurement information and the attribute information”. To clarify the claim language, the Examiner suggests “generating state information relating to a state of the subject based on the measurement information and the attribute information”.

Claim 13, line 3, recites “a plurality of the models”. To correct antecedent issues, the Examiner suggests “a plurality of models”.

Claim 16, lines 3-4, recite “a brain wave and a vital sign of the subject”. The Examiner suggests “the brain wave and the vital sign of the subject”, since the phrase previously appears in claim 1.

Claim 17, lines 5-6, recite “generating state information relating to a state of the subject by use of the measurement information and the attribute information”. To clarify the claim language, the Examiner suggests “generating state information relating to a state of the subject based on the measurement information and the attribute information”.

Claim 18, lines 6-7, recite “generating state information relating to a state of the subject by use of the measurement information and the attribute information”. To clarify the claim language, the Examiner suggests “generating state information relating to a state of the subject based on the measurement information and the attribute information”.
Claims 2-12 and 14-15 depend directly or indirectly from an objected claim and are therefore also objected to. Appropriate correction is required.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-3, 6-7, 12 and 16-18 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Tomonaga (US 20200057949 A1).

Regarding Claim 1, Tomonaga teaches an information processing apparatus (see abstract, Fig. 2, para. [0018], para. [0041]. Learning material recommendation device) comprising: at least one memory storing instructions (see para. [0015]. A learning material recommendation program stored in a computer readable recording medium); and at least one processor configured to execute the instructions to perform operations (see para. [0015] A learning material recommendation program stored in a computer readable recording medium according to the present invention performs, when executed by a processor) comprising: acquiring measurement information indicating at least one of a brain wave and a vital sign of a subject during learning (see para.
[0036], [0038], [0083] The learning material recommendation device estimates the learner's “concentration degree” during learning from the learner's behavioral data during the learning (for example, biological data measured by a wearable terminal, terminal operation log data measured by using a web application), and estimates the learner's “growth” before and after the learning from the “concentration degree”. The concentration degree learning unit 47 may measure the learner's biological data during learning (time-series data on amount of sweating, eye movement, heart rate, blood pressure, electromyogram, etc.) by a wearable sensor or the like, and add the biological data to the feature of the learner), and attribute information of the subject (see para. [0044]-[0046], para. [0055], [0104]-[0108], Fig. 3. The learner data pre-processing unit 41 refers to the learner data storage unit 31 to read an attribute record related to a learner (hereinafter, referred to as “learner attribute record”). The learner data storage unit 31 preserves learner data. The learner data is attribute data related to the learner. Examples of the learner data include demographic data such as name, age, gender, etc., learning history data, social network service (SNS) data related to the learner (self-introduction, learning goal, etc.), and learner's behavioral data during learning (biological data measured by a wearable terminal, terminal operation log data measured by using a web application, etc.). In the example shown in FIG. 3, the learner data includes data on name, age, gender, self-introduction, and learning target corresponding to a learner ID); generating state information relating to a state of the subject by use of the measurement information and the attribute information (see para. [0101]-[0113], para. [0115]. The concentration degree estimation unit 48 computes a score of the predicted concentration degree. That is, the concentration degree estimation unit 48 uses the learner feature vector generated in the processing in step S203 and the learning material feature vector generated in the processing in step S205 to calculate a score of the predicted concentration degree when the learner studies the learning material. The score of the predicted concentration degree is a real value in the range of [0, 100]. By way of example, the score of the predicted concentration degree is a numerical value called probability (certainty, reliability) of a support vector machine known as one of the pattern recognition models); and outputting the state information (see para. [0114]-[0115]. The concentration degree estimation unit 48 performs writing of the score of the predicted concentration degree. That is, the concentration degree estimation unit 48 writes the score of the predicted concentration degree computed in the processing in step S206 into the concentration degree storage unit 36 in a data format of <learner ID, learning material ID, score of predicted concentration degree>).

Regarding Claim 2, Tomonaga teaches the information processing apparatus according to claim 1. Tomonaga further teaches wherein the state information relates to at least one of a state of the subject at a measurement time point where the measurement information is acquired, and a state of the subject after the measurement time point (see para. [0036]-[0038] and para. [0083]. It is assumed that there is a strong correlation between the “concentration degree” indicating the degree of concentration of the learner on the learning material during learning and the learner's “growth” before and after the learning. The learning material recommendation device estimates the learner's “concentration degree” during learning from the learner's behavioral data during the learning (for example, biological data measured by a wearable terminal, terminal operation log data measured by using a web application), and estimates the learner's “growth” before and after the learning from the “concentration degree”. The learner's “growth” during learning is computed by applying a regression model (with a correlation coefficient assumed to be 1) to the learner's “concentration degree” during learning. Further, the concentration degree learning unit 47 may measure the learner's biological data during learning (time-series data on amount of sweating, eye movement, heart rate, blood pressure, electromyogram, etc.) by a wearable sensor or the like, and add the biological data to the feature of the learner. In this case, the concentration degree learning unit 47 may convert the time-series data into spatial data by using a well-known technique).

Regarding Claim 3, Tomonaga teaches the information processing apparatus according to claim 1. Tomonaga further teaches wherein the state information indicates at least one of an emotion, a concentration degree, and an understanding degree of the subject (see para. [0111]-[0113]. The concentration degree estimation unit 48 uses the learner feature vector generated in the processing in step S203 and the learning material feature vector generated in the processing in step S205 to calculate a score of the predicted concentration degree when the learner studies the learning material).

Regarding Claim 6, Tomonaga teaches the information processing apparatus according to claim 2.
Tomonaga further teaches wherein generating the state information comprises generating the state information relating to a learning state of the subject after the measurement time point by further use of learning information indicating at least a learning content at the measurement time point (see para. [0122]-[0132]. As explained above, it is assumed that there is a strong correlation between the learner's “concentration degree” during learning and the learner's “growth” before and after the learning. In the present embodiment, by way of example, the estimated score of the predicted concentration degree is used, as it is, as the predicted growth score (equivalent to the regression model with the correlation coefficient being 1). In step S305 (para. [0128]), the learning material recommendation unit 49 judges the learner's learning state from the predicted growth score and the actual comprehension score. That is, the learning material recommendation unit 49 judges the learner's learning state for the <learner ID, learning material ID> pair from the actual comprehension score acquired in the processing in step S303 and the predicted growth score calculated in the processing in step S304. The learning state means the progress or achievement of learning with the comprehension and the growth used as indices. In the present embodiment, the learning material recommendation unit 49 uses a judgment table as shown in FIG. 1, for example, when judging the learning state. Each of the actual comprehension score and the predicted growth score is a real value in the range of [0, 100]. The learning material recommendation unit 49 uses a threshold value set in advance, to binarize (classify into high or low) each of the actual comprehension score and the predicted growth score. In step S306, the learning material recommendation unit 49 recommends a learning material that should be learned next, from a learning process table and the learner's learning state. The learning material recommendation unit 49 recommends a learning material that should be learned next by a learner, from the learner's learning state judged in the processing in step S305 and the learning process table created in advance).

Regarding Claim 7, Tomonaga teaches the information processing apparatus according to claim 6. Tomonaga further teaches wherein the state information includes advice information relating to instruction to the subject (see Fig. 11 and para. [0131]-[0133]. The learning material recommendation unit 49 recommends a learning material that should be learned next by a learner, from the learner's learning state judged in the processing in step S305 and the learning process table created in advance. The learning process table is a table that sets, for learning materials to be learned by a learner, which learning material should be selected next in accordance with the learner's learning state).

Regarding Claim 12, Tomonaga teaches the information processing apparatus according to claim 1. Tomonaga further teaches wherein generating the state information comprises generating the state information by use of a model generated by machine learning (see para. [0069]-[0070]. In the (A) concentration degree learning step, the concentration degree learning unit 47 models, by machine learning, the relationship among the learner feature vector generated by the learner attribute feature extraction unit 45, the learning material feature vector generated by the learning material attribute feature extraction unit 46, and the actual concentration degree score generated by the concentration degree log pre-processing unit 43, to generate a prediction model. The concentration degree learning unit 47 stores the prediction model in the prediction model storage unit 35. In the (B) concentration degree prediction step, the concentration degree estimation unit 48 applies the prediction model stored in the prediction model storage unit 35 to the learner feature vector generated by the learner attribute feature extraction unit 45 (learner feature vector related to the target learner) and the learning material feature vector generated by the learning material attribute feature extraction unit 46 (material feature vector related to the learning material that the target learner is currently using), to calculate a predicted concentration degree score when the learner studies the learning material. The concentration degree estimation unit 48 stores the score of the predicted concentration degree in the concentration degree storage unit 36).

Regarding Claim 16, Tomonaga teaches a system comprising: the information processing apparatus according to claim 1 (see Fig. 2, para. [0040]-[0044]); a measurement apparatus that measures at least one of a brain wave and a vital sign of the subject (see para. [0036], para. [0044], para. [0083]. The learning material recommendation device estimates the learner's “concentration degree” during learning from the learner's behavioral data during the learning (for example, biological data measured by a wearable terminal, terminal operation log data measured by using a web application). The concentration degree learning unit 47 may measure the learner's biological data during learning (time-series data on amount of sweating, eye movement, heart rate, blood pressure, electromyogram, etc.) by a wearable sensor or the like, and add the biological data to the feature of the learner. In this case, the concentration degree learning unit 47 may convert the time-series data into spatial data by using a well-known technique such as fast Fourier transform (FFT)); and a terminal to which the information processing apparatus outputs the state information (see Fig. 2, para. [0040]-[0043], para. [0062], para. [0071].
The recommended learning material output unit 21 outputs a result of calculation by the learning material recommendation unit 49 (which corresponds to the learning material that should be learned next by the learner). For example, the recommended learning material output unit 21 displays information that can specify the learning material that should be learned next (the recommended learning material) on the display unit, or sends the information to another terminal (for example, a user terminal)).

Regarding Claim 17, Tomonaga teaches an information processing method (see para. [0001]. A learning material recommendation method) comprising, by one or more computers (see para. [0015] A learning material recommendation program stored in a computer readable recording medium according to the present invention performs, when executed by a processor): acquiring measurement information indicating at least one of a brain wave and a vital sign of a subject during learning (see para. [0036], [0038], [0083] The learning material recommendation device estimates the learner's “concentration degree” during learning from the learner's behavioral data during the learning (for example, biological data measured by a wearable terminal, terminal operation log data measured by using a web application), and estimates the learner's “growth” before and after the learning from the “concentration degree”. The concentration degree learning unit 47 may measure the learner's biological data during learning (time-series data on amount of sweating, eye movement, heart rate, blood pressure, electromyogram, etc.) by a wearable sensor or the like, and add the biological data to the feature of the learner), and attribute information of the subject (see para. [0044]-[0046], para. [0055], [0104]-[0108], Fig. 3. The learner data pre-processing unit 41 refers to the learner data storage unit 31 to read an attribute record related to a learner (hereinafter, referred to as “learner attribute record”). The learner data storage unit 31 preserves learner data. The learner data is attribute data related to the learner. Examples of the learner data include demographic data such as name, age, gender, etc., learning history data, social network service (SNS) data related to the learner (self-introduction, learning goal, etc.), and learner's behavioral data during learning (biological data measured by a wearable terminal, terminal operation log data measured by using a web application, etc.). In the example shown in FIG. 3, the learner data includes data on name, age, gender, self-introduction, and learning target corresponding to a learner ID); generating state information relating to a state of the subject by use of the measurement information and the attribute information (see para. [0101]-[0113], para. [0115]. The concentration degree estimation unit 48 computes a score of the predicted concentration degree. That is, the concentration degree estimation unit 48 uses the learner feature vector generated in the processing in step S203 and the learning material feature vector generated in the processing in step S205 to calculate a score of the predicted concentration degree when the learner studies the learning material. The score of the predicted concentration degree is a real value in the range of [0, 100]. By way of example, the score of the predicted concentration degree is a numerical value called probability (certainty, reliability) of a support vector machine known as one of the pattern recognition models); and outputting the state information (see para. [0114]-[0115]. The concentration degree estimation unit 48 performs writing of the score of the predicted concentration degree. That is, the concentration degree estimation unit 48 writes the score of the predicted concentration degree computed in the processing in step S206 into the concentration degree storage unit 36 in a data format of <learner ID, learning material ID, score of predicted concentration degree>).

Regarding Claim 18, Tomonaga teaches a non-transitory computer-readable medium storing a program causing a computer to execute a control method (see para. [0015] A learning material recommendation program stored in a computer readable recording medium according to the present invention performs, when executed by a processor), the control method (see para. [0001]. A learning material recommendation method) comprising: acquiring measurement information indicating at least one of a brain wave and a vital sign of a subject during learning (see para. [0036], [0038], [0083] The learning material recommendation device estimates the learner's “concentration degree” during learning from the learner's behavioral data during the learning (for example, biological data measured by a wearable terminal, terminal operation log data measured by using a web application), and estimates the learner's “growth” before and after the learning from the “concentration degree”. The concentration degree learning unit 47 may measure the learner's biological data during learning (time-series data on amount of sweating, eye movement, heart rate, blood pressure, electromyogram, etc.) by a wearable sensor or the like, and add the biological data to the feature of the learner), and attribute information of the subject (see para. [0044]-[0046], para. [0055], [0104]-[0108], Fig. 3. The learner data pre-processing unit 41 refers to the learner data storage unit 31 to read an attribute record related to a learner (hereinafter, referred to as “learner attribute record”). The learner data storage unit 31 preserves learner data. The learner data is attribute data related to the learner. Examples of the learner data include demographic data such as name, age, gender, etc., learning history data, social network service (SNS) data related to the learner (self-introduction, learning goal, etc.), and learner's behavioral data during learning (biological data measured by a wearable terminal, terminal operation log data measured by using a web application, etc.). In the example shown in FIG. 3, the learner data includes data on name, age, gender, self-introduction, and learning target corresponding to a learner ID); generating state information relating to a state of the subject by use of the measurement information and the attribute information (see para. [0101]-[0113], para. [0115]. The concentration degree estimation unit 48 computes a score of the predicted concentration degree. That is, the concentration degree estimation unit 48 uses the learner feature vector generated in the processing in step S203 and the learning material feature vector generated in the processing in step S205 to calculate a score of the predicted concentration degree when the learner studies the learning material. The score of the predicted concentration degree is a real value in the range of [0, 100]. By way of example, the score of the predicted concentration degree is a numerical value called probability (certainty, reliability) of a support vector machine known as one of the pattern recognition models); and outputting the state information (see para. [0114]-[0115]. The concentration degree estimation unit 48 performs writing of the score of the predicted concentration degree. That is, the concentration degree estimation unit 48 writes the score of the predicted concentration degree computed in the processing in step S206 into the concentration degree storage unit 36 in a data format of <learner ID, learning material ID, score of predicted concentration degree>).
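The anticipation mapping above repeatedly cites Tomonaga's concentration-degree prediction (paras. [0111]-[0115]): a learner feature vector and a learning-material feature vector are fed to a prediction model that outputs a score in [0, 100], which is then written out as a <learner ID, learning material ID, score> record. A minimal sketch of that flow is below; the reference uses an SVM probability as the scorer, while this sketch substitutes a plain clamped weighted sum, and every name and value shown is illustrative rather than taken from the record:

```python
def predict_concentration(learner_vec, material_vec, weights):
    """Score a <learner, learning material> pair in the range [0, 100].

    Stand-in for the prediction model in Tomonaga: the reference
    computes an SVM probability; here the concatenated feature
    vectors are reduced by a weighted sum and clamped to [0, 100].
    """
    features = learner_vec + material_vec  # concatenate the two feature vectors
    raw = sum(w * x for w, x in zip(weights, features))
    return max(0.0, min(100.0, raw))

# Hypothetical feature vectors and weights (illustrative only).
learner = [0.8, 0.3]    # e.g. derived from attribute + biological data
material = [0.5, 0.9]   # e.g. derived from learning-material attributes
weights = [40, 20, 30, 25]

score = predict_concentration(learner, material, weights)
# Written out in the reference's record shape:
# <learner ID, learning material ID, score of predicted concentration degree>
record = ("learner01", "material07", score)
```

The clamp mirrors only the stated output range; the actual model form in the reference is the SVM named in para. [0113].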
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Tomonaga (US 20200057949 A1).

Regarding Claim 13, Tomonaga teaches the information processing apparatus according to claim 12. Tomonaga further teaches in another embodiment wherein the attribute information indicates one or more attributes, a plurality of the models each associated with one attribute or a combination of two or more attributes are held in an analysis storage unit, and the operations further comprise selecting, based on the attribute information of the subject, the model to be used from among the plurality of models, and generating the state information comprises using the selected model for generation of the state information (see para. [0135]-[0143]. The learning material recommendation device 10 shown in FIG. 12 differs from the learning material recommendation device 1 shown in FIG. 1 in that the concentration degree log input unit 13, the concentration degree log storage unit 33, the concentration degree log pre-processing unit 43, and the concentration degree learning unit 47 are excluded, and a prediction model receiving unit 51 is added. The prediction model receiving unit 51 receives a prediction model via the Internet. Although the prediction model may be generated by the processing illustrated in FIG. 8, it may be generated by other processing. In the concentration degree prediction step, the prediction model receiving unit 51 firstly receives a prediction model, and stores the prediction model in the prediction model storage unit 35. The concentration degree estimation unit 48 then reads the prediction model from the prediction model storage unit 35 (step S401). In the present embodiment, utilizing the prediction model provided by the prediction model distribution server or the like results in a simplified configuration of the learning material recommendation device 10).
One of ordinary skill in the art, before the effective filing date of the claimed invention, would have recognized the obviousness of modifying the information processing apparatus disclosed by Tomonaga with Tomonaga’s prediction model teachings in another embodiment, since utilizing the prediction model provided by the prediction model distribution server or the like results in a simplified configuration of the learning material recommendation device 10 (Tomonaga para. [0143]). Moreover, “combining two embodiments disclosed adjacent to each other in a prior art patent does not require a leap of inventiveness.” Boston Scientific Scimed, Inc. v. Cordis Corp., 554 F.3d 982, 991 (Fed. Cir. 2009).

Claims 4-5 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Tomonaga (US 20200057949 A1) in view of Tsuruta (JP 2011007963A, see attached English translation, hereinafter referenced as Tsuruta).

Regarding Claim 4, Tomonaga teaches the information processing apparatus according to claim 1. Tomonaga does not explicitly disclose wherein outputting the state information comprises outputting the state information in real time to a terminal used by an instructor who instructs the subject at a measurement time point where the measurement information is acquired. However, Tsuruta teaches wherein outputting the state information comprises outputting the state information in real time to a terminal used by an instructor who instructs the subject at a measurement time point where the measurement information is acquired (see Fig. 1, terminal 20A, para. [0021]-[0022], para. [0027], para. [0051], para. [0054]. The user terminal 20A is assumed to be a terminal used by an instructor, and may be referred to as an "instructor's terminal" hereinafter. "Leader" includes speakers, explainers, teachers, etc. The situation information estimated at the learner's terminal is distributed to the instructor's terminal via the distribution server 10 and is presented to the instructor.
This allows the instructor to grasp the learner's physical and mental state in real time and proceed with the lecture according to that physical and mental state, making it possible to prevent misunderstandings due to learners failing to hear or overlook something. The present invention can provide a distance learning system and a distance learning method that allow an instructor to grasp the physical and mental conditions of a learner in real time. This allows the instructor to conduct the lecture in accordance with the learner's physical and mental condition, making it possible to prevent misunderstandings due to learners failing to hear or see something).

Tomonaga and Tsuruta are related to learning systems and methods; thus, one of ordinary skill in the art, before the effective filing date of the claimed invention, would have recognized the obviousness of modifying the information processing apparatus disclosed by Tomonaga with Tsuruta’s teachings of outputting the state information in real time to a terminal used by an instructor, since it would have provided a distance learning system that allows an instructor to grasp the physical and mental conditions of a learner in real time. This allows the instructor to conduct the lecture in accordance with the learner's physical and mental condition, making it possible to prevent misunderstandings due to learners failing to hear or see something (Tsuruta, para. [0054]).

Regarding Claim 5, Tomonaga and Tsuruta teach the information processing apparatus according to claim 4. Tsuruta further teaches wherein the operations further comprise outputting notification information to the terminal in a case where the state information satisfies a previously determined condition (see para. [0051]-[0054].
The client 20 includes a situation information monitoring/meter creation/display unit, a comprehension meter creation/display module, a drowsiness/boredom meter creation/display module, and a concentration meter creation/display module. The situation information monitoring and meter creation and display unit monitors the various situation information that is input, and if there is a change in the situation information, it notifies each meter creation and display module of the change. The comprehension meter creation and display module creates and displays a meter that indicates the user's learning comprehension, the drowsiness/boredom meter creation and display module creates and displays a meter that indicates the user's drowsiness/boredom, and the concentration meter creation and display module creates and displays a meter that indicates the user's concentration. According to this configuration, the user's situation is displayed on the meter, so that the user's situation can be easily grasped. The client 20 includes a drowsiness variable/alarm monitoring module, an alarm sound module, an alarm text display module, and the like. The drowsiness variable/alarm monitoring module monitors the drowsiness variable, and activates the alarm sound module or the alarm text display module when the value of the drowsiness variable reaches a predetermined value. The warning sound module outputs a warning sound, and the warning text display module displays a warning text. According to this configuration, a warning can be given to the user at an appropriate time, thereby improving the learning effect. As described above, the present invention can provide a distance learning system and a distance learning method that allow an instructor to grasp the physical and mental conditions of a learner in real time).
Tomonaga and Tsuruta are related to learning systems and methods; thus, one of ordinary skill in the art, before the effective filing date of the claimed invention, would have recognized the obviousness of modifying the information processing apparatus disclosed by Tomonaga with Tsuruta’s teachings, since it would have provided a distance learning system that allows an instructor to grasp the physical and mental conditions of a learner in real time. This allows the instructor to conduct the lecture in accordance with the learner's physical and mental condition, making it possible to prevent misunderstandings due to learners failing to hear or see something (Tsuruta, para. [0054]).

Regarding Claim 8, Tomonaga teaches the information processing apparatus according to claim 6. Tomonaga further teaches wherein generating the state information comprises generating the state information (see para. [0101]-[0113], para. [0115]. The concentration degree estimation unit 48 computes a score of the predicted concentration degree. That is, the concentration degree estimation unit 48 uses the learner feature vector generated in the processing in step S203 and the learning material feature vector generated in the processing in step S205 to calculate a score of the predicted concentration degree when the learner studies the learning material. The score of the predicted concentration degree is a real value in the range of [0, 100]. By way of example, the score of the predicted concentration degree is a numerical value called probability (certainty, reliability) of a support vector machine known as one of the pattern recognition models).

Tomonaga does not explicitly disclose generating the state information for each curriculum or unit. However, Tsuruta teaches generating the state information for each curriculum or unit (see Fig. 1, 20B User terminal (terminal for learner), 20C User terminal (terminal for learner), para. [0021]-[0022], para. [0051]-[0054].
The user terminals 20B and 20C are assumed to be terminals used by learners, and may be referred to as "learner terminals". The situation information monitoring and meter creation and display unit monitors the various situation information that is input, and if there is a change in the situation information, it notifies each meter creation and display module of the change. The comprehension meter creation and display module creates and displays a meter that indicates the user's learning comprehension, the drowsiness/boredom meter creation and display module creates and displays a meter that indicates the user's drowsiness/boredom, and the concentration meter creation and display module creates and displays a meter that indicates the user's concentration. As described above, the present invention can provide a distance learning system and a distance learning method that allow an instructor to grasp the physical and mental conditions of a learner in real time. Furthermore, being able to grasp the status of multiple listeners and participants in explanations and lectures in real time will lead to improved efficiency not only in distance learning, but also in remote collaborative development, telecommuting, which is encouraged by government research, and offshore development overseas).

Tomonaga and Tsuruta are related to learning systems and methods; thus, one of ordinary skill in the art, before the effective filing date of the claimed invention, would have recognized the obviousness of modifying the information processing apparatus disclosed by Tomonaga with Tsuruta’s teachings, since being able to grasp the status of multiple listeners and participants in explanations and lectures in real time would lead to improved efficiency not only in distance learning, but also in remote collaborative development, telecommuting (which is encouraged by government research), and offshore development overseas (see para. [0054]).
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Tomonaga (US 20200057949 A1) in view of Tsuruta (JP 2011007963A, see attached English translation, hereinafter referenced as Tsuruta), further in view of Carr et al. (US 20180075772 A1, hereinafter referenced as Carr).

Regarding Claim 9, Tomonaga and Tsuruta teach the information processing apparatus according to claim 8. Tomonaga and Tsuruta do not explicitly teach wherein outputting the state information comprises outputting the state information to a terminal, and the terminal outputs the state information in a comparable state between a plurality of curricula or between a plurality of units. However, Carr teaches wherein outputting the state information comprises outputting the state information to a terminal, and the terminal outputs the state information in a comparable state between a plurality of curricula or between a plurality of units (see Fig. 1, teacher device 108, para. [0021]-[0022], para. [0028], para. [0032], para. [0065]. The server device can generate a learning assessment report that evaluates the learning performance of students for one or more learning experiences. In addition, the teacher can compare the learning performances of students relative to one another to determine collectively what material the students find particularly difficult, interesting, easy, etc., and adjust their curriculum or teaching techniques accordingly. For example, the teacher can go back and look at the collective learning performance feedback for a class as a whole to learn how the students react to specific tasks (e.g., lecture, individual work, small group work, exams, etc.) and adjust the teacher lesson plan accordingly to try and make the time with the students more effective.
The system 100 and/or the components of the system 100 can be employed to use hardware and/or software to solve problems that are highly technical in nature, that are not abstract and that cannot be performed as a set of mental acts by a human. For example, system 100 and/or the components of the system 100 can be employed to use hardware and/or software to perform operations including monitoring neurofeedback (e.g., hemodynamic, metabolic and brainwave data) generated by a student during a learning experience and correlating patterns in the neurofeedback information with qualitative and/or quantitative mental performance measures with respect to one or more defined cognitive function areas. The mental performance measures can be compared to various thresholds to automatically identify if a student is exhibiting low learning performance, thereby facilitating improved processing time for determining if the user is having difficulty learning during a learning experience. In addition, system 100 and/or the components of system 100 can automatically generate and send notifications to the student, the student's teacher, (or another entity responsible for facilitating learning by the student), indicating the student's low learning performance during and/or after the learning experience so that the student and/or the student's teacher can react appropriately. The learning performance server device 106 can further send the notification to a device (e.g., teacher device 108) associated with another entity responsible for facilitating the learning by the student, such as the student's teacher (e.g., teacher 110 or another suitable entity). Accordingly, the student's teacher (e.g., teacher 110) can quickly and effectively address the student learning needs before the student falls further behind.
For example, after or while a teacher 110 is teaching or conducting a lecture to a plurality of students (e.g., users 104.sub.1-N), the teacher 110 can receive a notification at a device employed by the teacher (e.g., teacher device 108) that identifies one of the students and indicates the student learning performance is low).

Tomonaga, Tsuruta and Carr are related to learning systems and methods; thus, one of ordinary skill in the art, before the effective filing date of the claimed invention, would have recognized the obviousness of modifying the information processing apparatus disclosed by Tomonaga and Tsuruta with Carr’s teachings, since it would have facilitated improving the user's learning of the content (para. [0006]).

Claims 10-11 are rejected under 35 U.S.C. 103 as being unpatentable over Tomonaga (US 20200057949 A1) in view of Kang et al. (US 20230034709 A1, hereinafter referenced as Kang).

Regarding Claim 10, Tomonaga teaches the information processing apparatus according to claim 1. Tomonaga does not explicitly disclose wherein the attribute information includes an attribute relating to a factor needing support. However, Kang teaches wherein the attribute information includes an attribute relating to a factor needing support (see para. [0032], para. [0035]. The experienced difficulty determination unit 108 determines the learner's experienced difficulty with respect to the learning content using the micro-vibration information and/or the fine skin color change amount information. Specifically, the experienced difficulty determination unit 108 may determine the learner's experienced difficulty based on emotional information and/or physical state information. The learner information generator 106 may calculate the learner's emotional information and/or the learner's physical state information using the micro-vibration information and/or the fine skin color change amount.
The emotional information may be, for example, information about at least one of emotion classifications such as joy, surprise, sadness, anger, interest, stress, fear, boredom, pleasantness, displeasure, nervousness, frustration, and neutral. Such emotional information may be information of a main emotion classification to which the learner belongs, information of a plurality of emotion classifications to which the learner belongs, or weight information of a plurality of emotion classifications to which the learner belongs, but is not limited thereto).

Tomonaga and Kang are related to learning systems and methods; thus, one of ordinary skill in the art, before the effective filing date of the claimed invention, would have recognized the obviousness of modifying the information processing apparatus disclosed by Tomonaga with Kang’s teachings, since it would have aided in providing learning content customized for a learner in consideration of the learner's experienced difficulty (para. [0014]).

Regarding Claim 11, Tomonaga and Kang teach the information processing apparatus according to claim 10. Kang further teaches wherein the attribute information includes at least one of information relating to difficulty of the subject and information relating to a language of the subject (see para. [0035]. The experienced difficulty determination unit 108 determines the learner's experienced difficulty with respect to the learning content using the micro-vibration information and/or the fine skin color change amount information. Specifically, the experienced difficulty determination unit 108 may determine the learner's experienced difficulty based on emotional information and/or physical state information calculated using micro-vibration information and/or fine skin color change information.
The experienced difficulty determination unit 108 may determine the experienced difficulty using an algorithm for determining the experienced difficulty using emotional information and/or physical state information as parameters or a pre-learned experienced difficulty determination model).

Tomonaga and Kang are related to learning systems and methods; thus, one of ordinary skill in the art, before the effective filing date of the claimed invention, would have recognized the obviousness of modifying the information processing apparatus disclosed by Tomonaga with Kang’s teachings, since it would have aided in providing learning content customized for a learner in consideration of the learner's experienced difficulty (para. [0014]).

Claims 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over Tomonaga (US 20200057949 A1) in view of Shim et al. (US 20150125842 A1, hereinafter referenced as Shim).

Regarding Claim 14, Tomonaga teaches the information processing apparatus according to claim 1. Tomonaga does not explicitly disclose wherein acquiring the measurement information comprises acquiring the measurement information by processing an image of the subject during learning. However, Shim teaches wherein acquiring the measurement information comprises acquiring the measurement information by processing an image of the subject during learning (see para. [0025], para. [0051], para. [0058], para. [0095]. The multimedia apparatus 100 may detect a user state using various sensors. The user state may be an understanding or concentration of the user. In particular, the multimedia apparatus 100 may detect concentration of the user according to a blink rate of the user and an eyesight direction of the user, which may be photographed by a camera, and motion of a user face detected by a motion sensor).
Tomonaga and Shim are related to learning systems and methods; thus, one of ordinary skill in the art, before the effective filing date of the claimed invention, would have recognized the obviousness of modifying the information processing apparatus disclosed by Tomonaga with Shim’s teachings, since it would have aided in determining the state of understanding or concentration of a learner, thus aiding in improving learning efficiency.

Regarding Claim 15, Tomonaga teaches the information processing apparatus according to claim 1. Tomonaga does not explicitly disclose wherein generating the state information comprises generating the state information by further use of an image of the subject during learning. However, Shim teaches wherein generating the state information comprises generating the state information by further use of an image of the subject during learning (see para. [0025], para. [0051], para. [0058], para. [0095]. The multimedia apparatus 100 may detect a user state using various sensors. The user state may be an understanding or concentration of the user. In particular, the multimedia apparatus 100 may detect concentration of the user according to a blink rate of the user and an eyesight direction of the user, which may be photographed by a camera, and motion of a user face detected by a motion sensor. In addition, the multimedia apparatus 100 may output questions about the education content that is currently reproduced at predetermined intervals and determine understanding of the user according to user voice input through a microphone in response to the questions). Tomonaga and Shim are related to learning systems and methods; thus, one of ordinary skill in the art, before the effective filing date of the claimed invention, would have recognized the obviousness of modifying the information processing apparatus disclosed by Tomonaga with Shim’s teachings.
This would have aided in determining the state of understanding or concentration of a learner, thus aiding in improving learning efficiency.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to IVELISSE MARTINEZ QUILES whose telephone number is (571)270-7618. The examiner can normally be reached Monday through Friday, 1:00 PM to 5:00 PM EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Temesghen Ghebretinsae, can be reached at 571-272-3017. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/IM/
Examiner, Art Unit 2626

/TEMESGHEN GHEBRETINSAE/
Supervisory Patent Examiner, Art Unit 2626

3/30/26

Prosecution Timeline

Feb 21, 2025: Application Filed
Mar 29, 2026: Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596451: TOUCH DETECTION MODULE AND DISPLAY DEVICE INCLUDING THE SAME (granted Apr 07, 2026; 2y 5m to grant)
Patent 12596473: Touch Screen and Image Display Method Thereof (granted Apr 07, 2026; 2y 5m to grant)
Patent 12586524: PIXEL CIRCUIT AND DISPLAY PANEL (granted Mar 24, 2026; 2y 5m to grant)
Patent 12547286: TOUCH DISPLAY PANEL, METHOD FOR MANUFACTURING THE SAME, AND DISPLAY APPARATUS (granted Feb 10, 2026; 2y 5m to grant)
Patent 12535896: WRITING DEVICE, INTELLIGENT WRITING BOARD AND METHOD FOR SETTING COLOR OF ELECTRONIC HANDWRITING (granted Jan 27, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 72%
With Interview: 99% (+27.0%)
Median Time to Grant: 2y 2m
PTA Risk: Low
Based on 421 resolved cases by this examiner. Grant probability derived from career allow rate.
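
The headline figures above follow from simple arithmetic: the 72% grant probability is the examiner's career allow rate (303 granted of 421 resolved), and the 99% with-interview figure is consistent with adding the +27.0% interview lift to that base rate. A minimal sketch of that reading, assuming a purely additive lift capped at 100% (the function name and the additive model are illustrative assumptions, not the tool's actual methodology):

```python
# Reconstructing the projection figures under an assumed additive-lift model.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

base = allow_rate(303, 421)                # career allow rate: ~71.97%
with_interview = min(base + 27.0, 100.0)   # assumed additive +27.0% interview lift

print(round(base))            # 72
print(round(with_interview))  # 99
```

A production model would more plausibly condition on case-level features rather than add a flat lift, but the additive reading reproduces the displayed 72% and 99% exactly.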
