Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Applicant’s submission of a Response
Applicant’s submission of a response was received on 10/22/2025. Claims 1-20 are pending.
Response to Arguments
Applicant's arguments filed October 22, 2025 have been fully considered, but they are not persuasive. Applicant’s representative asserts that the amended claim limitations are not met. However, the rejection of claims 1-20 is maintained as presented below. Moreover, in light of the amendments to the claims, new rejections under 35 U.S.C. 103 have been presented, as discussed in detail below.
Applicant’s representative alleges the following:
(1) Regarding the 35 U.S.C. 101 rejection, the problem addressed by the invention is the technical difficulty of synchronizing, integrating, and interpreting multiple asynchronous sensor and camera inputs in real time in order to determine user engagement and emotion and to adapt content delivery accordingly.
(2) Regarding the 35 U.S.C. 101 rejection, the invention changes how a learning management system processes, analyzes, and acts on data.
(3) Regarding the 35 U.S.C. 101 rejection, the ordered combination of elements recited in the claims is neither well understood, routine, nor conventional in the field of remote learning or data analytics.
(4) Regarding the 35 U.S.C. 103 rejection, Dolsma collects data from various sources but does not describe synchronizing those data streams or assigning relative weights to account for the timing, reliability, or relevance of each type of input.
(5) Regarding the 35 U.S.C. 103 rejection, Dolsma relies on uniform performance metrics applied across all students. It does not store or reference a user-specific baseline pattern that defines what constitutes normal engagement for that particular individual. The present claim uses this individualized baseline as a calibration reference, allowing the system to detect subtle, user-specific changes in emotion or attention that a general model could not identify.
(6) Regarding the 35 U.S.C. 103 rejection, Dolsma allows an instructor to adjust content manually but does not disclose a system that autonomously changes the delivery mode between video, slide, and interactive formats based on computed engagement thresholds.
(7) Regarding the 35 U.S.C. 103 rejection, neither reference suggests distributing computation to an edge processor or limiting data transmission to aggregated metrics for latency reduction and privacy protection.
Regarding point (1), the examiner respectfully disagrees.
Applicant’s representative argues that this is not a mental task; it is a data-integration and signal-processing problem that exists only in a computerized environment. However, in response to the argument, the claim is directed to determining, in real time, user engagement and emotion. Synchronizing, integrating, and interpreting multiple asynchronous sensor and camera inputs in real time are all core features of data fusion in computer vision. This combination of mechanisms simply uses computer vision and machine learning models as a tool to perform the abstract idea.
Furthermore, with respect to mental processes, actual mental performance of the abstract idea is not required. Further, MPEP § 2106.04(a)(2)(III)(C) states that “claims can recite a mental process even if they are claimed as being performed on a computer” and that “examiners should review the specification to determine if the claimed invention is described as a concept that is performed in the human mind and Appellant is merely claiming that concept performed 1) on a generic computer, or 2) in a computer environment, or 3) is merely using a computer as a tool to perform the concept. In these situations, the claim is considered to recite a mental process.” In the present case, the claim limitations perform steps that are performed on a generic computer and/or in a computer environment, and merely use a computer as a tool to perform the concept.
Regarding point (2), the examiner respectfully disagrees.
Applicant’s representative argues that by performing temporal synchronization and weighting of multimodal data, the system addresses the technical issue of latency and inconsistency among heterogeneous sensor and camera inputs. In response to the argument, applicant is reciting features that are inherently found in data fusion; this is not an improvement to computer vision or to the data fusion used in computer vision. Temporal synchronization and the use of weighting mechanisms when integrating multimodal data are core features of data fusion in computer vision.
Regarding point (3), the examiner respectfully disagrees.
Applicant’s representative argues that the features of temporal synchronization and weighted fusion of multimodal data, comparison to a user-specific baseline pattern, automatic adaptation of digital content, and distributed edge-based analytics represent a particular arrangement of hardware and algorithms that collectively improve the technological process of monitoring and responding to user engagement. In response to the argument, these features may or may not be well understood, routine, or conventional in the field of remote learning or data analytics. However, these features are well understood, routine, and conventional in the field of data fusion in computer vision.
Furthermore, the use of artificial intelligence (such as computer vision) alone does not exempt the claims from subject-matter eligibility scrutiny. Specifically, in RECENTIVE ANALYTICS, INC. v. FOX CORP., the slip opinion at page 14 provides the following: “We see no merit to Recentive’s argument that its patents are eligible because they apply machine learning to this new field of use. We have long recognized that “[a]n abstract idea does not become nonabstract by limiting the invention to a particular field of use or technological environment.” Intell. Ventures I LLC v. Capital One Bank (USA), 792 F.3d 1363, 1366 (Fed. Cir. 2015).”
Second, using artificial intelligence and/or machine learning models to determine engagement and interest levels and predict learning effectiveness is merely claiming the abstract idea itself. Specifically, in RECENTIVE ANALYTICS, INC. v. FOX CORP., the slip opinion at pages 16-17 provides the following: “Recentive claims that the inventive concept in its patents is “using machine learning to dynamically generate optimized maps and schedules based on real-time data and update them based on changing conditions.” Appellant’s Br. 44. As the district court correctly recognized, see Recentive, 692 F. Supp. 3d at 456, this is no more than claiming the abstract idea itself. Such a position plainly fails to identify anything in the claims that would “‘transform’ the claimed abstract idea into a patent-eligible application.” Alice, 573 U.S. at 221 (quoting Mayo, 566 U.S. at 71).
In short, we perceive nothing in the claims, whether considered individually or in their ordered combination, that would transform the Machine Learning Training and Network Map patents into something “significantly more” than the abstract idea of generating event schedules and network maps through the application of machine learning. See SAP Am., 898 F.3d at 1169–70; Broadband iTV, 113 F.4th at 1372.”
Finally, the Applicant’s claimed steps of classify[ing], determin[ing], and measur[ing] data to adapt or improve learning material clearly read on an abstract idea, either in the form of “certain methods of organizing human activity,” in terms of managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions), or reasonably in the form of “mental processes,” in terms of processes that can be performed in the human mind (including an observation, evaluation, judgment, or opinion). The Applicant’s limitations simply describe a process of data gathering and manipulation, which is at least partially analogous to “collecting information, analyzing it, and displaying certain results of the collection and analysis” (see Electric Power Group, LLC v. Alstom, 830 F.3d 1350, 119 U.S.P.Q.2d 1739 (Fed. Cir. 2016)). Specifically, teachers and instructors have performed these steps in analog form for decades (i.e., managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions) and/or mental processing by an observation, evaluation, judgment, or opinion). As such, the argument is not persuasive.
Regarding point (4), the Examiner notes that Dolsma does teach or suggest the specific claimed data processing techniques.
Applicant’s representative argues that the first point of distinction lies in the temporal synchronization and weighting of multimodal sensor and camera data. In response to the argument, applicant is reciting features that are inherently found in data fusion; this is not an improvement to computer vision or to the data fusion used in computer vision. The temporal synchronization and weighting of multimodal sensor and camera data are core features of data fusion in computer vision. (See office action below.)
Regarding point (5), the Examiner notes that Dolsma does teach or suggest the specific claimed data processing techniques.
Applicant’s representative argues that the second distinction is the comparison of the fused and weighted data to a baseline pattern of the same user and the use of a dynamically adjusted individualized threshold. In response to the argument, applicant is reciting features that are inherently found in data fusion; this is not an improvement to computer vision or to the data fusion used in computer vision. Comparing the fused and weighted data to a baseline pattern and using a dynamically adjusted individualized threshold are core features of data fusion in computer vision. (See office action below.)
Regarding point (6), the Examiner notes that Dolsma alone is not relied upon to teach or disclose this limitation.
Applicant’s representative argues that a third critical difference is the automatic modification of digital content delivery in response to engagement measurements. In response to the argument, applicant is reciting features that teachers and educators have used for decades, such as switching to a video presentation, to a slide-based presentation, or to a more interactive approach based on measurements of student interest, student engagement, and/or learning effectiveness. Furthermore, there is no mention of automatic modification of digital content delivery; ¶55 of the specification of the present invention mentions only that content can be presented in slides, video, or text. The office action relies on a newly found prior art reference, Chetlur et al. (US 20180018507 A1; hereinafter Chetlur) (necessitated by applicant’s amendment), to teach that content can be presented in video or slideshow format. (See office action below.)
Regarding point (7), the Examiner notes that Dolsma alone is not relied upon to teach or disclose this limitation.
Applicant’s representative argues that the claimed edge-processing architecture changes the system structure itself and yields measurable performance and privacy benefits that are not contemplated by the cited art. In response to the argument, applicant is reciting features that data fusion systems have long used to achieve both latency reduction and privacy protection. The purpose of an edge processor is to process data locally, near its source, to filter, mask, and transform it before sending it to a central system such as the cloud. Furthermore, by processing data locally, an edge processor can enhance privacy by keeping sensitive data from being sent to the cloud. The office action relies on a newly found prior art reference, Porambage et al. (Survey on Multi-Access Edge Computing for Internet of Things Realization; hereinafter Porambage) (necessitated by applicant’s amendment), to teach that an edge processor can achieve both latency reduction and privacy protection. (See office action below.)
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. The claims are directed to at least one of the abstract idea groupings, according to the 2019 Revised Patent Subject Matter Eligibility Guidance (Mathematical Concepts, Mental Processes, and/or Certain Methods of Organizing Human Activity). Further, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception, as discussed below.
Step 1 of the 2019 Revised Patent Subject Matter Eligibility Guidance
More specifically, regarding Step 1 of the 2019 Revised Patent Subject Matter Eligibility Guidance, the claims are directed to a system and/or a process, which are statutory categories of invention.
Step 2A-1 of the 2019 Revised Patent Subject Matter Eligibility Guidance
Next, the claims are analyzed to determine whether they are directed to a judicial exception.
Independent claim 1 recites the following, with the abstract ideas highlighted in bold, including an indication as to the abstract idea grouping(s) to which the indicated limitations belong, according to the 2019 Revised Patent Subject Matter Eligibility Guidance. Independent claim 11, having substantially similar features, was also analyzed, and the following conclusion is also applicable to it:
A method, comprising: receiving input related to a user at a learning management system; performing learning management on the input, the learning management including data fusion, computer vision, and machine learning models, wherein: the input includes a combination of: (i) real-time sensor data comprising physiological data; (ii) a user profile including demographic data; and (iii) a learning trace comprising a learning history of the user; the data fusion includes temporally synchronizing sensor data and camera data and assigning a weight to each input modality based on a learned reliability score for that user; the machine learning models, based on input from the computer-vision module, are configured to classify, in real time, emotion of the user as one of a plurality of emotions, and to compare the synchronized and weighted fused data to a stored baseline pattern of the same user in the learning trace, identify deviations exceeding a dynamically adjusted individualized threshold, and, based on those deviations, determine engagement and interest levels and predict learning effectiveness; generating outputs from the computer vision and the machine learning models, wherein the outputs are generated based on the fused data, the outputs including measurements of a user interest, a user engagement, and a learning effectiveness, each computed independently and updated continuously; presenting the measurements in a user interface to a supervisor, the interface including actions and visual indicators that convey attention or warning conditions; automatically changing a mode of digital content delivery between video, slide, and interactive formats when a measurement of engagement or interest remains below the individualized threshold; and executing at least a portion of the analytics locally on an edge processor associated with a user device to reduce transmission latency and protect user data privacy while transmitting only aggregated engagement, interest, and effectiveness metrics to the supervisor interface, and causing a visual and/or audible alert when a measurement indicates a warning condition.
The limitations in claim 1 (as well as claim 11) recite an abstract idea included in the grouping of mental processes, connected to technology only through application thereof using generic computing elements (e.g., computer vision, machine learning models, etc.) and/or insignificant extra-solution activity. According to the 2019 Revised Patent Subject Matter Eligibility Guidance:
Mental Processes include concepts performed in the human mind (including an observation, evaluation, judgment, opinion);
Specifically, the instant claims include functions/limitations, as highlighted in the independent claim above, that constitute at least:
Concepts performed in the human mind (e.g., “classifying emotions in real time, determining engagement, determining interest levels, predicting learning effectiveness, measuring user interest, measuring user engagement, and measuring learning effectiveness”), which is an abstract idea included in the grouping of Mental Processes. These limitations are interpreted as at least Mental Processes insomuch as the claim limitations are directed to performing the concepts in the human mind, while only generically connected to interaction with a computer utilizing non-special purpose generic computing elements and/or insignificant extra-solution activity as set forth in the claims.
Regarding dependent claims 2-10 and 12-20:
Each claim depends either directly or indirectly from the independent claim identified above and includes all the limitations of that independent claim. Therefore, each dependent claim recites the same abstract idea as identified above. Each dependent claim further describes additional aspects of the abstract idea, i.e., additional aspects of the Mental Processes. For example, some dependent claims merely provide additional Mental Processes to be performed and/or additional insignificant extra-solution activity, without anything more significant to establish eligibility under 35 U.S.C. 101.
Step 2A-2 of the 2019 Revised Patent Subject Matter Eligibility Guidance
The second prong of Step 2A considers whether the claim limitations are integrated into a practical application.
Limitations that are indicative of integration into a practical application:
-Improvements to the functioning of a computer, or to any other technology or technical field - see MPEP 2106.05(a)
-Applying or using a judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition – see Vanda Memo
-Applying the judicial exception with, or by use of, a particular machine - see MPEP 2106.05(b)
-Effecting a transformation or reduction of a particular article to a different state or thing - see MPEP 2106.05(c)
-Applying or using the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception - see MPEP 2106.05(e) and Vanda Memo
Limitations that are not indicative of integration into a practical application:
-Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea - see MPEP 2106.05(f)
-Adding insignificant extra-solution activity to the judicial exception - see MPEP 2106.05(g)
-Generally linking the use of the judicial exception to a particular technological environment or field of use – see MPEP 2106.05(h)
Claims 1-20 clearly do not improve the functioning of a computer or of computer vision, as they only incorporate generic computing elements or generic computer-vision elements. The claims do not effect a particular treatment, and do not transform or reduce a particular article to a different state or thing. Similarly, there is no improvement to a technical field. In addition, the claims do not apply the judicial exception with, or by use of, a particular machine, and do not apply or use the judicial exception in any other meaningful way. The claimed invention does not suggest improvements to the functioning of a computer or to any other technology or technical field (see MPEP 2106.05(a)).
This judicial exception is not integrated into a practical application because the claimed invention merely applies the judicial exception, provides mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform the abstract idea (MPEP 2106.05(f)), and/or generally links the use of the judicial exception to a particular technology or field of use (MPEP 2106.05(h)). The claimed computer components are recited at a high level of generality and are merely invoked as a tool to perform the abstract idea. Simply implementing the abstract idea on a generic computer is not a practical application of the abstract idea.
For the reasons discussed above, the claim limitations are not integrated into a practical application.
Step 2B of the 2019 Revised Patent Subject Matter Eligibility Guidance
Next, the claims as a whole are analyzed to determine whether any element, or combination of elements, is sufficient to ensure that the claim amounts to significantly more than the exception.
The claim(s) does/do not include additional elements that are sufficient to amount to significantly more than the judicial exception because no element or combination of elements is sufficient to ensure that any claim of the present application as a whole amounts to significantly more than one or more judicial exceptions, as described above. For example, the recitations of the utilization of “computer vision and machine learning models,” etc., used to apply the abstract idea merely implement the abstract idea at a low level of generality and fail to impose meaningful limitations that impart patent eligibility. These elements, and the mere processing of data using these elements, do not set forth significantly more than the abstract idea itself applied on general-purpose computing devices. The recited generic elements are a mere means to implement the abstract idea. Thus, they cannot provide the “inventive concept” necessary for patent eligibility. “[I]f a patent’s recitation of a computer amounts to a mere instruction to ‘implemen[t]’ an abstract idea ‘on ... a computer,’ ... that addition cannot impart patent eligibility.” Alice, 134 S. Ct. at 2358 (quoting Mayo, 132 S. Ct. at 1301). As such, the “significantly more” required to overcome the 35 U.S.C. 101 hurdle and transform the claimed subject matter into a patent-eligible application is lacking. Accordingly, the claims are not patent eligible.
Further, eligibility would require structure beyond the generic; instead, the claimed structure can be interpreted as analogous to general-purpose structures and general-purpose computing elements, in that they represent well-understood, routine, and conventional elements that do not add significantly more to the claims. See Alice Corp. v. CLS Bank International, 134 S. Ct. at 2358-59. The elements of computer vision and machine learning models are well-known, conventional devices used to electronically implement learning and education, as evidenced by Marian Stewart Bartlett and Jacob Whitehill (2010) (Automatic facial expression measurement: Recent applications to basic research in human behavior, learning, and education; hereinafter Stewart). Stewart discloses that conventional facial expression measurement comprises machine learning models and computer vision to include more informative image features and robust motion tracking (¶2). See Berkheimer v. HP Inc., 881 F.3d 1360 (Fed. Cir. 2018).
The dependent claims do not add “significantly more” for at least the same reasons as directed to their respective independent claims, at least based on the position, as discussed above, that each of the dependent claims merely provide additional limitations to further expand the abstract idea of the independent claims, without adding anything which would establish eligibility under 35 U.S.C. 101.
Consequently, consideration of each and every element of each and every claim, both individually and as an ordered combination, leads to the conclusion that the claims are not patent eligible under 35 U.S.C. § 101.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1 and 11 recite the following limitation: “automatically changing a mode of digital content delivery between video, slide, and interactive formats.” This limitation is not adequately described in the specification as originally filed and forms the basis of the rejection. As such, the limitation is reasonably rejected under a theory of new matter. Therefore, claims 1 and 11 are rejected under 35 U.S.C. § 112(a) as failing to comply with the written description requirement.
Claims 1 and 11 also recite the following limitation: “while transmitting only aggregated engagement, interest, and effectiveness metrics to the supervisor interface.” This limitation is likewise not adequately described in the specification as originally filed and forms the basis of the rejection. As such, the limitation is reasonably rejected under a theory of new matter. Therefore, claims 1 and 11 are rejected under 35 U.S.C. § 112(a) as failing to comply with the written description requirement.
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 3 and 13 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claim 3 recites the limitation “physiological data” in line 2. Since the claim language does not use an antecedent basis indicator (e.g., “the” or “said”), it is unclear whether applicant is referring to the same “physiological data” of claim 1, from which claim 3 depends, or to a second physiological data. For purposes of examination, it is assumed that “physiological data” refers to the same “physiological data” found in claim 1.
Claim 13 recites the limitation “physiological data” in line 2. Since the claim language does not use an antecedent basis indicator (e.g., “the” or “said”), it is unclear whether applicant is referring to the same “physiological data” of claim 11, from which claim 13 depends, or to a second physiological data. For purposes of examination, it is assumed that “physiological data” refers to the same “physiological data” found in claim 11.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 5-13, and 15-20 are rejected under 35 U.S.C. 103 as being unpatentable over Dolsma et al. (US 2018/0232567 A1; hereinafter Dolsma) in view of Chetlur, and further in view of Porambage.
Regarding claims 1 and 11, Dolsma discloses a method, comprising: receiving input related to a user at a learning management system (student gestures, emotions, and movements; 0008); performing learning management on the input, the learning management including data fusion (this system inherently performs data fusion because it integrates multiple sources of data, like optical sensor data in ¶26, physiological data in ¶33, and performance data in ¶46), computer vision, and machine learning models (machine vision techniques and logistic regression mathematical model; 0009 and 0014), wherein: the input includes a combination of: (i) real-time sensor data comprising physiological data (real-time diagnosis; 0058); (ii) a user profile (student profile; paragraph 0054, part 310) including demographic data; and (iii) a learning trace comprising a learning history of the user (student’s learning history; paragraph 0054, part 310); the data fusion includes temporally synchronizing sensor data and camera data and assigning a weight to each input modality based on a learned reliability score for that user (the features of synchronizing data and assigning a weight to each input modality are inherent to data fusion as long as there are sensor data and camera data); the machine learning models, based on input from the computer-vision module, are configured to classify, in real time (real-time diagnosis; 0058), emotion of the user as one of a plurality of emotions (0058), and to compare the synchronized and weighted fused data to a stored baseline pattern of the same user in the learning trace (these features are also inherent and a primary application of data fusion, which is often used for tasks like anomaly detection, change detection, performance analysis, and quality assessment), identify deviations exceeding a dynamically adjusted individualized threshold, and, based on those deviations, determine engagement and interest levels and predict learning effectiveness (again, this is
quoting what data fusion is able to do, and applying it to engagement and interest levels and to predicting learning effectiveness. Data fusion, particularly within the field of Multimodal Learning Analytics (MMLA), allows for identifying deviations exceeding a dynamically adjusted individualized threshold and, based on those deviations, determining engagement and interest levels and predicting learning effectiveness. This is done through different specialized networks and machine learning techniques); generating outputs from the computer vision and the machine learning models, wherein the outputs are generated based on the fused data (this system inherently performs data fusion because it integrates multiple sources of data, like optical sensor data in ¶26, physiological data in ¶33, and performance data in ¶46), the outputs including measurements of a user interest, a user engagement (gauge skill levels, engagement levels, and interest; 0057 and 0047), and a learning effectiveness (pass/fail with success rate; 0046), each computed independently and updated continuously (this type of modularity is a core principle of data fusion and is often applied across the multiple and diverse data sources); presenting the measurements in a user interface to a supervisor (results presented to teacher/trainer; 0056), the interface including actions and visual indicators that convey attention or warning conditions (visual indication as a warning notice; ¶50), and causing a visual and/or audible alert when a measurement indicates a warning condition (visual indication as a warning notice; ¶50). Regarding the additional limitations of claim 11, Dolsma discloses a non-transitory storage medium having stored therein instructions that are executable by one or more hardware processors to perform operations (0061).
Dolsma does not explicitly disclose automatically changing a mode of digital content delivery between video, slide, and interactive formats when a measurement of engagement or interest remains below the individualized threshold; and executing at least a portion of the analytics locally on an edge processor associated with a user device to reduce transmission latency and protect user data privacy while transmitting only aggregated engagement, interest, and effectiveness metrics to the supervisor interface.
However, Chetlur uses machine learning techniques to facilitate refining and improving classifications of respective parts of a presentation. Chetlur describes presenting information through a computer display, including a live or recorded lecture presented by a professor to a group of students, a slideshow, or a recorded audible description. Chetlur teaches changing a mode of digital content delivery between video, slide, and interactive formats (a presentation can include a live or recorded lecture, a slideshow, or even a recorded audible description; ¶16) when a measurement of engagement or interest remains below the individualized threshold (although neither Dolsma nor Chetlur explicitly mentions falling below a dynamically adjusted threshold, this is a core principle of adaptive learning systems and is often implemented to provide a more efficient and engaging learning experience).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Dolsma to implement the teaching of Chetlur for the benefit of displaying content in different ways to adapt the teaching style depending on what works best for each student.
Porambage focuses on how edge computing, which uses an edge processor, works and the principal benefits it provides when implemented. Porambage teaches executing at least a portion of the analytics locally on an edge processor associated with a user device to reduce transmission latency and protect user data privacy while transmitting (benefits of using edge computing include lowering the amount of traffic through the infrastructure and reducing latency for applications and services; Page 2962, Section A, last paragraph) only aggregated engagement, interest, and effectiveness metrics to the supervisor interface (this is simply applying the function of an edge processor to the metrics of the present invention).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Dolsma to implement the teaching of Porambage for the benefit of providing low latency, reduced bandwidth usage and costs, enhanced data privacy and security, and improved operational reliability.
Regarding claims 2 and 12, Dolsma discloses wherein the user is a student (0007) and the supervisor is an educator (0007), further comprising presenting feedback and/or actions in the user interface (exercise and tasks; 0056).
Regarding claims 3 and 13, Dolsma discloses wherein receiving input includes receiving data from sensors (0026), the data including physiological data, environment data, and/or user data (physiologic; 0009).
Regarding claims 5 and 15, Dolsma discloses wherein the computer vision based on input from a camera determines a real-time status of the user (real-time diagnosis; 0058), and the determined status data are temporally synchronized and weighted with other sensor inputs during the data fusion process (temporal synchronization and weighting with other sensor inputs are inherent to data fusion).
Regarding claims 6 and 16, Dolsma discloses wherein the computer vision output, the learning trace (student’s learning history; paragraph 0054 part 310), the user profile (student profile; paragraph 0054 part 310), and sensor data are temporally synchronized and weighted in the data fusion module (inherent to data fusion), and the fused data is input to the machine learning models (knowledge trace is a machine learning model; 0046) configured to generate outputs including measurement metrics (knowledge trace is a machine learning model; 0046), real-time feedback on the user engagement (0058), predictions on future performance (provide a prediction of student; 0014), real-time feedback on the user interest (0058), and adaptive assessments (0047).
Regarding claims 7 and 17, Dolsma discloses wherein the adaptive assessments are based on one or more of user preferences, the user interest, user background, user knowledge, past learning records, past growth percentiles, skills, reactions, learning styles, aptitude test scores, health status, race, gender, age, and/or income (past performance data and user profile; paragraph 0054 part 310).
Regarding claims 8 and 18, Dolsma discloses further comprising determining the learning effectiveness based on one or more of a student growth percentile, a progress against standards, a number of students that successfully complete training, a pass/fail rate of knowledge assessments; social media posts of students, or combination thereof (pass/fail with success rate; 0046).
Regarding claims 9 and 19, Dolsma discloses further comprising collecting the input using one or more of a position sensor, a presence sensor, a microphone, physiological sensors, a camera motion sensor, a camera, and/or a gyro sensor (paragraph 0054 part 310), wherein data from the input is used to measure an engagement level and an interest level (paragraph 0054 part 309).
Regarding claims 10 and 20, Dolsma discloses further comprising transmitting a teacher feedback notification identifying the detected baseline deviation and recommended instructional adjustments (there is transmission of feedback in ¶52, and the remainder is a description of how computer vision works; computer vision systems analyze visual data such as student engagement levels, body posture, gaze direction, and performance on specific tasks to identify when a student is struggling or deviating from an optimal learning path) in addition to the automatic content delivery change.
Claims 4 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Dolsma in view of Chetlur in view of Porambage in view of Movellan et al. (US9008416B2; hereinafter Movellan).
Regarding claims 4 and 14, Dolsma discloses wherein receiving input includes receiving a learning history of the user (paragraph 0054 part 310), wherein the physiological data includes one or more of heart rate, galvanic skin response, eye fixation times, number of fixations, eye saccades, blink rates, pupil dilation, voice stress, hand or finger pressure on a mouse, hand position and movement, relative blood flow, muscle tension, heart rate, temperature, somatic activity, galvanic skin response, brain waves, and/or electromyography (tactile pressure exerted on a tactile sensing device and more; 0009). Dolsma does not disclose demographics of the user.
However, Movellan teaches the use of demographics (col. 1, lines 49-59) with various ethnicities and different ages, and with a range of facial artifacts.
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Dolsma to implement the teaching of Movellan for the benefit of having a large sample size because these machine learning techniques require large and carefully collected datasets of training examples.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOSE ANGELES whose telephone number is (703)756-5338. The examiner can normally be reached Mon-Fri 8am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Dmitry Suhol can be reached at (571) 272-4430. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JOSE ANGELES/Examiner, Art Unit 3715
/DMITRY SUHOL/Supervisory Patent Examiner, Art Unit 3715