DETAILED ACTION
The present application is being examined under the pre-AIA first to invent provisions.
This Office action is in response to the claims filed on 7/28/2021 in application 17/325,368.
The application is a continuation of 13/747,363, filed 1/22/2013.
A Terminal Disclaimer was approved on 4/20/2023.
Claims 1-8, 10-19, 35, and 36 are pending. Claims 9 and 20-34 are cancelled.
Claim 36 is newly added.
Claim Rejections - 35 USC §101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-8, 10-19, 35, and 36 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. The claims are directed to a statutory class (i.e., product, method) (STEP 1: YES).
Claims 1 and 18 recite data related to human engagement in a learning process, along with analyzing and displaying the processed data, all describing certain methods of organizing human activity. The recited functions are what a tutor of the pre-computer era would manually perform in assessing a student during an exercise, and thus they manage personal behavior or interactions between people. Additionally, the recited steps or functions are mental processes involving observation and evaluation that a tutor could perform in the mind, or with the aid of pen and paper, based on the captured data. The claims can be characterized as reciting a mental process of collecting data (e.g., images and video), analyzing that data (e.g., generating learner engagement data), and providing an output based on that analysis (e.g., providing the learner engagement report), and are thereby abstract under Electric Power Group. Other steps related to the educational process, such as capturing events, developing reports, sending alerts, and providing at least one learner engagement report indicating the overall engagement levels of the plurality of learners, are considered certain methods of organizing human activity under the MPEP's citation of teaching people as an example of the same category (STEP 2A, prong 1: YES).
The claims recite additional elements, including a camera configured for identification and recording of measurements of learners' behavior to generate a report for an instructor contemporaneously with the capture by the optical sensor of generic images or videos of the at least one learner, but these are not significantly more and do not make the claims patent eligible. Newly added claims 35 and 36 identify an optical sensor with pre-processing capability to remove or filter background data, and an optical sensor device configured to process hand features based on depth data. These machine implementations of generic, updated camera functionality and depth-data features neither improve the technology nor meaningfully limit the abstract idea beyond a particular technological environment. Further, they do not result in an improvement to the functioning of a computer, or to any other technology or technical field. The use of a 3D camera and depth-data analysis is generally restricted to "capturing and computer processing" of information and "displaying of resulting information" related to the analyzed data. Extracting features indicative of activities of the at least one learner from the captured images or videos and generating learner activity data are functions of generic capture devices that include background features in the 3D images. The claims also recite merely providing a user interface for learning material, which is generally available in the art. To the extent any additional features for these activities could be considered, the recited sensors collecting or capturing in-depth, detailed user data amount to merely insignificant extra-solution activity appended to a judicial exception. See, e.g., Yu v. Apple, holding that using a camera to take a picture does not render subject matter patent eligible.
The next step of outputting and sending an alert does not require anything more than generating and displaying contemporaneous information related to the processed data, or alternatively just sending an output to a user interface, which is known in the art at a high level of generality, and thus does not practically apply the abstract idea. The claims do not include limitations that integrate the judicial exception into a practical application (Step 2A, prong 2: NO).
Additionally, all the generic computer functions are directed to collecting and retrieving information that is compared to known standards, as one human being evaluating another could do with pen and paper. Comparison of learners, or of collaborative work without physical interaction, is now common using only known electronic devices, as also indicated in Paras. 0002-0007 of the background of the application. These functions are well-understood, routine, and conventional, and do not offer meaningful limitations beyond generally linking the abstract idea identified above to a particular technological environment, i.e., a computer environment. For example, in Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015), and OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93, the activities of storing and retrieving information in the memory of consumer electronics for field-of-use purposes were recognized as generic computer functions. The elements claimed in addition to the abstract idea, including the capture device, the computing device, and computer code executing the abstract idea in real time, are generic, well-known, and conventional, as evidenced by their limited disclosure in the instant specification (Berkheimer findings). The additional capture device, such as the camera, is generic field-of-view capture equipment in the computer arts. These elements are well-understood, routine, and conventional and do not offer meaningful limitations beyond generally linking the abstract idea identified above to a particular technological environment. The elements recited in the claims do not improve the functioning of a computer itself so as to overcome the abstract idea rejection (Step 2B: NO).
The dependent claims 2-17, 19-20, 35, and 36 of the instant case, when analyzed as a whole, are held to be directed to ineligible subject matter and are rejected under 35 U.S.C. § 101 because the additional recited limitations fail to establish that the claims are not directed to an abstract idea. For instance:
Claims 2-4, 7, 8, 11, and 19-20 highlight instruction requests for learner engagement, indicating the use of video activity data and dimensional values, audio data, input data, attention drawing, and corrections and accuracy for a contemplated order of smart blocks.
Claims 5, 6, 8-10, 12-17, 35, and 36 cite analysis based on facial features, engagement level, alert generation, comprehension, depth-based features of camera equipment, etc.
The above is merely an involvement of activities generally categorized as insignificant extra pre- and post-solution activity relating to the abstract idea of monitoring, collection, comparison, rule application, filtering, outputting, etc. These are abstract ideas in themselves (Step 2A, prong 1: YES). The recitations do not improve the functioning of a computer itself so as to qualify as significantly more (Step 2A, prong 2: NO). They are based on generic computer processing of comparison, calculation, and aggregation of information from components and peripherals such as input devices, output interfaces, and interactive network elements, with the use of known sensors as described above (Step 2B: NO). Hence, they are not patent eligible.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-15, 18-19, and 35 are rejected under pre-AIA 35 U.S.C. 103(a) as being unpatentable over US Patent No. 5944530 to Ho et al. (Ho) in view of US Patent No. 8754924 B1 to Shane, and further in view of US Patent Application Publication No. US 20170039876 A1 to Alyuz Civitci et al. (Alyuz Civitci).
Claims 1 and 18: Ho teaches a computer-implemented learning system for monitoring learner engagement by processing feature data extracted by an optical sensor device, and a learning method for monitoring the activity of a plurality of learners (Fig. 1 element 108; col. 1, lines 55-58: interactive learning system and method for students' behaviors with study materials) comprising:
(a) at least one capture optical sensor device for monitoring activity of a plurality of learners in an operational field of view, the optical sensor device configured to acquire a plurality of image and video data for the plurality of learners, the plurality of image and video data comprising background data and foreground data (Fig. 2B elements 170, 180: capturing student actions by keyboard and camera, generating learning activities; col. 8, lines 40-45; col. 9, lines 13-40: a digital camera can have an operational field of view and could be configured to acquire a plurality of image and video data comprising face orientation, which may include background data and/or foreground data); and
(b) at least one processor configured (¶ 0012 processing computer) to:
(i) monitor the activity of each learner of the plurality of learners during a learning event using the at least one capture device to capture images or videos of at least one learner (col. 7, lines 5-18: monitoring behavioral activities and capturing video or optical sensor images of learner behaviors), extract features indicative of activities of the at least one learner from the captured images or videos (col. 8, lines 28-39: wherein one capture device comprises an optical sensor that captures images or videos of the at least one learner, with features extracted from a focused operational field of view during a learning event for specific measurements), and generate learner activity data associated with the at least one learner based on the extracted features indicative of activities of the at least one learner (col. 8, lines 18-20: indicative of activities polled);
(ii) generate learner engagement data based upon the data indicative of the activities of the at least one learner, the learner engagement data being indicative of how engaged the learner is during the learning event (col. 11, lines 5-7: learner engagement purposely captures data on learner activity and degree of concentration, i.e., based on an engagement level in an event and behaviors within the learning environment);
wherein the learner engagement is based on respective assignments to the detection of predetermined facial features, posture features, head features, hand features, or a combination thereof of the at least one learner in the extracted features (col. 9, lines 26-37: facial feature analysis to determine engagement levels, compensating for orientation and shapes);
(iii) generate at least one learner engagement report based upon the learner engagement data (col. 11, lines 5-7: printing a report indicating engagement or student degree of concentration); and
(iv) provide the at least one learner engagement report to a computing device associated with an instructor (col. 11, lines 6-7: report generation), the learner engagement report having a display indicating the overall engagement levels of the plurality of learners (Fig. 2A elements 154, 180: display for reports for a plurality of learners). Ho does not explicitly teach the learner engagement report being provided to the computing device associated with the instructor contemporaneously with the capture by the optical sensor of the images or the videos of the at least one learner.
Ho does not explicitly teach a video data capture device generating analytical learner activity data. Shane, in the same field of engagement level determination, teaches capturing video data indicative of learning activities (col. 10, lines 15-20: virtual classroom technology and camera capture of video files and graphical animations configured for 3-dimensional features as known in the art). Hence, it would have been obvious to one of ordinary skill in the art, at the time the invention was made, to have allowed for capturing video data indicative of learning activities, as described by Shane, in the computer learning system configuration of Ho in order to provide enhanced learner engagement feature analysis.
Ho in combination with Shane does not explicitly teach a computing device associated with an instructor, wherein the learner engagement report is provided to the computing device associated with the instructor contemporaneously with the capture by the optical sensor of the images or the videos of the at least one learner. Alyuz Civitci, in the same field of learner engagement level determination, teaches capturing video data indicative of learning activities associated with the instructor contemporaneously with the capture by the optical sensor of the images or the videos of the at least one learner (Paras. 0013, 0099: processors running modules that receive indications of a learner's interactions and physical responses contemporaneously with interactions with the educational program; Para. 0030: a labeler could be an instructor equipped with tools to capture optical sensor images collected at learner engagement states in order to update/calibrate the artificial neural network tool for a report associated with the learner). Hence, it would have been obvious to one of ordinary skill in the art, at the time the invention was made, to have provided the learner engagement report to the computing device associated with the instructor contemporaneously with the capture by the optical sensor of the images or the videos of the at least one learner, as described by Alyuz Civitci, in the instructor-driven computer learning system of Ho so as to enable real-time, enhanced learner engagement feature analysis.
Ho in combination with Shane does not explicitly teach learner engagement based on processing a weighted combination of the learner activity data comprising respective weights assigned to the detection of predetermined facial features, posture features, head features, hand features, or a combination thereof of the at least one learner in the extracted features. Alyuz Civitci, in the same field of learner engagement level determination, teaches processing a weighted combination of the learner activity data comprising respective weights assigned to the at least one learner in the extracted features (Para. 0036: processors running modules that receive indications of a learner's interactions and physical responses contemporaneously with interactions with the educational program may utilize a neural network as a set of adaptive and adjustable weights, which can take a variety of inputs and associate them with a particular current learning state of a learner). Hence, it would have been obvious to one of ordinary skill in the art, at the time the invention was made, to have based learner engagement on processing a weighted combination of the learner activity data comprising respective weights assigned to the detection of predetermined facial features, posture features, head features, hand features, or a combination thereof of the at least one learner in the extracted features, as described by Alyuz Civitci, in the instructor-driven computer learning system of Ho as modified by Shane so as to obtain an outcome representative of a broader spectrum of students.
In re Claims 2 and 19: Ho teaches the learning system of claim 1 and the method of claim 17, wherein at least one learner engagement report is provided to the instructor or a supervisor in real time such that the instructor is able to determine the current engagement level of the at least one learner from the learner engagement report (col. 11, lines 6-18: level of individual learner concentration analysis revealed in a report for rewards by instructors).
In re Claim 3: Ho teaches the system of claim 1, wherein the optical sensor device includes a video capture device and the learner activity data includes video learner activity data (col. 11, lines 40-41).
In re Claims 4 and 20: Ho teaches the system of claim 1, but does not include at least one capture device configured to include a video capture device identifying three-dimensional video data. Shane, in the same field of engagement level determination, teaches capturing three-dimensional video data, where the learner activity data includes three-dimensional video data indicative of activity (col. 10, lines 15-20: virtual classroom technology and camera capture of video files and graphical animations configured for 3-dimensional features as known in the art). Hence, it would have been obvious to one of ordinary skill in the art, at the time the invention was made, to have allowed for capturing three-dimensional video data such that the learner activity data includes three-dimensional video learner activity data, as described by Shane, in the computer learning system configuration of Ho in order to provide enhanced engagement experiences.
In re Claim 5: Ho teaches the system of claim 1, wherein the at least one processor is configured to analyze at least one facial feature (Ho: col. 9, lines 18-23: facial orientations) of the at least one learner to determine whether that learner is engaged, and to generate the learner engagement data based upon the analysis of the at least one facial feature (col. 9, lines 26-37: facial feature analysis to determine engagement levels).
In re Claim 6: Ho teaches the system of claim 13, wherein at least one posture of the at least one learner is analyzed to determine whether that learner is engaged, and the learner engagement data is generated based upon the analysis of the at least one posture (Ho: col. 10, lines 15-21: following rules and postures for facial and eyelid combinations determines engagement).
In re Claim 7: Ho teaches the system of claim 1, but does not include an audio capture device. Shane teaches an audio capture device (col. 12, lines 52-55: audio capture device manipulating audio feeds, activating a microphone, and alerting notification) to include in the learner data indicative of activity. Therefore, it would have been obvious to one of ordinary skill in the art, at the time the invention was made, to have allowed for one capture device to include an audio capture device whose output is included in the monitored learner activity data, as described by Shane, in the computer method and instructor-input-based learning system of Ho so that an enhanced variety of learner activity data could be studied.
In re Claim 8: Ho teaches the system of claim 1, wherein the at least one capture device (the optical sensor) includes at least one processor configured to capture learner input, and the data indicative of learner activity includes learner input activity data (col. 1, lines 65-67: captured learning activity comprises the learner's input into the computer).
In re Claim 9: Ho teaches the system of claim 1, wherein the at least one learner comprises a plurality of learners, and the processor is configured to generate the at least one learner engagement report based at least in part on learner engagement data from the plurality of learners (col. 7, lines 3-5, 42-45: periodic pattern monitoring; sampling, i.e., the engagement or concentration levels of a plurality of learning students are monitored and reported at intervals), based at least in part on engagement data from the plurality of learners (col. 7, lines 32-36: as a function of time, i.e., reporting data could be polled during speed determinations, etc.).
In re Claim 10: Ho teaches the system of claim 1, but does not include the at least one learner comprising a plurality of learners who are located at different geographical locations. Shane teaches at least one learner comprising a plurality of learners who are located at different geographical locations (col. 19, lines 3-5: geographical locations for computing devices). Therefore, it would have been obvious to one of ordinary skill in the art, at the time the invention was made, to have allowed for the at least one learner to comprise a plurality of learners located at different geographical locations, as described by Shane, in the computer method and instructor-input-based learning system of Ho so that an enhanced variety of learners at distant locations could benefit.
In re Claim 11: Ho in combination teaches the system of claim 1, but does not indicate that the processor is configured to generate at least one alert to draw attention to the learning event. Shane, however, teaches providing an alert for drawing attention to the learning event, with input-set alerts in the learning system for targeted learners (col. 21, lines 33-35: representations of notifications, or alerts, of stimuli that require the user's attention by generating appropriate user interface elements, e.g., a sidebar). Therefore, it would have been obvious to one of ordinary skill in the art, at the time the invention was made, to have allowed for sending an alert drawing attention to the learning event, as described by Shane, in the computer-method-based learning of Ho, so that the generated alert keeps engagement within an acceptable range.
In re Claim 12: Ho teaches the system of claim 11 without indicating an explicit alert type based on learner engagement. Shane teaches at least one processor configured to generate at least one alert based at least in part on learner engagement data or on a query (Paras. 0002, 0005: alerts depend in part on detected stimuli in the environment of the learner engagement data). Hence, it would have been obvious to one of ordinary skill in the art, at the time the invention was made, to have allowed for engagement data, activity level, comprehension, and identification of risk by an instructor, as described by Shane, in the learning system and method of Ho in order to set thresholds of effectiveness.
In re Claim 13: Ho teaches the system of claim 11 without indicating an explicit alert type for a non-engaged learner. Shane teaches at least one processor configured to generate at least one alert targeted to a non-engaged learner (col. 21, lines 33-40: triggering to activate or alerting the user to positive inferences determined from the detected stimulus). Hence, it would have been obvious to one of ordinary skill in the art, at the time the invention was made, to have allowed for engagement data, activity level, comprehension, and identification of risk by an instructor, as described by Shane, in the learning system and method of Ho in order to set thresholds of effectiveness.
In re Claim 14: Ho teaches the system of claim 12 without indicating an explicit alert type for at-risk engagement. Shane teaches at least one processor configured to generate at least one alert targeted to a learner identified as being at risk of not being engaged (Shane: col. 17, lines 1-5: alerting the student to negative inferences determined from the detected stimulus based on lesson evaluation or student inattentiveness, such as "I am confused"). Hence, it would have been obvious to one of ordinary skill in the art, at the time the invention was made, to have allowed for engagement data, activity level, comprehension, and identification of risk by an instructor, as described by Shane, in the learning system and method of Ho in order to set thresholds of effectiveness.
In re Claim 15: Ho teaches the system of claim 11, wherein the alert is targeted to the at least one learner selected by the instructor (Ho: col. 11, lines 42-44: instructional authority acting accordingly on a targeted learning activity of material presentation).
In re Claims 35 and 36: Ho teaches the system of claim 1, wherein the optical sensor device is configured to pre-process the plurality of image and video data, but does not explicitly indicate that it is capable of removing the background data upon acquiring the plurality of image and video data, based on an optical sensor device configuration that processes hand features based on depth data. Shane teaches at least one processor configured to generate, for at least one targeted learner, quality and frame-rate video streams with filtering and depth features (Shane: col. 1, lines 54-67; col. 2, lines 1-13: video filtering to remove additional non-required background data, possible with a configuration that processes hand features based on depth data). Hence, it would have been obvious to one of ordinary skill in the art, at the time the invention was made, to have allowed for an optical sensor device configured to pre-process the plurality of image and video data to remove the background data upon acquiring the plurality of image and video data and to filter features based on depth data, as described by Shane, in the learning system and method of Ho in order to facilitate video streams.
Claims 16 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over US Patent No. 5944530 to Ho et al. (Ho) in view of US Patent Application Publication No. US 20130330705 to Shane et al. (Shane), in view of US 20170039876 A1 to Alyuz Civitci et al. (Alyuz Civitci), and further in view of US 20080227079 to Boehme et al. (hereinafter Boehme).
In re Claim 16: Ho in combination with Shane and Alyuz Civitci teaches the system of claim 1 without explicitly identifying or determining learner comprehension based on the activity or engagement data of a learning event. Boehme, however, teaches determining and providing learning materials based on comprehension level and at least on activity and engagement data (Para. 0099: questions to a group to determine engagement; Para. 0052: comprehensive approach to input for comprehension; Para. 0057: performance activity is a basis of scoring). Hence, it would have been obvious to one of ordinary skill in the art, at the time the invention was made, to have allowed for engagement data and activity-level comprehension by an instructor, as described by Boehme, in the combined Ho and Shane system and method in order to set thresholds for effective determination of comprehension of learning material.
In re Claim 17: Ho in combination with Shane teaches the system of claim 16 without explicitly identifying the provision of learning material activity and engagement based upon a determined learning comprehension level. Boehme, however, teaches determining and providing learning materials based on comprehension level (Para. 0099: questions on group activities to determine activity level; Para. 0052: comprehensive approach to input; Para. 0057: learning activity for performance is a basis of scoring). Hence, it would have been obvious to one of ordinary skill in the art, at the time the invention was made, to have allowed for engagement data and activity level to provide learning material based on student comprehension by an instructor, as described by Boehme, in the combined Ho and Shane system in order to set thresholds for effective determination of comprehension of learning material.
Response to Arguments/Remarks
Applicant's arguments/amendments filed on 2/24/2025 have been considered but are not persuasive to overcome the 35 U.S.C. § 101 and 35 U.S.C. § 103 rejections.
A new ground of rejection is made as necessitated by the amendments to the claims.
Claim Rejections - 35 U.S.C. § 101
On pages 8-9 of the arguments/remarks filed 9/17/2025, Applicant indicated that the presently claimed combination of features performs the technical function of reviewing, weighing, and generating an engagement report in a manner that improves the efficiency of the learning management system; that this is a technical improvement of a learning management system; and that, given the complexity and sheer amount of data to be combined, it cannot be practically performed in or by the human mind.
The Examiner, however, finds the claims abstract in that they are directed to the mental process of collecting data (e.g., image and video data), analyzing that data (e.g., processing the data and generating engagement data), and providing outputs based on that analysis (e.g., providing the engagement report), as held by the CAFC in, e.g., Electric Power Group, University of Florida Research Foundation, and/or Yousician (non-precedential). The claims could also be characterized as abstract as a method of organizing human activity in terms of a method of teaching/training human beings (cite MPEP section regarding same). To the extent that the claims recite additional elements, e.g., computing devices, an optical sensor device, and/or capturing/processing data in real time ("contemporaneously with the capture by the optical sensor of the images or the videos of the at least one learner"), these are all well-known, routine, and conventional devices and/or software techniques, as evidenced by the limited disclosure in Applicant's specification (cite spec sections) in regard to how to make and/or use these devices, and they thereby do not constitute "significantly more" than the claimed abstract idea(s). None of these devices is improved qua device; in other words, none of them will, e.g., run faster, use less power, and/or be manufactured more cheaply as a result of embodying Applicant's invention.
In regard to the argument regarding "practical application," it is not part of the Mayo test but instead is a burden placed on Examiners making a 101 rejection under Mayo, and it is not a reason to withdraw the rejection unless Applicant cites case law indicating why the claimed invention is patent eligible. Trading Technologies could be cited for the proposition that providing a visual display that improves human performance is not patent eligible because it is not a technological improvement ("This invention makes the trader faster and more efficient, not the computer," emphasis original) (17-2257, Opinion, 4-18-2019).
Applicant previously, on pages 11-14 of the arguments/remarks filed 2/24/2025, indicated that the independent claims had been amended to recite additional computer features. The amendments recite applying a weighted combination of detected features to determine engagement levels, with different weights assigned to various types of learner activity. The weighting feature is based on predefined criteria and provides a structured engagement assessment in which specific learner responses can be assigned different weights, allowing the system to account for variations in how different behaviors correlate with engagement. For example, sustained eye contact with the screen may be assigned a higher weight than a brief glance. Similarly, a head tilt combined with a brow movement may indicate confusion and thus be weighted differently than an upright posture with no facial movement. Applicant argues that the structured assessment provides more flexibility and enables an accurate assessment of learner engagement, distinguishing it from a simple binary classification of "engaged" or "not engaged." The weighted assessment feature can include assigning numerical values or probabilities to detected features and computing engagement levels based on those values. Applicant contends that a human observer, even with pen and paper, would not be able to apply weighted feature processing in which different captured features are assigned varying levels of significance in real time, and that the claimed non-conventional tracking of multiple engagement indicators, maintaining consistency in weighting, and dynamically adjusting the assessment as new data is introduced exceeds the capabilities of manual human evaluation, even if assisted by a generic device.
Examiner respectfully traverses and has gleaned from the recitation of the claim language that the weight assignments are based on human observation and judgment. Weighted analysis and scoring for engagement in a model is a mathematical relationship and calculation. This appears to be a segmentation process to refine the evaluation based on human opinions. Hence the 35 U.S.C. 101 rejection is maintained.
Applicant again, on pages 6-9 of the previous argument/remarks of 1/2/2024, indicated that the monitored learners are not in the same physical location as the teacher. This depends on internet connections and reflects the generic, well-known, and conventional advantage of embodying an abstract idea using networked computers; the claims make no improvement in regard to that ability. The claimed invention, for example, merely leverages networking speed to allow all sorts of monitoring to take place faster. When networked computers operate together, things can be done at a distance that could not be done without computers, but that is not an improvement to computing technology per se. Just as when “the internet” was claimed in Ultramercial in regard to providing advertising, the CAFC held that it did not render the subject matter patent eligible. The CAFC has held that the speed gains made by embodying an abstract idea in computer code do not render subject matter patent eligible. See, e.g., the CAFC’s holding in Bancorp Services v. Sun Life (2011-1467; 7/26/12), slip op., pages 20-21.
The claimed invention remains directed to an abstract idea without significantly more. Applicant has added limitations requiring some base features of engagement data in the independent claims. Though this may not be routine activity in the art, it remains a part of managing personal behavior common in the educational art. These are concepts performed in the human mind involving observation and judgment.
There is no practical and unconventional improvement to computer-related technology in performing a known computer application. Hence the 35 U.S.C. 101 rejection is maintained.
Applicant also submits that the claimed features require interaction with computing devices, such as a capture device for capturing images or video and a processor to extract features from the images and videos. In particular, generating learner engagement data based on extracted features, generating a learner engagement report based on the learner engagement data, and providing the learner engagement report to a computing device associated with an instructor can only be performed within a computing system and thus, Applicant argues, are technical solutions to particular technical problems. Applicant respectfully disagrees with the rejection and submits that the outcomes are achieved through different processes, while agreeing that a tutor may observe the student, understand their behavior, and assess whether the student is engaged, noting that down on paper or keeping the result in mind. Examiner respectfully traverses and notes that the recited steps or functions, though involving different processes with varied detections, are in part the mental observations, evaluations, and judgments a tutor must make throughout the class teaching process. Report writing is the use of pen and paper to document the observed evaluation.
Applicant has further indicated, similarly, that the claims recite using at least one capture device for monitoring activity of at least one learner to capture images or videos of the at least one learner, extract features indicative of activities of the at least one learner from the captured images or videos and generate learner activity data associated with the at least one learner, generate learner engagement data based on the learner activity data, generate at least one learner engagement report based on the learner engagement data, and provide the at least one learner engagement report to a computing device associated with an instructor. Therefore, Applicant argues, the claimed subject matter can only be performed within a computing system and thus constitutes technological processes. Examiner finds that, though there are concrete, palpable, and tangible limitations of computer use in the aforementioned claimed subject matter, Examiner could not find the subject matter, as a whole, to be integrated into a practical application. No improvement to the “functioning of the computer itself” or to “any other technology or technical field” is found. The generic capture devices extract images and videos indicative of learner activities to analyze and determine how engaged a learner is during a specified event. A record or report could be generated by known software evaluative steps to categorize a learner’s behavior. This is unlike a technology-based solution in which improved filtering of content is executed on data to overcome existing problems that could not otherwise generally be filtered, as in the Bascom case.
Claim Rejections - 35 U.S.C. § 103
Applicant alleged on pages 10-12 of the argument/remarks of 9/17/2025 that the prior art Alyuz Civitci does not appear to disclose or suggest weighting various learner activities, alone or combined. In particular, Applicant indicates that Alyuz Civitci mentions body language indicating that the learner is on or off task, but does not suggest which facial features, hand features, and/or combinations with computer interaction may provide for body language that illustrates a lack of learner engagement. Examiner respectfully traverses and would like to highlight that the prior art Alyuz Civitci uses artificial neural network (ANN) technology with adaptive weights to recognize learner engagement. This is an AI technique able to approximate nonlinear functions of outputs based on given inputs. Hence the ANN may be able to learn to associate a variety of inputs (including various weights) with a particular current learning state of a learner for a certain engagement level of that learner.
Previously, the prior art Ho et al.’s engagement assessment was primarily based on rules that identify behavior patterns, such as "...if for a predetermined period of time, the inputs have been entered outside the window where the study materials reside, the student has lost concentration" (col. 8, lines 24-27) or "if the speed of the student's volitional inputs across a predetermined period of time is significantly lower than the reference speed, the student has lost concentration" (col. 8, lines 4-7). Thus, Ho operates on fixed thresholds for engagement assessment rather than dynamically adjusting the importance of different engagement indicators based on a weighted combination. In contrast, claim 1 provides an adaptive method by assigning respective weights to facial expressions, postural changes, and hand movements, rather than relying on predefined, static behavioral thresholds as in Ho.
Examiner respectfully traverses and indicates that, though the prior art combination depends on behavioral thresholds for engagement assessment in monitoring a student’s concentration-sensitive behavioral level, the prior art combination nonetheless provides an adaptive method.
Though the prior art does not directly assign weights to facial expressions, postural changes, and hand movements, it remains within the judgment of a tutor, or of a person of ordinary skill in the art, to assign weights to individual monitored behavioral elements for an overall relevant score as deemed necessary for the objective of the concentration level evaluation. The 35 U.S.C. 103 rejection is maintained.
The other argument, that new claim 35 provides a video data capture device generating analytical learner activity data by modification of background data, concerns a well-known technique. The learner engagement report is provided to the computing device associated with the instructor contemporaneously with the capture by the optical sensor of the images or the videos of the at least one learner, as claimed.
Applicant asserted on pages 9-11 of the previous argument/remarks of 7/4/2024 that none of the prior art Ho et al., Shane, or Alyuz Civitci provides a video data capture device generating analytical learner activity data, nor that the at least one learner engagement report is provided to a computing device associated with an instructor, the learner engagement report having a display indicating the overall engagement levels of a plurality of learners, wherein the learner engagement report is provided to the computing device associated with the instructor contemporaneously with the capture by the optical sensor of the images or the videos of the at least one learner, as claimed.
Examiner respectfully traverses and indicates that it is the combination of art that reads on the subject matter. While each individual reference teaches an aspect of a learner engagement report having a display indicating the overall engagement levels of a plurality of learners, the combined teachings disclose the learner engagement report display indicating the overall engagement levels of a plurality of learners beyond what each individual prior art reference does (Ho: Fig. 2A, elements 154, 180, display for reports for a plurality of learners;
Alyuz Civitci, in the same field of learner engagement level determination, teaches capturing video data indicative of learning (paragraphs 0013, 0030, 0099), with processors running modules that indicate interactions and physical responses of a learner, including contemporaneous interactions with others who could be an instructor equipped with tools to capture optical sensor images collected at the learner engagement level).
Applicant’s assertion on pages 10-11 of the argument/remarks of 1/2/2024 that the prior art Ho et al. or Shane does not provide the report being provided “contemporaneously” with the capture is respectfully traversed. It is merely an outcome of applying an idea by embodying it in computer code, making the computer code run fast; fast enough that processes often occur in real time or “contemporaneously.” That is a generic, well-known, and conventional advantage of computers and not something that is a novel invention. In other words, it does not generally improve the computing device to be able to run any computer program faster. And the CAFC has held that the speed gains made by embodying an abstract idea in computer code do not render subject matter patent eligible. See, e.g., the CAFC’s holding in Bancorp Services v. Sun Life (2011-1467; 7/26/12), slip op., pages 20-21. A new ground of rejection based on additional prior art is provided here.
Applicant asserted before that neither the prior art Ho et al. nor Shane teaches that the learner engagement data is based on detection of predetermined facial features, posture features, head features, hand features, or a combination thereof of the at least one learner in the extracted features. But Examiner respectfully traverses and has provided a ground of rejection as above.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SADARUZ ZAMAN whose telephone number is (571)270-3137. The examiner can normally be reached M-F 9am to 5pm CST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Xuan Thai can be reached on (571) 272-7147. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/S.Z/Examiner, Art Unit 3715
December 27, 2025
/XUAN M THAI/Supervisory Patent Examiner, Art Unit 3715