DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
The amendment filed October 13, 2025 has been entered. Claims 25-38 remain pending in the application; claim 39 is withdrawn as discussed below. Claims 1-24 are noted as cancelled, and claims 25-39 are noted as newly added. Applicant’s amendments to the drawings and claims have overcome all previous objections and 112(b) rejections set forth in the Non-Final Office Action mailed July 29, 2025, and all objections and rejections therein have been withdrawn. However, new objections and rejections are noted below.
Drawings
The replacement drawings were received on October 13, 2025. These drawings are acceptable.
Election/Restrictions
Claim 39 is withdrawn from further consideration pursuant to 37 CFR 1.142(b) as being drawn to a nonelected invention, there being no allowable generic or linking claim. In the reply filed on June 18, 2025, Applicant elected without traverse Invention I, claims 1-22, which correspond to current claims 25-38. Claims 23-24 were drawn to the nonelected invention. Per Applicant’s Remarks on page 12, claim 39 is a new independent claim including the limitations of the previously nonelected claims 23 and 24. Therefore, claim 39 is withdrawn from consideration.
Claim Objections
Due to Applicant’s significant amendments and the lack of tracked changes, Applicant’s cooperation is requested in correcting any errors in the claims of which Applicant may become aware. Examiner has noted objections below.
Claims 25, 28, 31, 32, 34, and 36 are objected to because of the following informalities:
In claim 25, line 10, “collected multimodal data” should read “the collected multimodal data”.
In claim 25, line 12, “a multimodal data fusion” should read “a multimodal data fusion model”.
In claim 25, line 19, “said the” should read “said” or “the”.
In claim 28, line 2, “device begin” should read “device being”.
In claim 31, line 18, “consistency each” should read “consistency of each”.
In claim 31, line 20, “prompt sing” should read “prompt using”.
In claim 32, line 3, “collected tin” should read “collected in”.
In claim 34, line 3, “at least o” should read “at least one”.
In claim 36, line 1, “providing at least one stimulus” should read “providing the at least one stimulus”.
In claim 36, line 3, “providing at least one stimulus” should read “providing the at least one stimulus”.
In claim 36, line 6, “providing at least one stimulus” should read “providing the at least one stimulus”.
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 25-30 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA ), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention. Claim 25 recites the following limitations “at one Hz sampling rate” in line 21, “measure blood volume pulse at 64Hz, galvanic skin response at 4 Hz, skin temperature at 4 Hz” in line 23, “inter-beat intervals derived from BVP signals” in line 24, “a plurality of motion sensors that capture three-dimensional hand movement data at 32Hz with a total acceleration calculation” in lines 25-26, “wherein the system is configured for Applied Behavioral Analysis (ABA) therapy sensors” in lines 26-27, and “to output behavioral classification targets with real-time feedback optimization” in lines 30-31. The limitations lack sufficient support in the specification and amount to new matter. Specifically, with regard to the various sampling rates, the specification never discusses or mentions sampling rates at all and therefore does not support the specific sampling rates as claimed. With regard to the “inter-beat intervals” and “capture three-dimensional hand movement data”, while the original disclosure supports generic wearable sensors for measuring heart rate/blood volume pulse and motion, the specification does not provide the necessary hardware or algorithm for performing these steps. Per MPEP 2161.01, computer implemented inventions must sufficiently disclose the algorithm and hardware for performing the claimed functions. 
The instant application fails to provide specific sensors, merely reciting IoT sensors and wearable sensors, and does not recite any algorithm for determining inter-beat intervals or capturing hand movement data. For instance, with regard to the hand movement, is the movement based on captured images/video from a camera or image sensor or accelerometers worn by a user and calculated by the computing device? What is a “total acceleration calculation”, how is it calculated, and how is the acceleration captured? Therefore, the specification fails to sufficiently disclose both the hardware and algorithm for performing the claimed functions. Finally, “Applied Behavioral Analysis (ABA) therapy sensors” and “behavioral classification targets” are not defined in the specification and are not recognized terms of art. Therefore, one of ordinary skill in the art would not understand what the claimed sensors or targets entail as the specification fails to sufficiently describe or support the limitations in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor had possession of the claimed invention. Therefore, as discussed above, claim 25 is rejected under 35 U.S.C. 112(a).
Claims 26-30 are rejected by virtue of their dependency from claim 25.
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA ), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 25-30 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA ), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 25 recites the limitations "Applied Behavioral Analysis (ABA) therapy sensors” in line 27 and “behavioral classification targets” in line 30. The limitations are not known terms of art or defined in the specification in such a way that one of ordinary skill in the art could determine the subject matter which the inventor or a joint inventor regards as the invention. Therefore, the limitations render the claim indefinite.
Claim 25 recites the limitation "the test" in line 15. There is insufficient antecedent basis for this limitation in the claim and it is unclear what applicant is referring to as “the test”.
Claims 26-30 are rejected by virtue of their dependency from claim 25.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 25-38 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Claim 25 recites a computer system for performing a process, the process including the steps of receiving and synchronizing raw multimodal data; pre-processing the collected multimodal data and producing a joint data representation vector over a defined time window; predicting the behavioral performance of the special-need student; and providing personalized insights into learning. The recited steps, under their broadest reasonable interpretation, are collecting and synchronizing multimodal data, pre-processing the data, producing a data vector, and predicting the behavioral performance of the student. The recited steps, as drafted, are a process that is a method of applying an abstract idea, specifically mental processes (evaluation (pre-processing the data and producing a data vector), judgement (predicting the performance of the student), observation (collecting and synchronizing multimodal data), opinion (providing personalized insights into learning)) and/or certain methods of organizing human activity (predicting the behavioral performance of the student; providing personalized insights into learning). If claim limitations, under their broadest reasonable interpretation, include a mental process and/or certain methods of organizing human activity, the limitations fall under the abstract ideas judicial exception and therefore recite ineligible subject matter. Accordingly, claim 25 recites an abstract idea.
The judicial exception is not integrated into a practical application because the claim does not recite additional elements that are significantly more than the judicial exception or meaningfully limit the practice of the judicial exception. The additional elements are at least one processor; a cloud server coupled to said at least one processor having a real-time processing capability, said cloud server having three interconnected building blocks implemented as programmable instructions; a plurality of heterogenous sensing devices; a multimodal data collection module; a multimodal data fusion module; using an optimized machine learning module, the machine learning module configured to train and cross-validate the test and predict the behavioral performance of the special- need student using the joint representation data vector, said third building block providing personalized insights into learning, wherein the plurality of sensing devices comprises: an IoT sensor box; a plurality of environmental sensors in the IoT sensor box, said the plurality of environmental sensors measuring carbon dioxide concentration, relative humidity, indoor temperature and light intensity at one Hz sampling rate; a plurality of wearable physiological sensors that measure blood volume pulse at 64Hz, galvanic skin response at 4 Hz, skin temperature at 4 Hz, and inter-beat intervals derived from BVP signals; and a plurality of motion sensors that capture three-dimensional hand movement data at 32Hz with a total acceleration calculation, wherein the system is configured for Applied Behavioral Analysis (ABA) therapy sensors and to incorporate artificial intelligence with assessment markers with automatic time-stamping, wherein the machine learning module has individualized multimodal predictive modeling that combines personalized categorical special-needs data with the multimodal continuous sensor data in order to output behavioral classification targets with real-time feedback 
optimization. The additional elements are merely instructions for applying the judicial exception with a generic computing device as, under their broadest reasonable interpretation, the additional elements of at least one processor, a cloud server, a plurality of sensing devices, an IoT sensor box, a machine learning model, and building blocks in the form of programmable instructions are generic computer components for performing the above method, per MPEP 2106.05(f). Further, the limitations of modules and using a machine learning module are recited at a high level of generality amounting to computer code/algorithms for performing the judicial exceptions with the computing device. Under their broadest reasonable interpretation, the additional elements are generic components and instructions of a computing device used to apply the abstract idea. Further, paragraph 0033 of the specification states that the one or more processors are at least one computing device with internet access, such as a PC or tablet. With regard to the training and implementation of the machine learning model, the Federal Circuit held in Recentive Analytics, Inc. v. Fox Corp., Fox Broadcasting Company, LLC, Fox Sports Productions, LLC, Case No. 23-2437 (Fed. Cir. 2025) that training generic machine learning models is insufficient to amount to a practical application or significantly more. Further, the sensors and the ML model are generic as evidenced by the specification lacking any specific embodiments, types, or algorithms for performing the claimed steps and functions. While the original disclosure supports generic “IoT sensors” for performing environmental and physiological data gathering, the disclosure is not specific and amounts to generic sensors that are not in a specific configuration or implementation.
Further, the recitations of the sensors and machine learning model are interpreted as attempting to merely generally link the claimed invention to computing technology and the technical field of machine learning thereby not amounting to a practical application or significantly more. As such, these additional elements are interpreted as merely instructions to apply the judicial exception. Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limitations on practicing the abstract idea. Therefore, the claim is directed to an abstract idea.
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because, as discussed above, the additional elements used to perform the process are generic computing components/device and programs used to apply the judicial exception and therefore fall under the “apply it” limitation of the judicial exception and do not amount to significantly more per MPEP 2106.05(f). Further, the limitations, taken in combination, add nothing that is not already present when looking at the elements taken individually. As such, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because, under their broadest reasonable interpretation, the additional elements do not meaningfully limit the practice of the abstract idea and do not amount to significantly more than the judicial exceptions. Therefore, claim 25 is not directed to eligible subject matter as it is an abstract idea without significantly more.
Claims 26-30 are dependent from claim 25 and include all the limitations of the independent claim. Therefore, the dependent claims recite the same abstract idea. The limitations of the dependent claims fail to amount to significantly more than the judicial exception. For example:
The limitations of claims 28 and 29 recite further instructions for applying the judicial exceptions with a generic computing device such as the computing device having internet access and being selected from a personal computer and a tablet and the cloud server storing and processing the data and learning through a generic feedback module/loop. The limitations fail to provide any teaching that integrates the judicial exceptions into a practical application or amounts to significantly more than the judicial exceptions. For this reason, the analysis performed on the independent claims is also applicable to these claims.
The limitations of claim 26 recite further abstract ideas including creating multi-modality and multi-temporal sensing datasets containing time instances of student sensor data to predict missing sensor data values within a therapy session (evaluation MP) and synchronizing unimodal measurements using a timestamp consistency algorithm (evaluation MP). As the limitations are further abstract ideas, the limitations cannot meaningfully limit or amount to significantly more than the abstract ideas of the independent claims. The additional elements of the dependent claims are further instructions for applying the judicial exceptions with a generic computing device including further modules and using a machine learning model/deep neural network to project unimodal representations into a unified multimodal space using a hidden layer. The limitations fail to provide any teaching that integrates the judicial exceptions into a practical application or amounts to significantly more than the judicial exceptions. For this reason, the analysis performed on the independent claims is also applicable to these claims.
The limitations of claims 27 and 30 recite clarification of the data included. The limitations, under their broadest reasonable interpretation, merely define/select a type of data to be manipulated, which amounts to insignificant extra-solution activity per MPEP 2106.05(g). The limitations fail to provide any teaching that integrates the judicial exceptions into a practical application or amounts to significantly more than the judicial exceptions. For this reason, the analysis performed on the independent claims is also applicable to these claims.
Accordingly, claims 26-30 recite abstract ideas without significantly more and are not drawn to eligible subject matter.
Claim 31 recites a process, the process including the steps of collecting real-time multimodal data; placing a plurality of heterogenous sensing devices onto the special-need student and into a classroom environment of the special-need student; capturing the classroom environmental data and physiological data and motion data; pre-processing the collected real-time multimodal data and producing a joint data representation vector over a defined time window; translating the retrieved special needs data; formulating an embedded vector for the special needs data; aligning the multimodal data to ensure timestamp consistency of each retrieved special needs data; providing at least one stimulus and prompt using an assessment marker and tagging a timestamp to each data inputted; formulating an input vector by combining the embedded vector for the special needs data and the translated retrieved data; and predicting the behavioral performance of the special-need student. The recited steps, under their broadest reasonable interpretation, are collecting multimodal data, placing sensors on a student and in a classroom of the student, capturing the data, pre-processing the data, producing a data vector, translating retrieved data, formulating a vector for the data, aligning the data based on timestamps, providing at least one stimulus or prompt and tagging a timestamp to the data, formulating an input vector based on the embedded vector and the translated data, and predicting the behavioral performance of the student. 
The recited steps, as drafted, are a process that is a method of applying an abstract idea, specifically mental processes (evaluation (preprocessing the data and producing a data vector, translating the retrieved data, formulating an embedded vector, aligning the data, formulating an input vector), judgement (placing the sensors, predicting the performance of the student), observation (collecting multimodal data, capturing the data, tagging the data), opinion (providing at least one stimulus and prompt)) and/or certain methods of organizing human activity (placing the sensors on the student and in the classroom, predicting the behavioral performance of the student). If claim limitations, under their broadest reasonable interpretation, include a mental process and/or certain methods of organizing human activity, the limitations fall under the abstract ideas judicial exception and therefore recite ineligible subject matter. Accordingly, claim 31 recites an abstract idea.
The judicial exception is not integrated into a practical application because the claims do not recite additional elements that are significantly more than the judicial exception or meaningfully limit the practice of the judicial exception. The additional elements are transmitting the real-time multimodal data via a wireless connection to a multimodal data collection module; via a multimodal data fusion module; retrieving existing special needs data; via a multimodal translation module; via a multimodal data alignment module; applying a deep neural network to the input vector so as to produce the joint data representation vector and projecting into a multimodal space for subsequent analysis; using a machine learning module, wherein the step of predicting comprises: feeding the predicted behavioral performance back to the machine learning module via a feedback module. The additional elements are insignificant extra-solution activity and instructions for applying the judicial exception with a generic computing device as, under their broadest reasonable interpretation, the additional elements of transmitting and retrieving data via a wireless connection amount to transmitting or receiving data over a network, which is well-understood, routine, and conventional (WURC) activity and insignificant extra-solution activity, per MPEP 2106.05(d). Further, the limitations of various modules, applying a deep neural network, and using a machine learning module are recited at a high level of generality amounting to computer code/algorithms for performing the judicial exceptions with the computing device. Under their broadest reasonable interpretation, the additional elements are generic components and instructions of a computing device used to apply the abstract idea. Further, paragraph 0033 of the specification states that the one or more processors are at least one computing device with internet access, such as a PC or tablet. With regard to the feedback step, the Federal Circuit held in Recentive Analytics, Inc. v.
Fox Corp., Fox Broadcasting Company, LLC, Fox Sports Productions, LLC, Case No. 23-2437, (Fed. Cir. 2025) that training generic machine learning models is insufficient to amount to a practical application or significantly more. As such, these additional elements are interpreted as merely instructions to apply the judicial exception. Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limitations on practicing the abstract idea. Therefore, the claim is directed to an abstract idea.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because, as discussed above, the additional elements are insignificant extra solution activity and instructions for performing the process on generic computing components/device and programs used to apply the judicial exception and therefore fall under the “apply it” limitation of the judicial exception and do not amount to significantly more per MPEP 2106.05(f). Further, the limitations, taken in combination, add nothing that is not already present when looking at the elements taken individually. As such, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because, under their broadest reasonable interpretation, the additional elements do not meaningfully limit the practice of the abstract idea and do not amount to significantly more than the judicial exceptions. Therefore, claim 31 is not directed to eligible subject matter as it is an abstract idea without significantly more.
Claims 32-38 are dependent from claim 31 and include all the limitations of the independent claim. Therefore, the dependent claims recite the same abstract idea. The limitations of the dependent claims fail to amount to significantly more than the judicial exception. For example:
The limitations of claim 37 recite further instructions for applying the judicial exceptions with a generic computing device such as the use of a deep neural network including processing the input vector through hidden layers, utilizing a penultimate layer, and applying an output activation function which are recited at a high level of generality amounting to a mere computer algorithm for performing the abstract ideas. The limitations fail to provide any teaching that integrates the judicial exceptions into a practical application or amounts to significantly more than the judicial exceptions. For this reason, the analysis performed on the independent claims is also applicable to these claims.
The limitations of claims 32, 33, 34, 35, 36, and 38 recite further abstract ideas including creating a multi-modality and multi-temporal sensing dataset (evaluation MP), predicting missing data (evaluation MP), synchronizing measurements and producing a single and coherent dataset (evaluation MP), selecting data (judgement MP), selecting the student based on learning disabilities (judgement MP; CMOHA), continuing, pausing, and repeating the step of providing stimulus and prompt (judgement MP; CMOHA), and utilizing the vector to perform the behavioral prediction (evaluation MP). As the limitations are further abstract ideas, the limitations cannot meaningfully limit or amount to significantly more than the abstract ideas of the independent claims. The additional elements of the dependent claims are further insignificant extra-solution activities including defining the data comprising a dataset. The limitations fail to provide any teaching that integrates the judicial exceptions into a practical application or amounts to significantly more than the judicial exceptions. For this reason, the analysis performed on the independent claims is also applicable to these claims.
Accordingly, claims 32-38 recite abstract ideas without significantly more and are not drawn to eligible subject matter.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 25 and 27-30 are rejected under 35 U.S.C. 103 as being unpatentable over Sait et al. (US PGPub 20250148931), hereinafter referred to as Sait, in view of Oddo et al. (US PGPub 20210386291), hereinafter referred to as Oddo, further in view of Sahin (US PGPub 20150099946), and further in view of Mcleod et al. (US PGPub 20170182362), hereinafter referred to as Mcleod.
With regard to claim 25, Sait teaches a system for predicting behavioral performance of a special-need student (Paragraphs 0019-0020, 0069 teach a system for personalizing education for students including special needs students including predicting student behavior) comprising:
at least one processor (Paragraph 0032; “CPU”); and
a cloud server coupled to said at least one processor having a real-time processing capability (Paragraphs 0032, 0063, 0070, and 0086 teach the system includes user devices and servers wherein the servers may be cloud-based which process data and perform modifications in real-time), said cloud server having three interconnected building blocks implemented as programmable instructions (Paragraphs 0032, 0086; “modules of the system”), building blocks comprising:
a first building block configured to receive and synchronize raw multimodal data from a plurality of heterogenous sensing devices in real-time (Paragraphs 0030, 0043, 0045, 0051-0052, 0056 teach the system acquires and receives user data from a variety of sources including a plurality of IoT devices and sensors which include various different (heterogenous) devices and sensors and harmonizes and normalizes (synchronizes) the data from the various sources), said first building block being a multimodal data collection module (Paragraphs 0051-0052, 0086 teach the system includes receiving IoT data from a plurality of sources using a modular data integration framework);
a second building block configured to pre-process the collected multimodal data and produce a joint data representation vector over a defined time window (Paragraphs 0057, 0065, 0067 teach the system can perform data amalgamation to aggregate and fuse the received data into foundational vectors and neural representations (joint data representation vectors) in real-time and over periods of time), said second building block being a multimodal data fusion module (Paragraphs 0057, 0086 teach the system includes the AI model performing a process of data amalgamation using one or more fusion algorithms);
a third building block configured to predict the behavioral performance of the special-need student using an optimized machine learning module (Examiner notes that the term “optimized” is interpreted as a trained ML module; Paragraphs 0028, 0032, 0040, 0069, 0071 teach the system includes a trained composite AI model that can be used to make predictions and identify patterns in student behavior including a user’s mental and physical wellbeing), the machine learning module configured to train and cross-validate the test (Paragraphs 0038-0040, 0045, 0069 teach the model may be trained, validated, and tested in order to develop the model to predict user behavior) and predict the behavioral performance of the special-need student using the joint representation data vector (Paragraphs 0032, 0038, 0057-0059, 0063 teach the model can be used to make predictions and identify patterns in student behavior including a user’s mental and physical wellbeing based in part on the neural representations and foundational vectors created by the analysis of the AI model), said third building block providing personalized insights into learning (Paragraphs 0037, 0058-0059 teach the system may use the AI model to provide personalized insights and recommendations and generate a personalized profile), wherein the plurality of sensing devices comprises:
an IoT sensor box (Paragraph 0051 teaches the system can include IoT devices including classroom IoT devices);
a plurality of environmental sensors in the IoT sensor box (Paragraph 0051 teaches the system includes sensors including sensors for capturing environmental conditions of a classroom using classroom IoT devices);
a plurality of wearable physiological sensors (Paragraph 0051 teaches the system includes sensors including wearables such as a Fitbit or Apple Watch);
a plurality of motion sensors that capture movement data (Paragraphs 0043, 0063, 0076 teach the system can include a plurality of cameras/motion sensors that can capture movement or body language of the user), wherein the system is configured for Applied Behavioral Analysis (ABA) therapy sensors (Examiner notes that “ABA therapy sensors” is not defined and is not a term of art, so the limitation is interpreted as the environmental, wearable, and motion sensors; Paragraphs 0043, 0051, 0063 teach the system includes a plurality of sensors), wherein the machine learning module has individualized multimodal predictive modeling that combines personalized categorical special-needs data with the multimodal continuous sensor data in order to output behavioral classification targets with real-time feedback optimization (Paragraphs 0051, 0071, 0077-0078, 0081 teach the system provides personalized insights and analysis for the user based on the gathered data including the “multimodal” sensor data including student, classroom, and environment data (categorical special-needs data) in order to create a personalized education and tasks (behavioral classification targets) including optimizing the model based on real-time user feedback).
Sait may not explicitly teach said the plurality of environmental sensors measuring carbon dioxide concentration, relative humidity, indoor temperature and light intensity at one Hz sampling rate; the physiological sensors that measure blood volume pulse at 64Hz, galvanic skin response at 4 Hz, skin temperature at 4 Hz, and inter-beat intervals derived from BVP signals; capture three-dimensional hand movement data at 32 Hz with a total acceleration calculation, wherein the system is configured to incorporate artificial intelligence with assessment markers with automatic time-stamping.
However, Oddo teaches a system and method for cloud-based user physiology detection using a wearable device including CO2 sensors, blood pressure, temperature sensors, and humidity sensors wherein the sensors are used to determine physical and health data of the user which would include CO2 concentration, relative humidity, body/skin temperature, heart rate, pulse including heart beat irregularities (inter-beat intervals), and skin conductivity (galvanic skin response) (Abstract; Paragraphs 0029, 0042, 0047, 0052, 0056).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Sait to incorporate the teachings of Oddo by substituting the sensors of Oddo for the sensors of Sait, as, while the references are directed to different fields of endeavor, one of ordinary skill would have found it obvious to apply the teachings because substituting the sensors would yield the expected result of gathering further physical and health data on the user. One of ordinary skill in the art would modify Sait by substituting and/or including the CO2, temperature, and humidity sensors as sensors of the IoT/wearable devices and acquiring the corresponding health and physical data. Upon such modification, the method and system of Sait would include said the plurality of environmental sensors measuring carbon dioxide concentration, relative humidity, and indoor temperature; the physiological sensors that measure blood volume pulse, galvanic skin response, skin temperature, and inter-beat intervals derived from BVP signals. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate these teachings from Oddo with Sait’s system and method in order to gather further user data and provide a more comprehensive user profile.
Sait in view of Oddo may not explicitly teach measuring light intensity; capture three-dimensional hand movement data with a total acceleration calculation, wherein the system is configured to incorporate artificial intelligence with assessment markers with automatic time-stamping. However, Sahin teaches a system and method for evaluating an individual for ASD using a wearable data collection device including using a camera or light sensor to collect light monitoring data and using wearable sensors and/or cameras to capture user motion data including hand motion based on a motion analysis algorithm (total acceleration calculation) wherein the gathered data is timestamped (Paragraphs 0048, 0094, 0112, 0181, 0238).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Sait in view of Oddo to incorporate the teachings of Sahin by incorporating the teaching of measuring light intensity, capturing user motion including hand motion, and timestamping the user data of Sahin to the sensors and acquired user data of Sait, as the references and the claimed invention are directed to user monitoring systems using biological data to predict user behavior. One of ordinary skill in the art would modify Sait by coding the system to include gathering light intensity data and user motion data including hand motion using the accelerometers, gyroscopes, and related wearable sensors and timestamping the corresponding data for use by the machine learning/AI of Sait for real-time analysis. Upon such modification, the method and system of Sait in view of Oddo would include measuring light intensity; capture three-dimensional hand movement data with a total acceleration calculation, wherein the system is configured to incorporate artificial intelligence with assessment markers with automatic time-stamping. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate these teachings from Sahin with Sait in view of Oddo’s system and method in order to gather further user data and evaluate a user/subject’s wellbeing and condition.
Sait in view of Oddo and Sahin may not explicitly teach the various sampling rates of the sensors and data gathering including sampling environmental sensors at one Hz sampling rate; BVP at 64 Hz, galvanic skin response at 4 Hz, skin temperature at 4 Hz, and hand movement data at 32 Hz. Sampling rates of sensors and data gathering are well known in the art and are a design choice that would be obvious to one of ordinary skill in the art as evidenced by Mcleod. Mcleod teaches a wearable monitor for monitoring user movement and health data using a sensor suite which samples the sensors at a predefined sampling rate such as 200 Hz but can include sampling rates less than 50 Hz, above 10 Hz, and less than 100 Hz wherein a higher sampling rate impacts the accuracy and detail of the gathered data but at the cost of higher CPU consumption, greater memory usage, and increased data transmission rates (Paragraphs 0053, 0055, 0062, 0104).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Sait in view of Oddo and Sahin to incorporate the teachings of Mcleod by incorporating the teaching of selecting an appropriate sampling rate for the design requirements of Mcleod for the sensors of Sait, as the references and the claimed invention are directed to user monitoring systems using biological data. One of ordinary skill in the art would modify Sait by coding the system to use an appropriate sampling rate, which would include the claimed sampling rates below 50 Hz or 100 Hz or above 10 Hz, in order to balance the accuracy and detail of the data against CPU and data transmission usage. Upon such modification, the method and system of Sait in view of Oddo and Sahin would include sampling environmental sensors at one Hz sampling rate; BVP at 64 Hz, galvanic skin response at 4 Hz, skin temperature at 4 Hz, and hand movement data at 32 Hz. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate these teachings from Mcleod with Sait in view of Oddo and Sahin’s system and method as such a design choice would have been obvious to one of ordinary skill in the art as it is well known that different sampling rates impact the volume, detail, and accuracy of sensor data and thereby impact processor usage, transmission rates, and memory usage.
With regard to claim 27, Sait further teaches wherein the collected multimodal data are individualized categorical variables of the special-needs student that couples to special needs data, classroom environment data, physiological data, and motion data (Paragraphs 0021, 0043, 0051 teach the acquired data includes academic data including diagnostic test results and non-academic data including personal data, emotional data, physical data (including physical activity (motion) and health data), and environmental conditions/data).
With regard to claim 28, Sait further teaches wherein said at least one processor is at least one computing device with internet access, the at least one computing device being selected from the group consisting of a personal computer and a table (Paragraphs 0030-0031, 0033 teach the devices may include personal computers/PC and tablets and other networked devices capable of accessing the internet).
With regard to claim 29, Sait further teaches wherein the cloud server is configured to store and process multimodal data for model refinement and continuous learning through a feedback module (Paragraphs 0026-0027, 0041, 0052 teach the system stores the user data and continuously updates and refines the model and profiles including through user feedback).
With regard to claim 30, Sait further teaches wherein the special needs data comprises categorical variable selected from the group consisting of school identifiers, student identifiers, and learning task identifiers (Paragraphs 0023, 0026, 0045, 0048, 0050-0051 teach the gathered data that is part of the student’s personalized profile (special needs data) includes classroom/environment data (school identifiers), student data including physiological data, and student test/task performance data (task identifiers)).
Claim(s) 26 is/are rejected under 35 U.S.C. 103 as being unpatentable over Sait in view of Oddo, Sahin, and Mcleod, as applied to claim 25 above, and further in view of Coleman et al. (US PGPub 20150199010), hereinafter referred to as Coleman, and further in view of Varma et al. (US PGPub 20250200430), hereinafter referred to as Varma.
With regard to claim 26, Sait further teaches wherein the multimodal data fusion module further comprising: a multimodal data alignment module that synchronizes unimodal measurements (Paragraphs 0045, 0057 teach the system includes data amalgamation including harmonizing and normalizing (synchronizing) the data into a consistent format); a deep neural network joint representation module that projects unimodal representation into a unified multimodal space using hidden layer in which a penultimate layer maps an output (Paragraphs 0022, 0057, 0066-0067 teach the composite AI may include a DNN model for processing the input data including hidden layers, which would include a penultimate layer, on the normalized (unified) data), but Sait in view of Oddo, Sahin, and Mcleod may not explicitly teach a multimodal translation module that creates multi-modality and multi-temporal sensing datasets containing time instances of student sensor data to predict missing data values within a therapy session; and the multimodal data alignment module using a timestamp consistency algorithm. However, Coleman teaches a system and method for improving operation of one or more biofeedback computer systems including education monitoring (Paragraphs 0387-0388) wherein data gathered from a plurality of sensors can be synchronized (temporally aligned) including using algorithms to determine missed samples/missing data based in part on the timestamped data (Paragraphs 0082, 0139, 0190-0192, 0238).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Sait in view of Oddo, Sahin, and Mcleod to incorporate the teachings of Coleman by applying the technique of synchronizing user data using timestamps in order to create a synchronized (unified, multi-modality, and multi-temporal) dataset including determining missed samples during a session of Coleman to the acquired user data of Sait, as the references and the claimed invention are directed to educational monitoring systems. While Coleman uses a different sensor type (EEG), Coleman teaches the data may be from a plurality of sensors including wearable and external sensors collecting bio-signal and non-bio-signal data (Coleman Paragraph 0050). One of ordinary skill in the art would modify Sait by coding the system to timestamp the acquired and retrieved data and synchronize the data in order to improve data analysis by further synchronizing the data based on the timestamps and predicting missing data values. Upon such modification, the method and system of Sait in view of Oddo, Sahin, and Mcleod would include a multimodal translation module that creates multi-modality and multi-temporal sensing datasets containing time instances of student sensor data to predict missing data values within a therapy session; and the multimodal data alignment module using a timestamp consistency algorithm. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate these teachings from Coleman with Sait in view of Oddo, Sahin, and Mcleod’s system and method in order to improve data processing and improve the machine learning model by improving the inputted data and predicting missing values.
Sait in view of Oddo, Sahin, Mcleod, and Coleman may not explicitly teach through an activation function. However, Varma teaches a system and method for assisted learning including personalized learning using a DNN including activation functions to introduce non-linearities to simplify the data and generate a feature map (Paragraphs 0088, 0147).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Sait in view of Oddo, Sahin, Mcleod, and Coleman to incorporate the teachings of Varma by applying the technique of using activation functions to one or more layers of the DNN including a penultimate layer to map the data of Varma to the DNN model of Sait, as the references and the claimed invention are directed to learning management systems using user data to predict user behavior. One of ordinary skill in the art would modify Sait by coding the DNN model to include activation functions to introduce non-linearities to simplify the data. Upon such modification, the method and system of Sait in view of Oddo, Sahin, Mcleod, and Coleman would include through an activation function. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate these teachings from Varma with Sait in view of Oddo, Sahin, Mcleod, and Coleman’s system and method in order to simplify the data and generate a feature map as activation functions are well known features of DNN models.
Claim(s) 31-36 and 38 is/are rejected under 35 U.S.C. 103 as being unpatentable over Sait in view of Coleman and Sahin.
With regard to claim 31, Sait teaches a method for predicting behavioral performance of a special-need student using artificial intelligence (Paragraphs 0019-0020, 0069 teach a method for personalizing education for students including special needs students including predicting student behavior), the method comprising:
collecting real-time multimodal data (Paragraphs 0030, 0043, 0051, 0056 teach the system acquires and receives user data from a variety of sources including a plurality of IoT devices and sensors), the step of collecting real-time multimodal data comprising:
placing a plurality of heterogenous sensing devices onto the special-need student and into a classroom environment of the special-need student (Paragraph 0051 teaches the IoT devices can include wearables on the user/student and devices integrated or incorporated as part of the classroom);
capturing the classroom environmental data and physiological data and motion data (Paragraphs 0021, 0043, 0051 teach the acquired data includes academic data including diagnostic test results and non-academic data including personal data, emotional data, physical data (including physical activity (motion) and health data), and environmental conditions/data); and
transmitting the real-time multimodal data via a wireless connection to a multimodal data collection module (Paragraphs 0030, 0053 teach the system may transmit the data between various components such as transmitting the data from the sensors to the server over a communications network (wireless));
pre-processing the collected real-time multimodal data via a multimodal data fusion module and producing a joint data representation vector over a defined time window (Paragraphs 0057, 0065, 0067 teach the system can perform data amalgamation to aggregate and fuse the received data into foundational vectors and neural representations (joint data representation vectors) in real-time and over periods of time), wherein the step of pre-processing comprises:
retrieving existing special needs data (Paragraphs 0049, 0058, 0065, 0067 teach the system retrieves user data including past data and historical actions and performance);
translating the retrieved special needs data via a multimodal translation module (Paragraphs 0045, 0052, 0057 teach the data is normalized and harmonized (translated) into a consistent format);
formulating an embedded vector for the special needs data (Paragraph 0057 teaches the system can generate neural representations as foundational vectors wherein the data is processed using feature extraction thereby generating a numerical representation of the data (an embedding vector)); and
applying a deep neural network (DNN) to the input vector so as to produce the joint data representation vector and projecting into a multimodal space for subsequent analysis (Paragraphs 0022, 0057, 0066-0067 teach the composite AI may include a DNN model for processing the input data and generating the vector);
predicting the behavioral performance of the special-need student using a machine learning module (Paragraphs 0028, 0032, 0040, 0069, 0071 teach the system includes a trained composite AI model that can be used to make predictions and identify patterns in student behavior including a user’s mental and physical wellbeing), wherein the step of predicting comprises:
feeding the predicted behavioral performance back to the machine learning module via a feedback module (Paragraphs 0027, 0037, 0041, 0047, 0070 teach the system can improve and refine the model by using a feedback loop including the user feedback and model performance including the predictions).
Sait may not explicitly teach aligning the multimodal data to ensure timestamp consistency each retrieved special needs data via a multimodal data alignment module; providing at least one stimulus and prompt using an assessment marker via a computing device and automatically tagging a timestamp to each data inputted; and formulating an input vector by combining the embedded vector for the special needs data and the translated retrieved data. However, Coleman teaches a system and method for improving operation of one or more biofeedback computer systems including education monitoring (Paragraphs 0387-0388) wherein data gathered from a plurality of sensors can be synchronized (temporally aligned) using timestamps of the collected data (Paragraphs 0139, 0190-0192, 0238).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Sait to incorporate the teachings of Coleman by applying the technique of synchronizing user data using timestamps in order to generate a synchronized (unified) dataset of Coleman to the acquired user data of Sait, as both references and the claimed invention are directed to educational monitoring systems. While Coleman uses a different sensor type (EEG), Coleman teaches the data may be from a plurality of sensors including wearable and external sensors collecting bio-signal and non-bio-signal data (Coleman Paragraph 0050). One of ordinary skill in the art would modify Sait by coding the system to timestamp the acquired and retrieved data and synchronize the data in order to improve data analysis wherein Sait would then use the translated and aligned dataset with the composite AI model to generate the neural representations and vectors (formulating an input vector). Upon such modification, the method and system of Sait would include aligning the multimodal data to ensure timestamp consistency each retrieved special needs data via a multimodal data alignment module; and formulating an input vector by combining the embedded vector for the special needs data and the translated retrieved data. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate these teachings from Coleman with Sait’s system and method in order to improve data processing and improve the machine learning model by improving the inputted data.
Coleman further teaches prompting and providing stimuli to a user in order to gather further user data (Paragraphs 0167, 0169, 0245). Sait in view of Coleman may not explicitly teach providing at least one stimulus and prompt using an assessment marker via a computing device and automatically tagging a timestamp to each data inputted. However, Sahin teaches a system and method for evaluating an individual for ASD using a wearable data collection device wherein a user partakes in a session with a caregiver wherein prompts and stimuli are provided to the subject and the session data is captured including timestamping (Paragraphs 0046, 0053, 0055, 0094, 0116).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Sait in view of Coleman to incorporate the teachings of Sahin by incorporating the teaching of performing evaluation sessions including stimuli and prompts and timestamping the user data by a caregiver/human expert of Sahin to the acquired user data of Sait, as both references and the claimed invention are directed to user monitoring systems using biological data to predict user behavior. One of ordinary skill in the art would modify Sait by coding the system to include providing teachers/instructors the ability to perform evaluation sessions including prompts and stimuli to capture user responses and data like a question and answer session and timestamp the corresponding user data. Upon such modification, the method and system of Sait in view of Coleman would include providing at least one stimulus and prompt using an assessment marker via a computing device and automatically tagging a timestamp to each data inputted. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate these teachings from Sahin with Sait in view of Coleman’s system and method in order to gather further user data and evaluate a user/subject’s wellbeing and condition.
With regard to claim 32, Sait further teaches wherein the step of translating comprises: creating a multi-modality and multi-temporal sensing dataset (Paragraphs 0051, 0065 teach the system gathers data from multiple sources and sensors (multi-modality) and over time (multi-temporal)), but may not explicitly teach containing time stances of the special needs data collected in a session; and predicting missing sensor data in the session. However, Coleman teaches a system and method for improving operation of one or more biofeedback computer systems including education monitoring (Paragraphs 0387-0388) wherein data gathered from a plurality of sensors is timestamped and can be synchronized including using algorithms to determine missed samples/missing data (Paragraphs 0139, 0190-0192, 0238).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Sait to incorporate the teachings of Coleman by applying the technique of synchronizing user data using timestamps in order to generate a synchronized (unified) dataset including determining missed samples of Coleman to the acquired user data of Sait, as both references and the claimed invention are directed to educational monitoring systems. While Coleman uses a different sensor type (EEG), Coleman teaches the data may be from a plurality of sensors including wearable and external sensors collecting bio-signal and non-bio-signal data (Coleman Paragraph 0050). One of ordinary skill in the art would modify Sait by coding the system to timestamp the acquired and retrieved data and synchronize the data in order to improve data analysis including determining missing data. Upon such modification, the method and system of Sait would include containing time stances of the special needs data collected in a session; and predicting missing sensor data in the session. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate these teachings from Coleman with Sait’s system and method in order to improve data processing and improve the machine learning model by improving the inputted data.
With regard to claim 33, Sait further teaches wherein the step of aligning comprises: synchronizing unimodal measurements (Paragraphs 0045, 0052, 0057 teach the data is normalized and harmonized (translated) into a consistent format); and producing a single and coherent dataset (Paragraphs 0045, 0052, 0057 teach the data is normalized and harmonized (translated) into a consistent format to combine in a data lake (single dataset)).
With regard to claim 34, Sait further teaches wherein the step of collecting the real-time multimodal data comprises: selecting at least one special-needs data from a behavioral task (Paragraphs 0066, 0074; personalized tasks and academic performance (behavioral tasks)), the behavioral task being selected from the group consisting of academic task, learning task, (Paragraphs 0023, 0043, 0074 teach the user data can include academic data including academic performance), sensory-motor skills (Paragraphs 0043, 0076; “facial expressions, body language”), and socio-emotional skills (Paragraph 0028, 0043; “emotional data”).
With regard to claim 35, Sait further teaches the student may be special needs (Paragraph 0020), but Sait in view of Coleman may not explicitly teach wherein the step of collecting the real-time multimodal data comprises: selecting a student being diagnosed with learning disabilities, the learning disabilities being autism spectrum disorder or intellectual disability. However, Sahin teaches a system and method for evaluation of an individual for ASD including providing training/education to the user (Abstract; Paragraphs 0014, 0064).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Sait in view of Coleman to incorporate the teachings of Sahin by including students with ASD as taught by Sahin as the user of Sait, as the references and the claimed invention are directed to systems for predicting user behavior based on sensor data. ASD is a well-known disability for learners/students with special needs and one of ordinary skill in the art would have found it obvious to provide personalized education and evaluation of a student with ASD or other intellectual disabilities in order to provide tailored learning for each student’s unique challenges and strengths (Sait Paragraph 0020). Upon such modification, the method and system of Sait in view of Coleman would include wherein the step of collecting the real-time multimodal data comprises: selecting a student being diagnosed with learning disabilities, the learning disabilities being autism spectrum disorder or intellectual disability. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate these teachings from Sahin with Sait in view of Coleman’s system and method in order to provide personalized learning/education and evaluation to students in need including those with ASD.
With regard to claim 36, Sait in view of Coleman may not explicitly teach wherein the step of providing at least one stimulus and prompt comprises: continuing or pausing the step of providing the at least one stimulus or prompt according to a condition of the special-need student based on the data captured by the plurality of sensing devices; and repeating the step of providing the at least one stimulus and prompt until the special-need student provides a correct response or until the session ends. However, Sahin further teaches pausing/suspending an ongoing evaluation session to allow an individual to take a break and repeating a phase/process to completion or the session ends (Paragraphs 0058, 0069, 0093, 0096).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Sait in view of Coleman to incorporate the teachings of Sahin by incorporating the teaching of performing evaluation sessions including stimuli and prompts wherein the sessions can be paused and repeated of Sahin to the acquired user data of Sait, as both references and the claimed invention are directed to user monitoring systems using biological data to predict user behavior. One of ordinary skill in the art would modify Sait by coding the system to include providing teachers/instructors the ability to perform evaluation sessions including prompts and stimuli to capture user responses and data like a question and answer session and timestamp the corresponding user data, including pausing the session based on the student’s data gathered by the sensors to allow the student to take a break, resuming the session when the student is able to continue, and repeating the process until a session ends/is completed. Upon such modification, the method and system of Sait in view of Coleman would include wherein the step of providing at least one stimulus and prompt comprises: continuing or pausing the step of providing the at least one stimulus or prompt according to a condition of the special-need student based on the data captured by the plurality of sensing devices; and repeating the step of providing the at least one stimulus and prompt until the session ends. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate these teachings from Sahin with Sait in view of Coleman’s system and method in order to gather further user data and evaluate a user/subject’s wellbeing and condition.
With regard to claim 38, Sait further teaches wherein the step of predicting the behavioral performance comprises: utilizing the joint data representation vector to perform the behavioral prediction of the performance of the special-need student (Paragraphs 0057, 0069, 0071 teach the vectors are used to establish the dynamic profiling system which is used to predict the user behavior and performance).
Claim(s) 37 is/are rejected under 35 U.S.C. 103 as being unpatentable over Sait in view of Coleman and Sahin, as applied to claim 31 above, and further in view of Varma.
With regard to claim 37, Sait further teaches wherein the step of applying the deep neural network comprises: processing the input vector through hidden layers of the deep neural network (Paragraph 0066 teaches the system processes the input data through hidden layers); utilizing a penultimate layer of the deep neural network (Paragraph 0066 teaches the system processes the input data through hidden layers which would include a penultimate layer); but Sait in view of Coleman and Sahin may not explicitly teach applying an output activation function to the penultimate layer in order to map the joint data representation vector in order to output the joint data representation vector. However, Varma teaches a system and method for assisted learning including personalized learning using a DNN including activation functions to introduce non-linearities to simplify the data and generate a feature map (Paragraphs 0088, 0147).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Sait in view of Coleman and Sahin to incorporate the teachings of Varma by applying Varma's technique of using activation functions on one or more layers of the DNN, including a penultimate layer, to map the data, to the DNN model of Sait, as both references and the claimed invention are directed to learning management systems using user data to predict user behavior. One of ordinary skill in the art would modify Sait by coding the DNN model to include activation functions to introduce non-linearities to simplify the data. Upon such modification, the method and system of Sait in view of Coleman and Sahin would include applying an output activation function to the penultimate layer in order to map the joint data representation vector in order to output the joint data representation vector. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate these teachings from Varma with Sait in view of Coleman and Sahin's system and method in order to simplify the data and generate a feature map, as activation functions are well-known features of DNN models.
Response to Arguments
Applicant's arguments filed October 13, 2025, with respect to the rejection(s) of claim(s) 1-22 under 35 U.S.C. 101, have been fully considered but they are not persuasive. Examiner notes that Applicant's arguments are not separated into sections, but it appears the remarks in the last paragraph of page 12 are directed to the 35 U.S.C. 101 rejection(s). Applicant contends that the claimed invention is eligible subject matter as it is directed to significantly more than the judicial exceptions and to a practical application by implementing the claimed invention with "physical components". This argument is not persuasive, as "physical components" do not inherently render claims eligible subject matter, especially mere "processors" and "cloud servers", which are generic computing components/devices for implementing the judicial exceptions per MPEP 2106.05(f). Further, the "plurality of sensors" are generic and not specifically defined or detailed in the claims or specification. Therefore, the sensors and IoT sensor box are further generic computing components for applying the exceptions, and the claims amount to an attempt to merely generally link the claimed invention to the IoT technology/technical field. The lack of specificity and detail in the specification with regard to the sensors is further evidence of the generic nature of the sensors and of the absence of any specific implementation amounting to a particular machine, specific configuration, or specialized machine. Therefore, as discussed above, the claims stand rejected under 35 U.S.C. 101.
Applicant's arguments, see Remarks, filed October 13, 2025, with respect to the rejection(s) of claim(s) 1-22 under 35 U.S.C. 102 and 103 have been fully considered and are persuasive by virtue of Applicant's amendments to the claims. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made under 35 U.S.C. 103 in view of the combination of prior art discussed above. Examiner notes that Applicant's arguments are not separated into sections, but it appears the remarks on pages 13-14 are directed to the prior art rejections. Examiner finds these arguments are largely a summary of the amendment and the claimed invention. While parts are not entirely commensurate with the claim language (for instance, the remarks assert the invention provides "fault tolerance" and an "edge-based resilience capability", which are not commensurate with the recited claims), there is nothing substantive for the examiner to rebut, as Applicant has failed to point out the differences from the prior art beyond conclusory statements.
Conclusion
Accordingly, claims 25-38 are rejected and claim 39 is withdrawn.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CORRELL T FRENCH whose telephone number is (571)272-8162. The examiner can normally be reached M-Th 7:30am-5pm; Alt Fri 7:30am-4pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kang Hu can be reached at (571)270-1344. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CORRELL T FRENCH/Examiner, Art Unit 3715
/KANG HU/Supervisory Patent Examiner, Art Unit 3715