Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
1. This Office Action is in response to the Amendment filed on February 23, 2026, which paper has been placed of record in the file.
2. Claims 1-20 are pending in this application.
Terminal Disclaimer
3. The terminal disclaimer filed on February 23, 2026 disclaiming the terminal portion of any patent granted on this application which would extend beyond the expiration date of Patent No. 12,124,985 has been reviewed and approved. The terminal disclaimer has been recorded.
Claim Rejections - 35 USC § 101
4. 35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
5. Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, natural phenomenon, or abstract idea) without significantly more.
Regarding independent claim 1, which is analyzed as follows:
Step 1: This part of the eligibility analysis evaluates whether the claim falls within any statutory category. See MPEP 2106.03. The claim recites an apparatus for the generation of productivity data. Thus, the claim is to a machine, which is one of the statutory categories of invention. (Step 1: YES).
Step 2A, Prong One: This part of the eligibility analysis evaluates whether the claim recites a judicial exception. As explained in MPEP 2106.04, subsection II, a claim “recites” a judicial exception when the judicial exception is “set forth” or “described” in the claim.
The claim recites an apparatus for the generation of productivity data. The claim recites the steps of: generating a user profile from a user…; identifying an attention parameter as a function of the industrial data and a plurality of industrial stimuli…; generating an engagement element as a function of the attention parameter…; determining productivity data as a function of the generated engagement element; and generating a notification as a function of the productivity data and the time period. As drafted, this is a process that, under its broadest reasonable interpretation when read in light of the Specification, covers performance of the limitations in the mind, i.e., the steps can be practically performed by a human in the mind or with pen and paper, but for the recitation of generic computer components. That is, other than reciting “a computer/processor/automatically”, nothing in the claim elements precludes the steps from practically being performed in the mind. The mere nominal recitation of generic computing devices does not take the claim limitations out of the Mental Processes grouping of abstract ideas. Thus, if a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind, then it falls within the “Mental Processes” grouping of abstract ideas (concepts performed in the human mind, including an observation, evaluation, judgment, or opinion). See MPEP 2106.04(a)(2), subsection III.
Moreover, the claim recites “a function of the industrial data”, “a function of the attention parameter”, “a function of the engagement element”, and “a function of the productivity data”, which are directed to mathematical relationships and fall within the “Mathematical Concepts” grouping of abstract ideas (mathematical relationships, mathematical formulas or equations, mathematical calculations). See MPEP 2106.04(a)(2), subsection III.
Therefore, the claim recites an abstract idea. (Step 2A, Prong One: YES).
Step 2A, Prong Two: This part of the eligibility analysis evaluates whether the claim as a whole integrates the recited judicial exception into a practical application of the exception or whether the claim is “directed to” the judicial exception. This evaluation is performed by (1) identifying whether there are any additional elements recited in the claim beyond the judicial exception, and (2) evaluating those additional elements individually and in combination to determine whether the claim as a whole integrates the exception into a practical application. See MPEP 2106.04(d).
The claim recites the additional elements of “a processor”, “a memory”, “a plurality of sensors comprises at least one biometric sensor”, “convert the physiological data received by the at least one biometric sensor into a machine-readable output signal”, “display the notification using a display device”, and “wherein the engagement element is generated using at least an automatic speech recognition model to determine a content datum.” The claim also recites that the steps of “generating a user profile from a user…; identifying an attention parameter as a function of the industrial data and a plurality of industrial stimuli…; generating an engagement element as a function of the attention parameter…; determining productivity data as a function of the generated engagement element; generating a notification as a function of the productivity data and the time period; and display the notification using a display device” are performed by a processor.
The additional elements “a plurality of sensors comprises at least one biometric sensor”, “convert the physiological data received by the at least one biometric sensor into a machine-readable output signal”, and “display the notification using a display device” are mere data gathering and outputting recited at a high level of generality, and thus are insignificant extra-solution activity. See MPEP 2106.05(g) (“whether the limitation is significant”). In addition, all uses of the recited judicial exceptions require such data gathering and outputting, and, as such, these limitations do not impose any meaningful limits on the claim. These limitations amount to necessary data gathering, transmitting, and outputting. See MPEP 2106.05. Moreover, these additional elements do not provide any improvement to the technology, improvement to the functioning of the computer, or improvement to the sensors or the display device; they are merely used as general means for collecting and outputting data. This is similar to other concepts that have been identified by the courts: gathering and analyzing information using conventional techniques and displaying the result, TLI Communications, 823 F.3d at 612-13, 118 USPQ2d at 1747-48; collecting information, analyzing it, and displaying certain results of the collection and analysis, Electric Power Group, LLC v. Alstom S.A., 830 F.3d 1350, 1354, 119 USPQ2d 1739, 1742 (Fed. Cir. 2016).
The additional element “wherein the engagement element is generated using at least an automatic speech recognition model to determine a content datum” provides nothing more than mere instructions to implement an abstract idea on a generic computer. See MPEP 2106.05(f). MPEP 2106.05(f) provides the following considerations for determining whether a claim simply recites a judicial exception with the words “apply it” (or an equivalent), such as mere instructions to implement an abstract idea on a computer: (1) whether the claim recites only the idea of a solution or outcome, i.e., the claim fails to recite details of how a solution to a problem is accomplished; (2) whether the claim invokes computers or other machinery merely as a tool to perform an existing process; and (3) the particularity or generality of the application of the judicial exception.
The additional element “wherein the engagement element is generated using at least an automatic speech recognition model to determine a content datum” is used to generally apply the abstract idea without placing any limits on how the machine learning model functions. Rather, this limitation only recites the outcome of “generating the engagement element to determine a content datum” and does not include any details about how the solution is accomplished. See MPEP 2106.05(f).
The additional element “wherein the engagement element is generated using at least an automatic speech recognition model to determine a content datum” also merely indicates a field of use or technological environment in which the judicial exception is performed. Although this additional element limits the identified judicial exception “generating the engagement element to determine a content datum”, this type of limitation merely confines the use of the abstract idea to a particular technological environment (machine learning) and thus fails to add an inventive concept to the claims. See MPEP 2106.05(h).
Further, the steps of “generating a user profile from a user…; identifying an attention parameter as a function of the industrial data and a plurality of industrial stimuli…; generating an engagement element as a function of the attention parameter…; determining productivity data as a function of the generated engagement element; generating a notification as a function of the productivity data and the time period; convert the physiological data received by the at least one biometric sensor into a machine-readable output signal; and display the notification using a display device” are recited as being performed by the processor. The processor is recited at a high level of generality. In the limitations “convert the physiological data received by the at least one biometric sensor into a machine-readable output signal and display the notification using a display device”, the processor is used as a tool to perform the function of gathering and outputting data. In the limitations “generating a user profile from a user…; identifying an attention parameter as a function of the industrial data and a plurality of industrial stimuli…; generating an engagement element as a function of the attention parameter…; determining productivity data as a function of the generated engagement element; generating a notification as a function of the productivity data and the time period”, the processor is used to perform an abstract idea, as discussed above in Step 2A, Prong One, such that it amounts to no more than mere instructions to apply the exception using a generic computer. See MPEP 2106.05(f). The additional elements recite generic computer components (the processor, the memory, and software programming instructions) that are recited at a high level of generality and merely perform, conduct, carry out, implement, and/or narrow the abstract idea itself.
Accordingly, the additional elements, evaluated individually and in combination, do not integrate the abstract idea into a practical application because they comprise or include limitations that are not indicative of integration into a practical application, such as adding the words "apply it" (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea. See MPEP 2106.05(f).
These additional elements do not provide any improvements to the technology, improvements to the functioning of the computer, the processor, the memory, improvements to the plurality of sensors, biometric sensor, improvements to machine learning, or other technology. They do not recite a particular machine or manufacture that is integral to the claims, and do not transform or reduce a particular article to a different state or thing.
Even when viewed in combination, these additional elements do not integrate the recited judicial exception into a practical application (Step 2A, Prong Two: NO), and the claim is directed to the judicial exception (Step 2A: YES).
Step 2B: This part of the eligibility analysis evaluates whether the claim as a whole amounts to significantly more than the recited exception, i.e., whether any additional element, or combination of additional elements, adds an inventive concept to the claim. See MPEP 2106.05.
As explained with respect to Step 2A, Prong Two, the additional element of “wherein the engagement element is generated using at least an automatic speech recognition model to determine a content datum” is at best mere instructions to “apply” the abstract idea, which cannot provide an inventive concept. See MPEP 2106.05(f).
The additional elements “a plurality of sensors comprises at least one biometric sensor”, “convert the physiological data received by the at least one biometric sensor into a machine-readable output signal”, and “display the notification using a display device” were found to be insignificant extra-solution activity in Step 2A, Prong Two, because they were determined to be insignificant limitations amounting to necessary data gathering and outputting. However, a conclusion that an additional element is insignificant extra-solution activity in Step 2A, Prong Two should be re-evaluated in Step 2B. See MPEP 2106.05, subsection I.A. At Step 2B, the evaluation of the insignificant extra-solution activity consideration takes into account whether or not the extra-solution activity is well understood, routine, and conventional in the field. See MPEP 2106.05(g).
As discussed in Step 2A, Prong Two above, the additional elements of “a plurality of sensors comprises at least one biometric sensor”, “convert the physiological data received by the at least one biometric sensor into a machine-readable output signal”, and “display the notification using a display device” are recited at a high level of generality. These elements amount to gathering and displaying data and are well-understood, routine, conventional activity. See MPEP 2106.05(d), subsection II. The courts have recognized the following computer functions as well-understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity: receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network).
As discussed in Step 2A, Prong Two above, the recitation of the processor to perform the limitations “generating a user profile from a user…; identifying an attention parameter as a function of the industrial data and a plurality of industrial stimuli…; generating an engagement element as a function of the attention parameter…; determining productivity data as a function of the generated engagement element; generating a notification as a function of the productivity data and the time period; and display the notification using a display device” amounts to no more than mere instructions to apply the exception using a generic computer component.
Even when considered in combination, these additional elements represent mere instructions to implement an abstract idea or other exception on a computer and insignificant extra-solution activity, which do not provide an inventive concept. Therefore, the claim is not patent eligible. (Step 2B: NO).
Regarding independent claim 11, Alice Corp. establishes that the same analysis should be used for all categories of claims. Therefore, independent claim 11, directed to a method, is also rejected as ineligible subject matter under 35 U.S.C. 101 for substantially the same reasons as independent apparatus claim 1.
Regarding dependent claims 2-10 and 12-20, the dependent claims do not impart patent eligibility to the abstract idea of the independent claims. The dependent claims rather further narrow the abstract idea, and the narrower scope does not change the outcome of the two-part Mayo test. Narrowing the scope of the claims is not enough to impart eligibility, as each claim is still interpreted as an abstract idea, merely a narrower one.
Regarding dependent claims 2-3 and 12-13, the claims recite the additional elements wherein the plurality of sensors comprises a wearable device and a camera, which are mere data gathering and outputting recited at a high level of generality, and thus are insignificant extra-solution activity. See MPEP 2106.05(g) (see claim 1 above). Thus, the dependent claims do not add any additional element or subject matter that provides a technological improvement (i.e., an integration into a practical application under Step 2A, Prong Two) or results in the claims being directed to patent-eligible subject matter, nor do they include an element or feature that is significantly more than the recited abstract idea (i.e., a technological inventive concept under Step 2B).
Regarding dependent claims 4 and 14, the claims recite the additional element wherein identifying the attention parameter comprises identifying the attention parameter using an automatic speech recognition model, which is used to generally apply the abstract idea without placing any limits on how the machine learning model functions. Rather, this limitation only recites the outcome of “identifying the attention parameter” and does not include any details about how the solution is accomplished. See MPEP 2106.05(f) (see claim 1 above). Thus, the dependent claims do not add any additional element or subject matter that provides a technological improvement (i.e., an integration into a practical application under Step 2A, Prong Two) or results in the claims being directed to patent-eligible subject matter, nor do they include an element or feature that is significantly more than the recited abstract idea (i.e., a technological inventive concept under Step 2B).
Regarding dependent claims 5-6 and 15-16, the claims simply refine the abstract idea by further reciting wherein the attention parameter comprises an eye parameter, and wherein the engagement element comprises a language parameter, which fall within the Mental Processes grouping of abstract ideas as described above for independent claim 1. Thus, the dependent claims do not add any additional element or subject matter that provides a technological improvement (i.e., an integration into a practical application under Step 2A, Prong Two) or results in the claims being directed to patent-eligible subject matter, nor do they include an element or feature that is significantly more than the recited abstract idea (i.e., a technological inventive concept under Step 2B).
Regarding dependent claims 7-8 and 17-18, the claims simply refine the abstract idea by further reciting wherein generating the engagement element comprises generating the engagement element using an engagement classifier, which falls within the Mental Processes grouping of abstract ideas as described above for independent claim 1. Moreover, the claims recite the additional elements wherein generating the engagement element using the engagement classifier comprises: training the engagement classifier using engagement training data…, and generating the engagement element as a function of the attention parameter using the engagement classifier, which are used to generally apply the abstract idea without placing any limits on how the engagement classifier functions. Rather, these limitations only recite the outcome of “generating the engagement element” and do not include any details about how the solution is accomplished. See MPEP 2106.05(f). These additional elements also merely indicate a field of use or technological environment in which the judicial exception is performed. Although these additional elements limit the identified judicial exception “generating the engagement element”, this type of limitation merely confines the use of the abstract idea to a particular technological environment (machine learning) and thus fails to add an inventive concept to the claims. See MPEP 2106.05(h). Thus, the dependent claims do not add any additional element or subject matter that provides a technological improvement (i.e., an integration into a practical application under Step 2A, Prong Two) or results in the claims being directed to patent-eligible subject matter, nor do they include an element or feature that is significantly more than the recited abstract idea (i.e., a technological inventive concept under Step 2B).
Regarding dependent claims 9-10 and 19-20, the claims simply refine the abstract idea by further reciting wherein the plurality of industrial stimuli comprises a personal interaction, and wherein the productivity data is reflected as a numerical score, which fall within the Mental Processes grouping of abstract ideas as described above for independent claim 1. Thus, the dependent claims do not add any additional element or subject matter that provides a technological improvement (i.e., an integration into a practical application under Step 2A, Prong Two) or results in the claims being directed to patent-eligible subject matter, nor do they include an element or feature that is significantly more than the recited abstract idea (i.e., a technological inventive concept under Step 2B).
Therefore, none of the dependent claims alone or as an ordered combination add limitations that qualify as significantly more than the abstract idea.
Accordingly, claims 1-20 are not drawn to eligible subject matter, as they are directed to an abstract idea without significantly more, and are rejected under 35 U.S.C. § 101 as being directed to non-statutory subject matter.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Claim Rejections - 35 USC § 103
6. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
7. Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Elhawary et al. (hereinafter Elhawary, US 2022/0313118) in view of Horseman et al. (hereinafter Horseman, US 2017/0162072), and further in view of Alsahlawi et al. (hereinafter Alsahlawi, US 2021/0158207).
Regarding claim 1, Elhawary discloses an apparatus for the generation of productivity data, wherein the apparatus comprises:
at least a processor (para [0376], Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read only memory or a random access memory or both. The essential elements of a computer are a processor for performing actions in accordance with instructions and one or more memory devices for storing instructions and data); and
a memory communicatively connected to the at least a processor, wherein the memory contains instructions configuring the at least a processor to:
generate a user profile from a user, wherein the user profile comprises industrial data identifying a time period, wherein the user profile is generated using a smart assessment (para [0298], the wearable devices 190 each have RFID readers to read the worker's badge or entry card and associate themselves to that worker. This may be done on a daily basis, such that each day a worker may pick up one of several available wearable devices 190 and associate that device with their own profile. Alternatively, the association may be manually created by a worker entering their name, employee ID or other unique identifying feature at a worker interface, so the device can associate the data to that specific worker; para [0155], The method then evaluates the signals further and calculates measurements of the worker wearing the device 190 for the time period during the first physical activity from the first signal segment for a time period following the initiation time (3020)); Specification, para [0018] defines “a smart assessment” as “a user profile 108 may be generated using a smart assessment. As used in this disclosure, a “smart assessment” is a set of questions that asks for user’s information as described in this disclosure. In some cases, questions within smart assessment may include selecting a selection from a plurality of selections as answers. In other cases, questions within smart assessment may include free user input as answers. Elhawary’s method allows a worker to enter his/her name, employee ID or other unique identifying feature at a worker interface (free user input as answers));
identify an attention parameter as a function of the industrial data and a plurality of industrial stimuli, wherein identifying the attention parameter comprises receiving physiological data from a plurality of sensors (para [0309], a first signal is received (9010) from a wearable device 4010 worn by the first worker and generated by dynamic activity of the wearable device over time. An initiation time for a first physical activity of a first category of physical activity performed by the first worker is then identified (9020) in the first signal, and a first signal segment is defined (9030) in the first signal, the first signal segment corresponding to the time period in which the first physical activity is performed); wherein the plurality of sensors comprises at least one biometric sensor configured to detect one or more of a heart rate datum, galvanic skin response, ocular movement, and electroencephalogram data (para [0220], the additional sensors, such as the wrist sensor 4040 may use the wearable device 4010 as a gateway for relaying information to a server, or as a centralized processing unit. Accordingly, the wrist sensor 4040 may detect information about the worker, such as pulse rate, temperature, and hydration. This information may be detected directly, or it may be derived, such as deriving dehydration by evaluating skin conductance or sweat detection. The wrist sensor 4040 may then send the data to the wearable device 4010 for analysis, and the wearable device may then provide recommendations, such as a recommendation to rest or drink water);
generate an engagement element as a function of the attention parameter and the time period (para [0310], Measurements of the first worker are calculated (9040) for a time period during the first physical activity. Such measurements are derived from and calculated based on data in the first signal segment);
determine productivity data as a function of the engagement element (para [0311], The measurements are then used to calculate (9050) an activity risk metric from a risk model based on the measurements of the wearer for the time period during the first physical activity, wherein the risk metric is indicative of a risk level of the execution of the physical activity by the first worker);
generate a notification as a function of the productivity data (para [0265], a worker may be provided with a daily target for a specific metric, which can then be shown on the screen of the device. When the worker achieves the goal, the device may provide a notification to the worker or management. For example, a target for a productivity metric, such as number of lifts, or a goal for a safety metric, such as the number of lifts performed with good biomechanical posture may be created within the systems described. Workers may be alerted when they achieve daily goals or are ranked well relative to coworkers. Information displayed to individual workers would be catered to their individual needs based on their biomechanics data); and
display the notification using a display device (para [0344], the device display may present a number of metrics, such as numbers of safe and risky postures, safety performance against goals, steps, calorie estimation, and competitive data, such as rank in a competition, data by teams, and the like. The data may be shared with other workers by email, website, a companion app, social media, or the like. In some embodiments, custom content may be automatically shared with workers based on worker data, such as personalized training videos, safety or productivity improvement tips, praise, reward notifications, and the like. Such sharing may be on the wearable device display, by email, by SMS, or by internet).
1/ Elhawary does not disclose the following limitation; however, Horseman discloses:
convert, using the at least a processor, the physiological data received by the at least one biometric sensor into a machine-readable output signal (para [0058], Measurements taken from the sensors are converted into electronic biometric data 200 for use by the training system 100. For example, in the arrangement of FIG. 2, measurements taken by the skin conductance sensor 202 are converted into electronic skin conductance data 200a, measurements taken by the blood glucose sensor 204 are converted into electronic blood glucose data 200b, measurements taken by the blood pressure sensor 206 are converted into electronic blood pressure data 200c, measurements taken by the facial recognition sensor 208 are converted into electronic facial recognition data 200d, measurements taken by the respiration sensor 210 are converted into electronic respiratory rate data 200e, measurements taken by the neural sensor 212 are converted into electronic neural data 200f (including, for example, data indicative of one or more brain signals such as alpha, beta, delta, gamma, etc.), and measurements taken by the heart rate sensor 214 are converted into electronic heart rate data 200g. Measurements taken by respective sensors 120 may be converted into electronic biometric data by the sensor itself, by the user computer 122).
Therefore, it would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to modify Elhawary's apparatus to incorporate the features taught by Horseman above, for the purpose of processing and analyzing data received from the biometric sensors. Since Elhawary discloses that the wrist sensor 4040 may detect information about the worker, such as pulse rate, temperature, and hydration, and that the wrist sensor 4040 may then send the data to the wearable device 4010 for analysis (see para [0220]), and Horseman discloses converting the physiological data received by the at least one biometric sensor into a machine-readable output signal, as described above, one of ordinary skill in the art would have recognized that the combination of Elhawary and Horseman would have yielded predictable results in processing and analyzing data received from the biometric sensors.
2/ Elhawary does not disclose the following limitation; however, Alsahlawi discloses:
wherein the engagement element is generated using at least an automatic speech recognition model to determine a content datum (para [0125], the SST system 200, in connection with voice command or speech recognition software and the microphone 32 located on the wearable device 10, may enable workers to send verbal descriptions and updates of their tasks as they progress. For example, if a worker is required to leave the assigned zone in order to get new supplies, or in order to get a tool that is required for the assigned task and located in a different zone, the worker may simply use a voice command to send an update that is logged by the SST system 200. In one embodiment, the worker can simply say, “Send verbal message: Need tool from warehouse.” The wearable device 10 may then transmit an audio recording to the SST system 200 or network 56, where the audio recording could be run through speech recognition software, converted into text, and logged in the system in connection with the assigned task, the date, the time, and the unique identifier associated with wearable device 10 in question. Several benefits of enabling workers to send verbal updates that get logged in the SST system 200 include: (1) providing a quick and easy way to allow the worker to provide updates without requiring a laptop or typing text into one or more electronic devices).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Elhawary to incorporate the features taught by Alsahlawi above, for the purpose of providing a quick and easy way to obtain the information without requiring typing text into one or more electronic devices. Since Elhawary discloses generating the engagement element (para [0091], The wearable devices 190 may further incorporate user input means by which users can control the wearable device 190. For example, the device may include modules for detecting and interpreting voice or gesture based commands) and Alsahlawi discloses generating the engagement element using at least an automatic speech recognition model to determine a content datum, as described above, one of ordinary skill in the art would have recognized that the combination of Elhawary and Alsahlawi would have yielded predictable results in providing an easy way to obtain data.
Regarding claim 2, Elhawary discloses the apparatus of claim 1, wherein the plurality of sensors comprises a wearable device (para [0083], Each of the workers 110, 140, 172 would typically be wearing at least one sensor device, and in some embodiments, two sensor devices, 190a, b for recording movement. Typically, where two sensors are provided, the sensors used may be a wrist sensor device 190a, ideally located on the wrist or forearm of the dominant hand, and a back sensor device 190b, ideally located approximately at the height of the L1 and L2 vertebrae, but other sensor device types may be implemented as well. The wrist sensor may be incorporated into a wrist device, such as a bracelet or a wristwatch, and the back device may be incorporated into a chest strap, weight belt or back brace, for example. Where only one sensor device 190 is provided, it is typically applied to a worker 110, 140, 172 on or near the worker's hip).
Regarding claim 3, Elhawary discloses the apparatus of claim 1, wherein the plurality of sensors comprises a camera (para [0376], Moreover, a computer can be embedded in another device, e.g., a mobile telephone, a personal digital assistant (PDA), a mobile audio or video player).
Regarding claim 4, Elhawary discloses the apparatus of claim 1, wherein identifying the attention parameter comprises identifying the attention parameter using an automatic speech recognition model (para [0091], The wearable devices 190 may further incorporate user input means by which users can control the wearable device 190. For example, the device may include modules for detecting and interpreting voice or gesture based commands).
Regarding claim 5, Elhawary discloses the apparatus of claim 1, wherein the attention parameter comprises an eye parameter (para [0222], As an example of PPE compliance, eye protection glasses can have a low power Bluetooth transmitter monitored by the wearable device 4010. If a worker forgets their eyewear and walks out of range of the transmitter contained therein, the wearable device 4010 can notify the worker).
Regarding claim 6, Elhawary discloses the apparatus of claim 1, wherein the engagement element comprises a language parameter (para [0280], a worker may tap the wearable device 4010, resulting in a spike of acceleration data from the accelerometer 210, in order to indicate that the posture has been assumed. In some embodiments, such confirmation may be a voice command, a gesture, a physical switch, or a proximity sensor).
Regarding claim 7, Elhawary discloses the apparatus of claim 1, wherein generating the engagement element comprises generating the engagement element using an engagement classifier (para [0199], Groups of workers with similar risk profiles and movement patterns can be addressed and trained by managers as a group. Such groups may be identified in the form of common movement patterns across several workers. For example, workers with more bends than twists will be grouped together to discuss techniques and strategies to mitigate those risky behaviors. Groups can be altered based on flexible time periods to adapt to the facility's needs.).
Regarding claim 8, Elhawary discloses the apparatus of claim 7, wherein generating the engagement element using the engagement classifier comprises:
training the engagement classifier using engagement training data, wherein the engagement training data contains a plurality of data entries containing the attention parameter as inputs correlated to the engagement element as outputs (para [0084], A system implementing such a wearable device may be trained using a machine learning predictive model trained by collecting data from sensors attached to a user's spine and comparing that data to data collected at the user's hip. After training such a predictive model, the single hip mounted wearable device 190 may be used to evaluate movement of a worker's spine); and
generating the engagement element as a function of the attention parameter using the engagement classifier (para [0120], Several required variables may be detected or confirmed by way of machine learning algorithms. Similarly, the accuracy of lift detection may be improved by way of machine learning algorithms. Such algorithms may further be utilized to confirm the identification of the activity detected, both in terms of improving the detection of true positives and eliminating false positives).
Regarding claim 9, Elhawary discloses the apparatus of claim 1, wherein the plurality of industrial stimuli comprises a personal interaction (para [0340], The productivity and safety data generated by the system and method described may be used to automate worker promotions and company reporting structure. Workers with higher productivity or who are more skillful or safer may be automatically promoted to tasks requiring higher skills or more responsibility).
Regarding claim 10, Elhawary discloses the apparatus of claim 1, wherein the productivity data is reflected as a numerical score (para [0316], a cumulative risk metric may be calculated (9055) for each worker based on multiple physical activities, the cumulative risk metric being indicative of a risk level from the activities over time, and may be used to modify the risk score. Potential metrics for the cumulative risk metric are discussed above. In such an embodiment, the risk score for each worker of the group of workers may be modified based on the cumulative risk score for the corresponding worker).
Claims 11-20 are written in method form and contain the same limitations found in claims 1-10 described above, and are therefore rejected under the same rationale.
Response to Arguments/Amendment
8. Applicant's arguments with respect to claims 1-20 have been fully considered but are not persuasive.
I. Claim Rejections - 35 USC § 101
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., law of nature, natural phenomenon, or abstract idea) without significantly more.
Step 2A, Prong One:
In response to the Applicant’s argument that the amended claims cannot be practically performed within the human mind, the Examiner respectfully disagrees and submits that the amended claim steps of generating a user profile from a user…; identifying an attention parameter as a function of the industrial data and a plurality of industrial stimuli…; generating an engagement element as a function of the attention parameter…; determining productivity data as a function of the generated engagement element; and generating a notification as a function of the productivity data and the time period, as drafted, constitute a process that, under its broadest reasonable interpretation when read in light of the Specification, covers performance of the limitations in the mind and can be practically performed by a human in their mind or with pen and paper, but for the recitation of generic computer components. That is, other than reciting “a computer/processor/automatically”, nothing in the claim elements precludes the steps from practically being performed in the mind. The mere nominal recitation of generic computing devices does not take the claim limitations out of the Mental Processes grouping of abstract ideas. Thus, if a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind, it falls within the “Mental Processes” grouping of abstract ideas (concepts performed in the human mind, including an observation, evaluation, judgment, or opinion). See MPEP 2106.04(a)(2), subsection III.
Moreover, the claim recites “a function of the industrial data”, “a function of the attention parameter”, “a function of the engagement element”, and “a function of the productivity data”, which are directed to mathematical relationships and therefore fall within the “Mathematical Concepts” grouping of abstract ideas (mathematical relationships, mathematical formulas or equations, mathematical calculations). See MPEP 2106.04(a)(2), subsection III.
Therefore, the claim recites an abstract idea. (Step 2A, Prong One: YES).
The claims recite the limitations “a processor”, “a memory”, “a plurality of sensors comprises at least one biometric sensor”, “convert the physiological data received by the at least one biometric sensor into a machine-readable output signal”, “display the notification using a display device”, and “wherein the engagement element is generated using at least an automatic speech recognition model to determine a content datum”, which are additional elements and are analyzed under Step 2A, Prong Two.
Step 2A, Prong Two:
The claim recites the additional elements of “a processor”, “a memory”, “a plurality of sensors comprises at least one biometric sensor”, “convert the physiological data received by the at least one biometric sensor into a machine-readable output signal”, “display the notification using a display device”, and “wherein the engagement element is generated using at least an automatic speech recognition model to determine a content datum.”
The additional elements “a plurality of sensors comprises at least one biometric sensor”, “convert the physiological data received by the at least one biometric sensor into a machine-readable output signal”, and “display the notification using a display device” are mere data gathering and outputting recited at a high level of generality, and thus are insignificant extra-solution activity. See MPEP 2106.05(g) (“whether the limitation is significant”). In addition, all uses of the recited judicial exceptions require such data gathering and outputting, and, as such, these limitations do not impose any meaningful limits on the claim. These limitations amount to necessary data gathering, transmitting and outputting. See MPEP 2106.05. Moreover, these additional elements do not provide any improvement to the technology, improvement to the functioning of the computer, or improvement to the sensors or the display device; they are merely used as general means for collecting and outputting data.
The additional elements “wherein the engagement element is generated using at least an automatic speech recognition model to determine a content datum” provide nothing more than mere instructions to implement an abstract idea on a generic computer. See MPEP 2106.05(f). MPEP 2106.05(f) provides the following considerations for determining whether a claim simply recites a judicial exception with the words “apply it” (or an equivalent), such as mere instructions to implement an abstract idea on a computer: (1) whether the claim recites only the idea of a solution or outcome i.e., the claim fails to recite details of how a solution to a problem is accomplished; (2) whether the claim invokes computers or other machinery merely as a tool to perform an existing process; and (3) the particularity or generality of the application of the judicial exception.
The additional elements “wherein the engagement element is generated using at least an automatic speech recognition model to determine a content datum” are used to generally apply the abstract idea without placing any limits on how the machine learning model functions. Rather, these limitations only recite the outcome of “generating the engagement element to determine a content datum” and do not include any details about how the solution is accomplished. See MPEP 2106.05(f).
The additional elements “wherein the engagement element is generated using at least an automatic speech recognition model to determine a content datum” also merely indicate a field of use or technological environment in which the judicial exception is performed. Although the additional elements “wherein the engagement element is generated using at least an automatic speech recognition model to determine a content datum” limit the identified judicial exception “generating the engagement element to determine a content datum”, this type of limitation merely confines the use of the abstract idea to a particular technological environment (machine learning) and thus fails to add an inventive concept to the claims. See MPEP 2106.05(h).
Further, the steps of “generating a user profile from a user…; identifying an attention parameter as a function of the industrial data and a plurality of industrial stimuli…; generating an engagement element as a function of the attention parameter…; determining productivity data as a function of the generated engagement element; generating a notification as a function of the productivity data and the time period; convert the physiological data received by the at least one biometric sensor into a machine-readable output signal and display the notification using a display device” are recited as being performed by the processor. The processor is recited at a high level of generality. In the limitations “convert the physiological data received by the at least one biometric sensor into a machine-readable output signal and display the notification using a display device”, the processor is used as a tool to perform the function of gathering and outputting data. In the limitations “generating a user profile from a user…; identifying an attention parameter as a function of the industrial data and a plurality of industrial stimuli…; generating an engagement element as a function of the attention parameter…; determining productivity data as a function of the generated engagement element; generating a notification as a function of the productivity data and the time period”, the processor is used to perform an abstract idea, as discussed above in Step 2A, Prong One, such that it amounts to no more than mere instructions to apply the exception using a generic computer. See MPEP 2106.05(f). The additional elements recite generic computer components (the processor, the memory, and software programming instructions) at a high level of generality that merely perform, conduct, carry out, implement, and/or narrow the abstract idea itself.
Accordingly, the additional elements evaluated individually and in combination do not integrate the abstract idea into a practical application because they comprise or include limitations that are not indicative of integration into a practical application such as adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea -- See MPEP 2106.05(f).
These additional elements do not provide any improvements to the technology, improvements to the functioning of the computer, the processor, or the memory, improvements to the plurality of sensors or the biometric sensor, improvements to machine learning, or other technology. They do not recite a particular machine or manufacture that is integral to the claims, and do not transform or reduce a particular article to a different state or thing.
In response to the Applicant’s arguments regarding Example 47, Claim 3, and Example 48, Claim 2, the Examiner submits that in Example 47, Claim 3, the eligible claim recites a method using an artificial neural network to detect malicious network packets. That claim integrates a judicial exception into a practical application by combining steps that result in improved network security; specifically, the claim includes training the network using defined algorithms, detecting anomalous traffic patterns, identifying malicious sources, and automatically taking remedial actions such as blocking network traffic. In Example 48, Claim 2, the eligible claim involves separating speech signals using a deep neural network; mathematical operations are integrated into a practical application by generating modified audio output that improves speech quality and separation. In contrast, the Applicant’s claims recite the additional elements “wherein the engagement element is generated using at least an automatic speech recognition model to determine a content datum”, which are used to generally apply the abstract idea without placing any limits on how the machine learning model functions. Rather, these limitations only recite the outcome of “generating the engagement element to determine a content datum” and do not include any details about how the solution is accomplished. See MPEP 2106.05(f). These additional elements also merely indicate a field of use or technological environment in which the judicial exception is performed. Although the additional elements “wherein the engagement element is generated using at least an automatic speech recognition model to determine a content datum” limit the identified judicial exception “generating the engagement element to determine a content datum”, this type of limitation merely confines the use of the abstract idea to a particular technological environment (machine learning) and thus fails to add an inventive concept to the claims. See MPEP 2106.05(h). Moreover, these additional elements do not provide any improvements to the technology, improvements to the functioning of the computer, improvements to machine learning, or other technology.
Even when viewed in combination, these additional elements do not integrate the recited judicial exception into a practical application.
Step 2B:
As explained with respect to Step 2A, Prong Two, the additional elements of “wherein the engagement element is generated using at least an automatic speech recognition model to determine a content datum” are at best mere instructions to “apply” the abstract ideas, which cannot provide an inventive concept. See MPEP 2106.05(f).
The additional elements “a plurality of sensors comprises at least one biometric sensor”, “convert the physiological data received by the at least one biometric sensor into a machine-readable output signal”, and “display the notification using a display device” were found to be insignificant extra-solution activity in Step 2A, Prong Two, because they were determined to be insignificant limitations amounting to necessary data gathering and outputting. However, a conclusion that an additional element is insignificant extra-solution activity in Step 2A, Prong Two should be re-evaluated in Step 2B. See MPEP 2106.05, subsection I.A. At Step 2B, the evaluation of the insignificant extra-solution activity consideration takes into account whether or not the extra-solution activity is well understood, routine, and conventional in the field. See MPEP 2106.05(g).
As discussed in Step 2A, Prong Two above, the additional elements of “a plurality of sensors comprises at least one biometric sensor”, “convert the physiological data received by the at least one biometric sensor into a machine-readable output signal”, and “display the notification using a display device” are recited at a high level of generality. These elements amount to gathering and displaying data over a network and are well-understood, routine, conventional activity. See MPEP 2106.05(d), subsection II. The courts have recognized the following computer functions as well-understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity: receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network).
As discussed in Step 2A, Prong Two above, the recitation of the processor to perform the limitations “generating a user profile from a user…; identifying an attention parameter as a function of the industrial data and a plurality of industrial stimuli…; generating an engagement element as a function of the attention parameter…; determining productivity data as a function of the generated engagement element; generating a notification as a function of the productivity data and the time period, and display the notification using a display device” amounts to no more than mere instructions to apply the exception using a generic computer component.
Even when considered in combination, these additional elements represent mere instructions to implement an abstract idea or other exception on a computer and insignificant extra-solution activity, which do not provide an inventive concept. Therefore, the claim is not patent eligible.
Accordingly, the 101 rejection is maintained.
II. Claim Rejections - 35 USC § 102
Applicant’s arguments with respect to claims 1-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. A new ground of rejection under 35 U.S.C. 103 is set forth above.
In response to the Applicant’s argument that Elhawary does not disclose “the user profile is generated using a smart assessment”, the Examiner respectfully disagrees and submits that the Specification defines “a smart assessment” at least in para [0018]: a user profile 108 may be generated using a smart assessment. As used in this disclosure, a “smart assessment” is a set of questions that asks for the user’s information as described in this disclosure. In some cases, questions within the smart assessment may include selecting a selection from a plurality of selections as answers. In other cases, questions within the smart assessment may include free user input as answers. Elhawary discloses in para [0298] that the wearable devices 190 each have RFID readers to read the worker's badge or entry card and associate themselves to that worker. This may be done on a daily basis, such that each day a worker may pick up one of several available wearable devices 190 and associate that device with their own profile. Alternatively, the association may be manually created by a worker entering their name, employee ID or other unique identifying feature at a worker interface, so the device can associate the data to that specific worker. Thus, Elhawary’s method allows a worker to enter his/her name, employee ID or other unique identifying feature at a worker interface (free user input as answers). Therefore, Elhawary meets the claimed limitation “the user profile is generated using a smart assessment.”
Conclusion
9. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
10. Claims 1-20 are rejected.
11. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Horseman et al. (US 9,844,344) disclose computer-implemented methods for monitoring the health of an employee while the employee is located in an employee workstation.
Little et al. (US 2009/0135009) disclose systems and methods for providing a sensor-enhanced employee safety evaluation system.
12. Any inquiry concerning this communication or earlier communications from the examiner should be directed to examiner NGA B NGUYEN whose telephone number is (571) 272-6796. The examiner can normally be reached on Monday-Friday 7AM-5PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, Applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Beth Boswell can be reached on (571) 272-6737. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/NGA B NGUYEN/Primary Examiner, Art Unit 3625 March 19, 2026