Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Amendments
This action is in response to amendments filed September 10, 2024, in which Claims 1 and 11 are amended. No claims have been cancelled or added. The amendments have been entered. Claims 1, 3-6, 10, 11, 13-16, and 20 are currently pending.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1, 3-6, 10, 11, 13-16, and 20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claim 1 recites the limitation "the user objectives generated using the first machine learning process" (on page 1, last line). There is a lack of proper antecedent basis in the claims for this limitation, because no user objectives generated using the first machine learning process were previously recited; instead, the user objectives are received, and the first machine learning process is recited as identifying the first rank-ordered instruction set comprising a plurality of instructions.
Claim 11 recites the limitation "the user objectives generated using the first machine learning process" (on page 5, line 7). There is a lack of proper antecedent basis in the claims for this limitation, because no user objectives generated using the first machine learning process were previously recited; instead, the user objectives are received, and the first machine learning process is recited as identifying the first rank-ordered instruction set comprising a plurality of instructions.
Claim 11 (on page 6, in the second-to-last limitation, beginning with "adjusting") refers to "a second ranking process," but a second ranking process was previously recited in reference to generating the first rank-ordered instruction set (page 5, line 11), and so the claim scope is unclear as to whether two second ranking processes are required or whether the same second ranking process is to be used to accomplish both tasks. Note that Claim 1 recites "the second ranking process" in this latter section, while Claim 11 recites "a second ranking process."
The dependent claims are rejected for inheriting the indefiniteness of their respective parent claims.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 3-6, 10, 11, 13-16, and 20 are rejected under 35 U.S.C. 101 because they are directed towards an abstract idea, without significantly more.
Claim 1 recites the limitations of generating a rank-ordered instruction set; determining, using a ranking process and the plurality of user objectives, a first rank-ordered goal set, wherein the first ranking process is performed as a function of a first objective function; correlating categorized physiological data with goals of rank ordered goal sets to create a first training set; identifying, using a … process and the first rank-ordered goal set, a first rank-ordered instruction set including a plurality of instructions as a function of an input comprising the user physiological data, wherein the plurality of instructions includes an instruction for addressing each objective of the plurality of user objectives; measuring … a magnitude of an effect of the instruction on each objective of the plurality of user objectives; modifying the instruction as a function of the effect of the instruction on each objective of the plurality of user objectives generated …; reconciling, using the first … process, opposing instructions in addressing at least an objective; generating, using a second ranking process and the plurality of instructions, the first ranked-ordered for addressing the rank-ordered goal set, wherein the second ranking process is performed as a function of a second objective function; and to generate a second rank-ordered instruction set wherein the generating comprising: classifying, using a classification process, the plurality of user data into categories pertaining to the plurality of instructions in the first rank ordered instruction set generated …; calculating … the effect of at least a user action on the plurality of user objectives … as a function of an input comprising the user action; adjusting, using the second ranking process, a frequency of the instruction based on the categorized user data; and generate the second rank-ordered instruction set using a third ranking process, wherein generating the second rank-ordered instruction set further comprises generating, using a third … process and a second rank-ordered goal set, a second instruction set. Each of these limitations is a function that can be performed within the human mind (measuring an effect is interpreted, in light of the context, as determining an effect; reconciling opposing instructions is interpreted, in light of the context, as determining a final set between two different sets), and these limitations thus fall into the mental process grouping of abstract ideas. Thus, Claim 1 recites an abstract idea.
Claim 1 does not recite any additional elements which could integrate the abstract idea into a practical application, because the additional elements recited consist of: a wearable device configured to log a user action; receiving a plurality of user objectives; providing the rank-ordered instruction set to a user device; receiving, from the user device, a plurality of user data; and iteratively outputting an updated plurality of user objectives (each of which is insignificant extra-solution activity of data gathering or transmitting information for display, by MPEP 2106.05(g), and thus by MPEP 2106.04(d)(I) does not integrate the abstract idea into a practical application); instructions to implement the abstract idea on a computer ("a computing device configured to"; and training and using machine learning processes is merely the implementation of the mental process of learning using a computer or other machinery as a tool), which cannot integrate a judicial exception into a practical application; and using a first machine learning process that has been trained as a function of a first training set to perform the identifying, a second machine learning process for calculating the effect, and a third machine learning process to generate a second rank-ordered instruction set, which consist of merely using a computer or other machinery (in this case, machine learning) to implement the abstract idea, which by MPEP 2106.05(f) cannot integrate the abstract idea into a practical application. Thus, the claim is directed to an abstract idea.
Claim 1 does not amount to significantly more than an abstract idea, because the recited additional elements do not provide an inventive concept. Specifically, instructions to implement an abstract idea on a computer or other machinery (e.g., by a machine learning model) do not qualify as significantly more (MPEP 2106.05(I)(A)(i) and 2106.05(f)), nor does adding insignificant extra-solution activity (MPEP 2106.05(I)(A)(ii)), where the data gathering and transmitting of information are well-understood, routine, and conventional, as described in MPEP 2106.05(d) ("transmitting data over a network"), and the use of a wearable sensor to collect physiological data of various types is evidenced as routine, conventional, and well-understood by Lau ([0021] and [0025] treat the sensors and the sensed data as merely one of a list of possible known devices that do not need explanation, indicating that none of them are notable or could be inventive). Taken both separately and in combination (i.e., there is no nexus between the receiving and outputting of data or the computer implementation that provides an inventive concept), the additional elements do not add significantly more to the abstract idea. Therefore, Claim 1 is subject matter ineligible.
Dependent Claims 3-6 and 10 are also directed to an abstract idea, without significantly more. Specifically, Claim 3 recites only an additional step in the mental process (determining the relative importance of each objective). Claim 4 recites an additional step in the mental process (measuring, using the machine-learning process, an effect of a solution on an objective) with instructions to implement the step on a computer, and additional extra-solution activity of data gathering (retrieving from a database at least an instruction corresponding to an objective), which can neither integrate the abstract idea into a practical application nor provide significantly more than the abstract idea. Claim 5 recites only additional steps of a mental process (weighing each instruction of the first set of instructions; determining a suitable timing; generating the first rank-ordered list as a function of the instructions and the timing). Claim 6 only requires that the plurality of user data be received later than the first rank-ordered instruction set is generated; adding an order to the additional element does not, in this case, represent an inventive concept, and the additional element remains insignificant extra-solution activity of receiving data required for the implementation of the abstract idea. Claim 10 recites only additional steps in a mental process (weighing each instruction of the second set of instructions; determining a suitable timing; generating the second rank-ordered list as a function of the instructions and the timing). Therefore, none of the dependent claims includes any additional elements which integrate the abstract idea into a practical application or provide significantly more than the abstract idea itself. Claims 3-6 and 10 are thus subject matter ineligible.
Claims 11, 13-16, and 20 recite the method performed by the system of Claims 1, 3-6, and 10, respectively, and thus are rejected as directed towards an abstract idea, without significantly more, for the reasons set forth in the rejections of Claims 1, 3-6, and 10, respectively.
Response to Arguments
Applicant’s arguments filed August 14, 2025, have been fully considered, but are not fully persuasive.
Applicant’s amendments have overcome some of the 35 U.S.C. 112(b) rejections of the previous Office action, but did not address the third indefiniteness issue raised in that action (pg. 3 of the non-final Office action, final paragraph). Applicant’s amendments have further introduced an additional indefiniteness issue, as noted in the current rejection.
Applicant’s arguments regarding the 35 U.S.C. 101 rejections of the claims, as directed towards an abstract idea without significantly more, have been fully considered, but are not persuasive.
Applicant asserts, on pg. 4 of the response regarding Step 2A, Prong 1, that “the amended claims do not recite a mental process.” Applicant argues (pg. 5 of the response, 2nd paragraph) that the mental process steps identified by the examiner cannot be performed in the human mind “because it requires the generation and iterative refinement of a rank-ordered instruction set using specific machine learning processes operating on complex, categorized user data within a computing environment.” However, applicant’s statement merely describes using a computer or other machinery (i.e., the numbered machine learning processes) to perform a mental process (generation and refinement of a ranked set of instructions). MPEP 2106.04(a)(2)(III)(C) clearly states “A Claim That Requires a Computer May Still Recite a Mental Process”: performing a mental process on a generic computer does not mean the mental process is not a mental process, and using a computer as a tool to perform a mental process does not make the claim eligible. Applicant continues to describe steps which use trained machine learning models to perform mental process steps. Merely training a machine learning model to perform a mental process function (a process analogous to programming a computer to perform a mental process function) still falls within the scope of MPEP 2106.05(f)(2), using a computer or other machinery as a tool to perform a judicial exception. Note that the rejection does not assert that the use of the models or the training of the models is a mental process, only that the models are used to perform mental processes.
Note that the claim still recites several limitations that can fairly be read as mental processes, and these have been identified as such in the rejection. Per MPEP 2106.04(a)(2)(III)(C) (“A claim that requires a computer may still recite a mental process”) and MPEP 2106.05(f)(2) (“Whether the claim invokes computers or other machinery merely as a tool to perform an existing process”), the use of a computer or a neural network to perform the recited steps is irrelevant in identifying those steps as a mental process. Therefore, by Step 2A, Prong 1 of the subject matter eligibility guidance of MPEP 2106, the claim recites an abstract idea.
Regarding Step 2A, Prong 2, applicant refers to Example 47, Claim 3, of the Subject Matter Eligibility examples. Regarding this claim, the example states “the claim reflects the improvement in step (d), dropping potentially malicious packets in step (e), and locking future traffic from the source address in step (f).” As applicant notes, “These actions are tied to a real-time proactive system for addressing network intrusions.” In contrast, the claims of the instant application do not recite a similar technological improvement: the outcome of the claims is a set of instructions (for a human to achieve a human health objective), not a technical solution to a technical problem (such as preventing network intrusions).
Applicant next refers to Example 48, Claim 2; but again, higher-quality performance in a speech synthesis system is a technical improvement to a technical problem, while creating plans for humans to follow to achieve their health goals is not.
Regarding Step 2B, applicant asserts that the combination of machine learning processes provides an inventive concept. However, the claims do not recite “interlinked” machine learning processes, as applicant asserts, that might become more than the sum of their parts or in combination provide significantly more than the abstract idea itself. Instead, each machine learning process merely performs a mental process; there is no nexus provided by using three generic machine learning processes that each independently perform the recited abstract idea steps. Merely reciting multiple machine learning processes to perform multiple independent mental process tasks is not inventive.
Conclusion
As noted in the previous Office action, the combination of limitations of the independent claims has been searched but not uncovered in the prior art.
Applicant’s amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BRIAN M SMITH whose telephone number is (469) 295-9104. The examiner can normally be reached Monday through Friday, 8:00 am - 4:00 pm Pacific.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kakali Chaki can be reached on (571) 272-3719. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/BRIAN M SMITH/Primary Examiner, Art Unit 2122