Prosecution Insights
Last updated: April 19, 2026
Application No. 18/205,472

ESTIMATING GAIT EVENT TIMES & GROUND CONTACT TIME AT WRIST

Final Rejection (§101, §102, §103)
Filed
Jun 02, 2023
Examiner
MONTGOMERY, MELISSA JO
Art Unit
3791
Tech Center
3700 — Mechanical Engineering & Manufacturing
Assignee
Apple Inc.
OA Round
2 (Final)
Grant Probability: 10% (At Risk)
Estimated OA Rounds: 3-4
Estimated Time to Grant: 3y 2m
Grant Probability With Interview: 35%

Examiner Intelligence

Career Allow Rate: 10% (1 granted / 10 resolved; -60.0% vs TC avg)
Interview Lift: +25.0% in resolved cases with interview (strong lift)
Avg Prosecution: 3y 2m (typical timeline)
Career History: 63 total applications across all art units; 53 currently pending

Statute-Specific Performance

§101: 26.9% (-13.1% vs TC avg)
§103: 29.8% (-10.2% vs TC avg)
§102: 18.8% (-21.2% vs TC avg)
§112: 23.7% (-16.3% vs TC avg)
Deltas are vs. Tech Center average estimate • Based on career data from 10 resolved cases

Office Action

§101 §102 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

The amendments filed 02 FEBRUARY 2026 have been entered. Claims 1 – 20 are pending. Applicant’s amendments to the drawings have overcome each and every objection to the drawings previously applied in the office action dated 08 SEPTEMBER 2025. Applicant’s amendments have not overcome each and every objection to the claims previously applied in the office action dated 08 SEPTEMBER 2025. Applicant’s arguments and amendments to the claims have overcome each and every rejection of the claims under 35 U.S.C. 112 previously applied in the office action dated 08 SEPTEMBER 2025.

Drawings

The drawings were received on 02 FEBRUARY 2026. These drawings are accepted.

Specification

The disclosure is objected to because of the following informalities: in the Applicant’s Arguments submitted 02 FEBRUARY 2026, it is argued that the contents of Figure 2 are “novel and nonobvious observations made by the inventors”. This is inconsistent with Applicant’s specification at [0030], which asserts that “Figs. 1 and 2 were adapted from Uchida…” If Figure 2 was created by the inventors, as described in Applicant’s arguments, then the specification should be amended to reflect that Figure 2 is not from Uchida. Appropriate correction is required.

Claim Objections

Claims 4 and 18 are objected to because of the following informalities: the term “includes initial contact event time”. It is suggested to insert “an” before “initial” for readability, resulting in “includes an initial contact event time”. Appropriate correction is required.

Claims 5 and 19 are objected to because of the following informalities: the term “includes toe-off event time”. It is suggested to insert “a” before “toe-off” for readability, resulting in “includes a toe-off event time”.
Appropriate correction is required.

Claims 6 and 20 are objected to because of the following informalities: the term “includes ground contact time (GCT)”. It is suggested to insert “a” before “ground” for readability, resulting in “includes a ground contact time (GCT)”. Appropriate correction is required.

Claim 13 is objected to because of the following informalities: the term “predict the GCT” is suggested to be revised to “predict GCT” for readability. Appropriate correction is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1 - 20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.

Regarding Claim 1, the claim recites "an act or step, or series of acts or steps" and is therefore a process, which is a statutory category of invention (Step 1). The claim is then analyzed to determine whether it is directed to any judicial exception (Step 2A, Prong 1).

Regarding Claim 15, the claim recites an apparatus, which is one of the statutory categories of invention (Step 1). The claim is then analyzed to determine whether it is directed to any judicial exception (Step 2A, Prong 1). Each of Claims 1 - 20 has been analyzed to determine whether it is directed to any judicial exceptions.

Step 2A, Prong 1

Each of Claims 1 - 20 recites at least one step or instruction for observations, evaluations, judgments, and opinions, which are grouped as a mental process under the 2019 PEG. The claimed invention involves making observations, evaluations, judgments, and opinions, which are concepts performed in the human mind under the 2019 PEG.
Accordingly, each of Claims 1 - 20 recites an abstract idea. Specifically, Independent Claims 1 and 15 recite (underlined are observations, judgments, evaluations, or opinions, which are grouped as a mental process under the 2019 PEG) (additional elements bolded, see Step 2A, Prong 2):

Claim 1: A method comprising: obtaining, with at least one processor of a wrist-worn device, sensor data indicative of acceleration and rotation rate; and predicting, with the at least one processor, at least one gait event time based on a machine learning (ML) model with the acceleration and rotation rate as input to the ML model.

Claim 15: A system comprising: at least one processor; memory storing instructions that when executed by the at least one processor, cause the at least one processor to perform operations comprising: obtaining data indicative of acceleration and rotation rate; and predicting at least one gait event time based on a machine learning (ML) model with the acceleration and rotation rate as input to the ML model. (observation, judgment or evaluation, which is grouped as a mental process under the 2019 PEG)

These underlined limitations describe a mathematical calculation and/or a mental process, as a skilled practitioner is capable of performing the recited limitations and making a mental assessment thereafter. Examiner notes that nothing in the claims suggests that the limitations cannot be practically performed by a human with the aid of a pen and paper, or by using a generic computer as a tool to perform mathematical calculations and/or mental process steps in real time. Examiner additionally notes that nothing in the claims suggests an undue level of complexity such that the mathematical calculations and/or the mental process steps cannot be practically performed by a human with the aid of a pen and paper, or using a generic computer as a tool to perform mathematical calculations and/or mental process steps.
For example, in Independent Claims 1 and 15, these limitations include: evaluating at least one gait event time based on a machine learning (ML) model with the acceleration and rotation rate as input to the ML model.

Similarly, Dependent Claims 2 – 14 and 16 - 20 include the following abstract limitations, in addition to the aforementioned limitations in Independent Claims 1 and 15 (underlined: observation, judgment or evaluation, which is grouped as a mental process under the 2019 PEG):

combining multiple predictions of ground contact time (GCT) per running step. Observation and judgment to evaluate multiple predictions of ground contact time (GCT) per running step.

averaging the multiple GCT predictions over time. Evaluation by averaging the multiple GCT predictions over time.

determining the GCT balance from the predicted GCT. Observation and judgment to evaluate the GCT balance from the predicted GCT.

determining the GCT as right foot GCT or left foot GCT. Observation and judgment to evaluate the GCT as right foot GCT or left foot GCT.

determining the GCT balance from the determined left foot GCT or the determined right foot GCT. Observation and judgment to evaluate the GCT balance from the determined left foot GCT or the determined right foot GCT.

prior to predicting, converting the sensor data from a sensor reference coordinate frame to an inertial reference frame. Prior to predicting, evaluation to convert the sensor data from a sensor reference coordinate frame to an inertial reference frame.

to predict the GCT. Observation and judgment to evaluate to predict the GCT.

All of these are grouped as mental processes or mathematical algorithms under the 2019 PEG. Accordingly, as indicated above, each of the above-identified claims recites an abstract idea.
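The GCT-balance limitations recited above reduce to simple arithmetic over per-step measurements, which is the crux of the Examiner's "mental process or mathematical algorithm" characterization. As an illustration only, the determination can be sketched in a few lines; the claims recite no particular formula, so the left/(left+right) percentage split used here is an assumption:

```python
def gct_balance(left_gct_ms, right_gct_ms):
    """Express left/right ground contact time (GCT) as a percentage split.

    Hypothetical formula: the claims only recite "determining the GCT
    balance"; a left/(left+right) split is one common convention.
    """
    total = left_gct_ms + right_gct_ms
    left_pct = 100.0 * left_gct_ms / total
    return left_pct, 100.0 - left_pct

# e.g., 210 ms left vs 190 ms right -> (52.5, 47.5)
left, right = gct_balance(210.0, 190.0)
```

A calculation of this size is exactly the sort of step the rejection argues could be performed with pen and paper once the per-foot GCT values are known.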
Step 2A, Prong 2

The above-identified abstract ideas in each of Independent Claims 1 and 15 (and their respective Dependent Claims) are not integrated into a practical application under the 2019 PEG because the additional elements (identified in Claims 1 - 20), either alone or in combination, generally link the use of the above-identified abstract ideas to a particular technological environment or field of use.

More specifically, the additional elements recited include a “processor”, “at least one processor”, “wrist-worn device”, “memory”, “machine learning model”, “neural network”, and “long short-term memory (LSTM) neural network” in Independent Claims 1 and 15 (and their respective Dependent Claims). These components are recited at a high level of generality, i.e., as a processor performing a generic function of processing data (the obtaining, predicting, combining, and determining); and a memory performing a generic function of storing data (the storing). These generic hardware component limitations for “processor”, “at least one processor”, “wrist-worn device”, “memory”, “machine learning model”, “neural network”, and “long short-term memory (LSTM) neural network” are no more than mere instructions to apply the exception using generic computer and hardware components. As such, these additional elements do not impose any meaningful limits on practicing the abstract idea.
Further additional elements from Independent Claims 1 and 15 include pre-solution activity limitations, such as: obtaining, with at least one processor of a wrist-worn device, sensor data indicative of acceleration and rotation rate; at least one processor; memory storing instructions that when executed by the at least one processor, cause the at least one processor to perform operations comprising: obtaining data indicative of acceleration and rotation rate.

In addition to the aforementioned extra-solution activity limitations in Independent Claims 1 and 15, additional extra-solution activity limitations recited in Dependent Claims 2 – 14 and 16 - 20 include:

wherein the at least one gait event time includes initial contact event time.
wherein the at least one gait event time includes toe-off event time.
wherein the at least one gait event time includes ground contact time (GCT).
wherein the machine learning model is a neural network.
wherein the neural network is a long short-term memory (LSTM) neural network.
wherein the neural network includes a single LSTM with three outputs that uses internal representations learned for gait events.
wherein the LSTM neural network includes an LSTM layer, encoding layers, a number of fully connected layers or dense layers, and an output layer.

These pre-solution measurement elements are insignificant extra-solution activity, setting up the parameters of the system, and serve as data-gathering for the subsequent steps. The “processor”, “at least one processor”, “wrist-worn device”, “memory”, “machine learning model”, “neural network”, and “long short-term memory (LSTM) neural network” as recited in Independent Claims 1 and 15 (and their respective Dependent Claims) are generically recited computer and hardware elements which do not improve the functioning of a computer, or any other technology or technical field.
Nor do these above-identified additional elements serve to apply the above-identified abstract idea with, or by use of, a particular machine, effect a transformation, or apply or use the above-identified abstract idea in some other meaningful way beyond generally linking the use thereof to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception. Furthermore, the above-identified additional elements do not add a meaningful limitation to the abstract idea because they amount to simply implementing the abstract idea on a computer. For at least these reasons, the abstract ideas identified above in Independent Claims 1 and 15 (and their dependent claims) are not integrated into a practical application under the 2019 PEG.

Moreover, the above-identified abstract idea is not integrated into a practical application under the 2019 PEG because the claimed method and system merely implement the above-identified abstract idea (e.g., mental process and certain method of organizing human activity) using rules (e.g., computer instructions) executed by a computer processor as claimed. In other words, these claims are merely directed to an abstract idea with additional generic computer elements which do not add a meaningful limitation to the abstract idea because they amount to simply implementing the abstract idea on a computer.

Additionally, Applicant’s specification does not include any discussion of how the claimed invention provides a technical improvement realized by these claims over the prior art, or any explanation of a technical problem having an unconventional technical solution that is expressed in these claims. That is, as in Affinity Labs of Tex. v. DirecTV, LLC, the specification fails to provide sufficient details regarding the manner in which the claimed invention accomplishes any technical improvement or solution.
Thus, for these additional reasons, the abstract idea identified above in Independent Claims 1 and 15 (and their dependent claims) is not integrated into a practical application under the 2019 PEG. Accordingly, Independent Claims 1 and 15 (and their dependent claims) are each directed to an abstract idea under the 2019 PEG.

Step 2B

None of Claims 1 - 20 includes additional elements that are sufficient to amount to significantly more than the abstract idea, for at least the following reasons. These claims require the additional elements of: “processor”, “at least one processor”, “wrist-worn device”, “memory”, “machine learning model”, “neural network”, and “long short-term memory (LSTM) neural network” as recited in Independent Claims 1 and 15 (and their dependent claims). These additional elements, as discussed with respect to Step 2A, Prong 2, amount to no more than mere instructions to apply the exception using generic computer and hardware components. The same analysis applies here in Step 2B, i.e., mere instructions to apply an exception using a generic computer component cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B. The above-identified additional elements are generically claimed computer components which enable the above-identified abstract idea(s) to be conducted by performing the basic functions of automating mental tasks. The courts have recognized such computer functions as well-understood, routine, and conventional functions when claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity. See Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir.
2015); and OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93.

Per Applicant’s specification, the “processor” and “at least one processor” are described generically at [0044] as “one or more hardware data processors, image processors and/or processors 604”. The “processor” and/or “at least one processor” is shown as generic box element “processor(s) 604” in Figure 6.

Per Applicant’s specification, the “wrist-worn device” is described generically at [0019] as a “single wrist worn device (e.g. smart watch)”. It is shown as generic box element “watch accel and gyro” in Fig 4.

Per Applicant’s specification, the “memory” is described generically in [0044] as “memory interface 602” and in [0052], “memory interface 602 can be coupled to memory 650”, with “memory 650” given as exemplary “high-speed random access memory and/or non-volatile memory”, “optical storage devices”. The “memory” is shown as “memory 650” in Figure 6.

Per Applicant’s specification, the “machine learning model” is described as a “neural network” [0015] and a “long short-term memory (LSTM) neural network” in [0016], [0017], [0035] – [0038], [0041], [0042], and [0044] – [0045] with layers. It is shown as an LSTM in Figure 3, with inputs and outputs shown, and as block 502 in Figure 5, “predicting at least one gait event time based on a machine learning (ML) model…”

Accordingly, in light of Applicant’s specification, the claimed terms “processor”, “at least one processor”, “wrist-worn device”, “memory”, “machine learning model”, “neural network”, and “long short-term memory (LSTM) neural network” are reasonably construed as generic computing and hardware devices. Like SAP America, Inc. v. InvestPic, LLC (Fed. Cir. 2018), it is clear, from the claims themselves and the specification, that these limitations require no improved computer resources, just already available computers, with their already available basic functions, to use as tools in executing the claimed process.
Furthermore, Applicant’s specification does not describe any special programming or algorithms required for the “processor”, “at least one processor”, “wrist-worn device”, “memory”, “machine learning model”, “neural network”, and “long short-term memory (LSTM) neural network”. This lack of disclosure is acceptable under 35 U.S.C. §112(a) since this hardware performs non-specialized functions known by those of ordinary skill in the computer arts. By omitting any specialized programming or algorithms, Applicant’s specification essentially admits that this hardware is conventional and performs well-understood, routine, and conventional activities in the computer industry or arts. In other words, Applicant’s specification demonstrates the well-understood, routine, conventional nature of the above-identified additional elements because it describes these additional elements in a manner that indicates that the additional elements are sufficiently well-known that the specification does not need to describe the particulars of such additional elements to satisfy 35 U.S.C. § 112(a) (see Berkheimer memo from April 19, 2018, (III)(A)(1) on page 3). Adding hardware that performs “‘well understood, routine, conventional activit[ies]’ previously known to the industry” will not make claims patent-eligible (TLI Communications).

The recitation of the above-identified additional limitations in Independent Claims 1 and 15 (and their dependent claims) amounts to mere instructions to implement the abstract idea on a computer. Simply using a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data), or simply adding a general-purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice or mathematical equation), does not provide significantly more. See Affinity Labs v. DirecTV, 838 F.3d 1253, 1262, 120 USPQ2d 1201, 1207 (Fed. Cir.
2016) (cellular telephone); and TLI Communications LLC v. AV Automotive, LLC, 823 F.3d 607, 613, 118 USPQ2d 1744, 1748 (Fed. Cir. 2016) (computer server and telephone unit). Moreover, implementing an abstract idea on a generic computer does not add significantly more, similar to how the recitation of the computer in the claim in Alice amounted to mere instructions to apply the abstract idea of intermediated settlement on a generic computer.

A claim that purports to improve computer capabilities or to improve an existing technology may provide significantly more. McRO, Inc. v. Bandai Namco Games Am. Inc., 837 F.3d 1299, 1314-15, 120 USPQ2d 1091, 1101-02 (Fed. Cir. 2016); and Enfish, LLC v. Microsoft Corp., 822 F.3d 1327, 1335-36, 118 USPQ2d 1684, 1688-89 (Fed. Cir. 2016). However, a technical explanation as to how to implement the invention should be present in the specification for any assertion that the invention improves upon conventional functioning of a computer, or upon conventional technology or technological processes. That is, the disclosure must provide sufficient details such that one of ordinary skill in the art would recognize the claimed invention as providing an improvement. Here, Applicant’s specification does not include any discussion of how the claimed invention provides a technical improvement realized by these claims over the prior art, or any explanation of a technical problem having an unconventional technical solution that is expressed in these claims. Instead, as in Affinity Labs of Tex. v. DirecTV, LLC, 838 F.3d 1253, 1263-64, 120 USPQ2d 1201, 1207-08 (Fed. Cir. 2016), the specification fails to provide sufficient details regarding the manner in which the claimed invention accomplishes any technical improvement or solution.
For at least the above reasons, the apparatus and method of Claims 1 - 20 are directed to applying an abstract idea as identified above on a general-purpose computer without (i) improving the performance of the computer itself, or (ii) providing a technical solution to a problem in a technical field. None of Claims 1 - 20 provides meaningful limitations to transform the abstract idea into a patent-eligible application of the abstract idea such that these claims amount to significantly more than the abstract idea itself.

Taking the additional elements individually and in combination, the additional elements do not provide significantly more. Specifically, when viewed individually, the above-identified additional elements for Step 2A, Prong 2 in Independent Claims 1 and 15 (and their dependent claims) do not add significantly more because they are simply an attempt to limit the abstract idea to a particular technological environment. That is, neither the general computer elements nor any other additional element adds meaningful limitations to the abstract idea because these additional elements represent insignificant extra-solution activity. When viewed as a combination, these above-identified additional elements simply instruct the practitioner to implement the claimed functions with well-understood, routine, and conventional activity specified at a high level of generality in a particular technological environment. As such, there is no inventive concept sufficient to transform the claimed subject matter into a patent-eligible application. When viewed as a whole, the above-identified additional elements do not provide meaningful limitations to transform the abstract idea into a patent-eligible application of the abstract idea such that the claims amount to significantly more than the abstract idea itself.
Thus, Claims 1 - 20 merely apply an abstract idea to a computer and do not (i) improve the performance of the computer itself (as in Bascom and Enfish), or (ii) provide a technical solution to a problem in a technical field (as in DDR). Therefore, none of Claims 1 - 20 amounts to significantly more than the abstract idea itself. Accordingly, Claims 1 - 20 are not patent-eligible and are rejected under 35 U.S.C. 101.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 4 – 11, 15, and 18 – 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Chang et al.
(United States Patent Application Publication US 2017/0188894 A1), hereinafter Chang.

Regarding Claim 1, Chang discloses A method ([Abstract]) comprising: obtaining, with at least one processor ([0031] “activity measurement device 110…include…a processor”) of a wrist-worn device ([0031] “activity monitoring device 110…mounted to a participant…integrated into a wearable such as a...bracelet, a watch…”), sensor data indicative of acceleration ([0032] “inertial measurement system 112…measure multiple kinematic properties of an activity…accelerometer”; [0055] “kinematic measurements can include acceleration”) and rotation rate ([0055] “…angular velocity”); and predicting, with the at least one processor ([0031] “processor”), at least one gait event time ([0121] “predicting fatigue state based on current biomechanical signals. The predicted fatigue state may be used in providing pacing and distance targets before or during an activity session.”; [All of 0113] including “participant will have general patterns in ground contact time. Ground contact time will generally increase during a run as the runner begins to fatigue.
The rate of change, the amount of change, and/or the values encountered during a run may be used in detecting a fatigue condition of a fatigue model.”; [0117] “Detection of a change in biomechanical signals, such as ground contact time, sagittal tilt, cadence or motion paths, can additionally or alternatively use various machine learning techniques.”, “comfort motion path…”; [0059]) (Examiner notes that the time predicted is the ground contact time, and how it would change over time during an activity session with the presence of fatigue) based on a machine learning (ML) model ([0059] “Detecting an action pattern can additionally or alternatively use machine intelligence such as deep learning, machine learning...”; [0117] “Detection of a change in biomechanical signals, such as ground contact time…can…use various machine learning techniques.”; [0109]) with the acceleration and rotation rate as input to the ML model ([0059] “…identifying the action comprises identifying and selecting a window associated with an action…monitoring the kinematic data…and detecting an action pattern”; [0055] “kinematic measurements can include acceleration…angular velocity”).
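The Claim 1 mapping above turns on IMU samples (acceleration and rotation rate) being supplied as input to an ML model, with the dependent claims adding a sensor-frame-to-inertial-frame conversion prior to predicting. A minimal sketch of that input pipeline follows; the function names, window shapes, and the rotation-matrix parameterization are assumptions for illustration, not taken from the application or from Chang:

```python
# Hypothetical sketch: rotate each IMU sample into an inertial frame,
# then stack acceleration + rotation rate into a window an ML model
# (e.g., an LSTM) could consume. Plain Python, dependency-free.

def rotate(R, v):
    """Apply a 3x3 rotation matrix R (list of rows) to a 3-vector v."""
    return [sum(R[i][j] * v[j] for j in range(3)) for i in range(3)]

def build_input_window(R, accel_samples, gyro_samples):
    """Concatenate each (accel, gyro) pair into one 6-feature row,
    expressing both in the inertial frame first (cf. the 'prior to
    predicting, converting' limitation of the dependent claims)."""
    return [rotate(R, a) + rotate(R, g)
            for a, g in zip(accel_samples, gyro_samples)]

# Identity rotation: sensor frame already aligned with the inertial frame.
I3 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
window = build_input_window(I3,
                            [[0.0, 0.0, 9.81], [0.1, 0.0, 9.70]],
                            [[0.0, 0.2, 0.0], [0.0, 0.3, 0.0]])
# window is a 2x6 list of rows, each [ax, ay, az, gx, gy, gz].
```

Nothing here implies a particular model; the sketch only shows the data-gathering and frame-conversion steps the Office Action characterizes as pre-solution activity.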
Regarding Claim 15, Chang discloses A system ([Abstract]) comprising: at least one processor ([0031] “processor”); memory storing instructions ([0129] “computer-readable medium storing computer-readable instructions…flash memory…”) that when executed by the at least one processor ([0129] “instructions can be executed by computer-executable components”, “computer-executable component can be a processor”), cause the at least one processor to perform operations ([0129] “processor…execute the instructions”) comprising: obtaining data indicative of acceleration ([0032] “inertial measurement system 112…measure multiple kinematic properties of an activity…accelerometer”; [0055] “kinematic measurements can include acceleration”) and rotation rate ([0055] “…angular velocity”); and predicting at least one gait event time ([0121] “predicting fatigue state based on current biomechanical signals. The predicted fatigue state may be used in providing pacing and distance targets before or during an activity session.”; [All of 0113] including “participant will have general patterns in ground contact time. Ground contact time will generally increase during a run as the runner begins to fatigue.
The rate of change, the amount of change, and/or the values encountered during a run may be used in detecting a fatigue condition of a fatigue model.”; [0117] “Detection of a change in biomechanical signals, such as ground contact time, sagittal tilt, cadence or motion paths, can additionally or alternatively use various machine learning techniques.”, “comfort motion path…”; [0059]) (Examiner notes that the time predicted is the ground contact time, and how it would change over time during an activity session with the presence of fatigue) based on a machine learning (ML) model ([0059] “Detecting an action pattern can additionally or alternatively use machine intelligence such as deep learning, machine learning...”; [0117] “Detection of a change in biomechanical signals, such as ground contact time…can…use various machine learning techniques.”) with the acceleration and rotation rate as input to the ML model ([0059] “…identifying the action comprises identifying and selecting a window associated with an action…monitoring the kinematic data…and detecting an action pattern”; [0055] “kinematic measurements can include acceleration…angular velocity”).

Regarding Claims 4 and 18, Chang discloses as described above, The method of claim 1 and The system of claim 15, respectively. For the remainder of Claims 4 and 18, Chang discloses wherein the at least one gait event time includes initial contact event time ([0076] “event corresponding to when the foot makes initial contact (e.g., heel strike or initial contact) with the ground.”).

Regarding Claims 5 and 19, Chang discloses as described above, The method of claim 1 and The system of claim 15, respectively.
For the remainder of Claims 5 and 19, Chang discloses wherein the at least one gait event time includes toe-off event time ([0076] “event corresponding to the time of when the foot leaves the ground (i.e., "toe-off")”; [0113] “values encountered during a run…detecting a fatigue condition of a fatigue model”; [0109]).

Regarding Claims 6 and 20, Chang discloses as described above, The method of claim 1 and The system of claim 15, respectively. For the remainder of Claims 6 and 20, Chang discloses wherein the at least one gait event time includes ground contact time (GCT) ([0062] “a set of running biomechanical signals can include motion paths, ground contact time”).

Regarding Claim 7, Chang discloses as described above, The method of claim 6. For the remainder of Claim 7, Chang discloses further comprising: determining GCT balance from the predicted GCT ([0125] “the ground contact time for the right foot may gradually grow more than the ground contact time of the left foot indicating limping.”; [0024] “Quantitatively, the "composure" or balance of the running motion is lost when fatigue sets in for a participant.”) (Examiner notes that a “limping” walk has an unbalanced cadence that is not even in duration between the two feet.)

Regarding Claim 8, Chang discloses as described above, The method of claim 7. For the remainder of Claim 8, Chang discloses further comprising determining the GCT as right foot GCT ([0125] “ground contact time for the right foot”) or left foot GCT ([0125] “ground contact time of the left foot”).

Regarding Claim 9, Chang discloses as described above, The method of claim 8.
For the remainder of Claim 9, Chang discloses further comprising determining the GCT balance from the determined left foot GCT or the determined right foot GCT ([0125] “the ground contact time for the right foot may gradually grow more than the ground contact time of the left foot indicating limping.”; [0024]) (Examiner notes that a “limping” walk has an unbalanced cadence that is not even in duration between the two feet.)

Regarding Claim 10, Chang discloses as described above, The method of claim 1. For the remainder of Claim 10, Chang discloses further comprising: prior to predicting, converting the sensor data from a sensor reference coordinate frame ([0055] “kinematic measurements collected by the activity monitoring device are preferably along a set of orthonormal axes (e.g. an x, y, z, coordinate system)”; Figs 5 and 6) to an inertial reference frame ([0055] “…the axis of measurements may not be aligned with a preferred or assumed coordinate system of the activity”, “…axis of measurement by one or more sensor(s) may be calibrated for analysis.”).

Regarding Claim 11, Chang discloses as described above, The method of claim 1. For the remainder of Claim 11, Chang discloses wherein the machine learning model is a neural network ([0084] “neural networks…can be used to characterize the shapes and variability of the motion paths created by the runner.”).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 2 – 3 and 16 – 17 are rejected under 35 U.S.C. 103 as being unpatentable over Chang in view of Vu et al., “ED-FNN: A New Deep Learning Algorithm to Detect Percentage of the Gait Cycle for Powered Prostheses”, hereinafter Vu.

Regarding Claims 2 and 16, Chang discloses as described above, The method of claim 1 and The system of claim 15, respectively. For the remainder of Claims 2 and 16, Chang discloses ground contact time (GCT) per running step ([0075] “the average time duration a foot is in contact with the ground can be reported as a running average of the amount of time a foot is in contact with the ground for each step or stride.”).
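Returning to the coordinate-frame conversion mapped onto Claim 10 above: converting sensor data from a sensor reference frame to an inertial reference frame amounts to rotating each measurement by the device's estimated orientation. A minimal sketch under that assumption (the rotation matrix here is an arbitrary example, not Chang's calibration procedure):

```python
import math

def rotate_to_inertial(accel_sensor, rotation_matrix):
    """Rotate a sensor-frame acceleration vector into the inertial frame.

    rotation_matrix is a 3x3 row-major orientation estimate (e.g., from a
    sensor-fusion filter); accel_sensor is an (x, y, z) tuple.
    """
    return tuple(
        sum(rotation_matrix[i][j] * accel_sensor[j] for j in range(3))
        for i in range(3)
    )

# 90-degree rotation about the z-axis: the sensor x-axis maps onto inertial y.
theta = math.pi / 2
rz = [[math.cos(theta), -math.sin(theta), 0.0],
      [math.sin(theta),  math.cos(theta), 0.0],
      [0.0,              0.0,             1.0]]
print(rotate_to_inertial((1.0, 0.0, 0.0), rz))  # ~(0.0, 1.0, 0.0)
```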
Chang does not specifically disclose combining multiple predictions of ground contact time (GCT) per running step.

Vu teaches a deep learning algorithm LSTM with an exponential window that incorporates sampling from “near” and “far” points of IMU (inertial measurement unit) signal data to form predictions of gait cycle data, including the stance phase (the phase that encompasses a foot’s “ground contact time” in the gait cycle). Specifically for Claims 2 and 16, Vu teaches combining multiple predictions of ground contact time (GCT) per running step ([Page 8, Paragraph 1 - 2] “exponential window”, “…obtain a window that includes the knowledge of samples that are close by…also obtain knowledge of samples that are far away…contribute to the prediction of the cycle trend”; [Page 9] Fig 4, “Gait event prediction”; [Page 3] Figure 1, “Stance Phase” 0 - 60%; Table 1; Figure 6). (Examiner notes that the “window” is smaller than a step and is generating running predictions of GCT as it is incorporating predictions from samples “close by” and “far away”.)

Vu provides a motivation to combine at [Page 15, Paragraph 1] with “a robust walking gait percent detection method that can detect 100 percent of the gait cycle for walking on flat ground” and [Page 15, Bottom Paragraph] with “our model was built with the purpose of working on scenarios where computational power is an issue. The ED-FNN requires less samples than a normal FNN or RNN. Consequently, it takes less computational power for prediction. Additionally, we also included a forecasting option that allows the network to predict future percentages in the case of a delay in the system.” It would be predictable to use the window LSTM method in any similar gait prediction device that is collecting IMU data and analyzing with machine learning, such as the system and method disclosed by Chang, as it would continue to operate with the function of predicting GCT from inertial sensor data.
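As a rough illustration of the windowed combination attributed to Vu, the sketch below blends per-sample GCT predictions with exponentially decaying weights, so that samples “close by” dominate while samples “far away” still contribute. The weighting scheme, decay constant, and sample values are assumptions for illustration, not Vu's exact exponential-window formulation:

```python
import math

def combine_gct_predictions(predictions_ms, decay=0.5):
    """Blend a window of per-sample GCT predictions into one estimate.

    The most recent prediction receives the largest weight; older samples
    decay exponentially but still contribute to the combined estimate.
    """
    n = len(predictions_ms)
    weights = [math.exp(-decay * (n - 1 - i)) for i in range(n)]
    total = sum(weights)
    return sum(w * p for w, p in zip(weights, predictions_ms)) / total

window = [240.0, 245.0, 242.0, 250.0]  # hypothetical per-sample GCT predictions (ms)
print(round(combine_gct_predictions(window), 1))  # weighted toward the newest sample
```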
A person having ordinary skill in the art before the effective filing date of the claimed invention would recognize that the prediction of GCT could be adjusted to a window that is smaller than the width of a step, allowing for more robust predictions of the GCT. Further, using the procedure taught by Vu would allow for a less computationally expensive process that can be integrated on a wearable watch device for predicting gait characteristics like GCT.

Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine Chang’s disclosed watch-mounted IMU sensor system for gait characteristics data gathering and machine learning prediction with the deep learning algorithm “exponential window” sampling to predict gait cycle characteristics taught by Vu, creating a single watch-mounted IMU sensor system for gait characteristics data gathering and robust machine learning prediction of GCT from multiple predictions per step.

Regarding Claims 3 and 17, Chang discloses as described above, The method of claim 2 and The system of claim 16, respectively. For the remainder of Claim 3, Chang discloses further comprising: averaging the multiple GCT predictions over time ([0075] “the average time duration a foot is in contact with the ground can be reported as a running average of the amount of time a foot is in contact with the ground for each step or stride.”). (Examiner notes that given that the GCT predictions of the combination of Chang and Vu in Claim 2 are a plurality of GCT predictions within a sliding window for a step, performing a “running average of the amount of time a foot is in contact with the ground for each step” would be the average of these GCT prediction values in the step.)

Claims 12 – 14 are rejected under 35 U.S.C. 103 as being unpatentable over Chang in view of Sharma, “APPLICATION OF MACHINE LEARNING METHODS FOR HUMAN GAIT ANALYSIS”, hereinafter Sharma.
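The “running average” read onto Claim 3 can be sketched as a cumulative mean updated as each per-step GCT prediction arrives; the names and sample values below are hypothetical:

```python
def running_average(values):
    """Yield the cumulative mean after each new GCT prediction arrives."""
    total = 0.0
    for i, v in enumerate(values, start=1):
        total += v
        yield total / i

gct_predictions = [248.0, 252.0, 250.0, 246.0]  # hypothetical per-step GCT, in ms
print(list(running_average(gct_predictions)))  # [248.0, 250.0, 250.0, 249.0]
```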
Regarding Claim 12, Chang discloses as described above, The method of claim 1. For the remainder of Claim 12, Chang does not specifically disclose wherein the neural network is a long short-term memory (LSTM) neural network. Chang is, however, open to using an LSTM at [0092]: “Such population classification models can include multi-layer neural networks…and deep learning networks to identify common characteristics of fatigue based on the population”.

Sharma teaches a deep learning prediction system for predicting gait parameters such as GCT, for use with inertial sensor data without requiring foot pressure information during prediction. Specifically for Claim 12, Sharma teaches wherein the neural network is a long short-term memory (LSTM) neural network ([Page 44, Paragraph 4] “LSTM neural network regression models for vGRF and GCT predictions”).

Sharma provides a motivation to combine at [Page 49, Paragraph 1] with “a detailed description of novel gait stride segmentation has been presented which does not require any foot pressure information to perform the gait segmentation.” and “The remaining metrics components, i.e. GCT and vGRF, can be estimated by applying the techniques of both machine learning and deep learning.”

A person having ordinary skill in the art before the effective filing date of the claimed invention would recognize that using machine learning in the form of an LSTM that can predict GCT would be useful for a wrist-mounted IMU sensor obtaining data for a machine learning model to predict gait characteristics and fatigue. It would have been predictable to use the LSTM in any similar gait prediction device that is collecting IMU data and analyzing with machine learning, such as the system and method disclosed by Chang, as it would continue to operate with the function of predicting GCT from inertial sensor data.
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine Chang’s disclosed watch-mounted IMU sensor system for gait characteristics data gathering and machine learning prediction with the LSTM model for predicting GCT taught by Sharma, creating a single watch-mounted IMU sensor system for gait characteristics data gathering and LSTM neural network prediction of GCT.

Regarding Claim 13, Chang discloses as described above, The method of claim 11. For the remainder of Claim 13, Chang does not disclose wherein the neural network includes a single LSTM with three outputs that uses internal representations learned for gait events to predict GCT.

Sharma teaches wherein the neural network includes a single LSTM ([Page 45], Figure 16; [Page 45, 1st Full Paragraph] “Several combinations of LSTM and GRU layers”) with three outputs (Fig 16, “output: (none, 400, 2)”) (Examiner notes that there are three terms in the outputs for each of the layers shown in Figure 16, therefore three outputs.) that uses internal representations learned for gait events to predict GCT ([Page 44, Paragraph 4] “LSTM neural network regression models for vGRF and GCT predictions…input features are 3D acceleration and 3D angular rates…feature selection”).

The motivation for Claim 13 to combine Chang with Sharma is the same as that described in Claim 12. In summary, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine Chang’s disclosed watch-mounted IMU sensor system for gait characteristics data gathering and machine learning prediction with the LSTM model with outputs for predicting GCT taught by Sharma, creating a single watch-mounted IMU sensor system for gait characteristics data gathering and LSTM neural network prediction of GCT.

Regarding Claim 14, Chang in view of Sharma discloses as described above, The method of claim 13.
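For orientation on the LSTM internals referenced in Claims 12 – 14: each time step of an LSTM applies gated updates to a cell state. The scalar, single-cell sketch below uses arbitrary fixed weights for illustration; a real GCT regressor such as the one attributed to Sharma would use learned weight matrices over six-axis IMU features:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w=0.5, u=0.1, b=0.0):
    """One scalar LSTM step: gates modulate how the cell state is updated.

    For simplicity all gates share the same (arbitrary) weights here; a
    trained network learns separate weights for each gate.
    """
    f = sigmoid(w * x + u * h_prev + b)    # forget gate: how much old state to keep
    i = sigmoid(w * x + u * h_prev + b)    # input gate: how much new candidate to add
    g = math.tanh(w * x + u * h_prev + b)  # candidate cell value
    o = sigmoid(w * x + u * h_prev + b)    # output gate: how much state to expose
    c = f * c_prev + i * g                 # new cell state
    h = o * math.tanh(c)                   # new hidden state (the step's output)
    return h, c

h, c = 0.0, 0.0
for x in [0.2, -0.1, 0.4]:                 # toy accelerometer samples
    h, c = lstm_step(x, h, c)
print(h)  # hidden state after the toy sequence; bounded in (-1, 1)
```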
For the remainder of Claim 14, Chang does not disclose wherein the LSTM neural network includes an LSTM layer, encoding layers, a number of full-connected layers or dense layer, and an output layer.

Sharma teaches wherein the LSTM neural network includes an LSTM layer ([Page 45] Fig 16, “LSTM” layer; [Page 45, Bottom Paragraph] “…neural network configuration has two LSTM layers”) (Examiner notes that “two LSTM layers” has at least one LSTM layer), encoding layers ([Page 45] Fig 16, “gaussian_noise_1: GaussianNoise” layer; [Page 45, Bottom Paragraph] – [Page 46, Top Paragraph] “The noise layer, which applies additive zero centered gaussian noise, is a way to intentionally corrupt the input data to mitigate the model overfitting and is also known as noise regularization….gaussian layer…”), a number of full-connected layers or dense layer ([Page 45] Fig 16, “dense_1: Dense” layer; [Page 45, Bottom Paragraph] “a fully connected dense layer”), and an output layer ([Page 46, 3rd Full Paragraph] “GCT-label prediction…(at output layer)”).

The motivation for Claim 14 to combine Chang with Sharma is the same as that described in Claims 12 and 13. In summary, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine Chang’s disclosed watch-mounted IMU sensor system for gait characteristics data gathering and machine learning prediction with the LSTM model with layers for predicting GCT taught by Sharma, creating a single watch-mounted IMU sensor system for gait characteristics data gathering and LSTM neural network with layers for predicting GCT.

Response to Arguments

Applicant's arguments filed 02 FEBRUARY 2026 have been fully considered but they are not persuasive.

Regarding the 35 U.S.C. 101 Rejections: Applicant argues at [Page 6, “Rejections under 35 U.S.C.
101” Section] – [Page 8, 2nd Full Paragraph] that the claimed limitations of Claim 1 do not recite a mental process that can be practically performed in the human mind because “claim limitations that encompass AI in a way that cannot be practically performed in the human mind do not fall within” the mental process grouping.

The instant Claim 1 recites a machine learning model in a broad way such that it encompasses a researcher receiving a spreadsheet of theoretical acceleration and rotation data for a person (as there is no sensor hardware positively recited) and plugging numeric acceleration and rotation inputs into an iterative set of textbook-type equations, performed repeatedly to refine the solution, using their education, background, experience, time, and a pen and paper. As an example, a researcher could receive acceleration and rotation rate data of 0, and make the prediction that the user is stopped and will continue to be stopped. The argument is not persuasive.

Applicant argues at [Page 8, 3rd Full Paragraph] that the claimed limitations of Claim 1 do not recite a mathematical calculation. As recited, the “machine learning (ML) model” of claim 1 can encompass a simple, iterative set of equations with acceleration and rotation rate as inputs, and a “gait event time” as an output. Inputting a number (or numbers) into an equation (or multiple equations) to determine an output is a mathematical calculation. The argument is not persuasive.

Applicant argues at [Page 8, “Step 2A, Prong Two: Evaluate Whether the Claim is Integrated into a Practical Application” Section] - [Page 10, Paragraph 2] that Claims 1 – 20 are integrated into the practical application of fitness monitoring, and that additional limitations should not be evaluated in a vacuum completely separate from the recited judicial exception.
Applicant argues that the additional elements of the processor and memory were evaluated completely separate from the recited judicial exception elements, failing to take into consideration the interaction and impact of all of the claim limitations on each other. Applicant further argues that the implementation steps in the claims, and not the use of a computer, improve the practical application of fitness monitoring, citing McRO, Inc. v. Bandai Namco Games America Inc. and Koninklijke KPN N.V. v. Gemalto M2M GmbH, 942 F.3d 1143, 1151 (Fed. Cir. 2019).

Regarding McRO, Inc. v. Bandai Namco Games America, 837 F.3d 1299, 1314 (Fed. Cir. 2016), these claims concern a method for automatically animating lip synchronization and facial expression of three-dimensional characters, including particular timing, “data file”, “intermediate stream of output morph weight sets”, sets of rules, and “generating a final stream of output morph weight sets at a desired frame rate from said intermediate stream of output morph weight sets and said plurality of transition parameters” [Page 11 of the McRO, Inc. v. Bandai Namco Games America decision]. At [Page 24 of the McRO, Inc. v. Bandai Namco Games America decision], “It is the incorporation of the claimed rules, not the use of the computer, that “improved [the] existing technological process” by allowing the automation of further tasks.”

Regarding Koninklijke KPN N.V. v. Gemalto M2M GmbH, 942 F.3d 1143, 1151 (Fed. Cir. 2019), these claims concern a device for error-checking data, including a “permutating device configured to perform a permutation of bit position relative to said particular ordered sequence for at least some of the bits in each of said blocks making up said ordinal data without reordering any blocks or original data” [Page 7 - 8 of the Koninklijke KPN N.V. v. Gemalto M2M GmbH decision]. In the decision at [Page 18, Koninklijke KPN N.V. v.
Gemalto M2M GmbH], “…because these claims specifically recite how this permutation is used (i.e., modifying the permutation applied to different data blocks), and this specific implementation is a key insight to enabling prior art error detection systems to catch previously undetectable systematic errors…the appeal claims are not directed to an abstract idea…”

Each of these cases provides a level of detail and device improvement that is not reflected in the instant, broadly-recited claims. In the instant claims, merely including the abstract idea of predicting a “gait time event” (which could broadly include a researcher receiving a read-out of acceleration and rotation rate data of 0 and making a prediction that the user is stopped and will continue to be stopped) in the context of gait analysis does not improve the performance of the processor, generic machine learning model, or memory. As discussed above, the claims recite limitations that encompass an abstract idea of manipulating variables obtained from electronic components (or a spreadsheet of theoretical data, as there is no sensor hardware positively recited) used in a usual way, and that variable manipulation can be accomplished with the aid of time, equations, and paper. The argument is not persuasive.

Applicant argues at [Page 10, Paragraph 3] – [Page 10, Paragraph 5] that the claims were evaluated at a high level of generality, citing PTAB case Ex Parte Guillaume Desjardins et al. (Patent Tr. & App. Bd.) (Appeal 2024-000567) (September 26, 2025), and that “the clear teachings of Enfish” were not heeded in the present action.
The claims of Ex Parte Guillaume concern a method for training a machine learning model including, [Page 2 – 3, Ex Parte Guillaume Desjardins decision]:

wherein the machine learning model has at least a plurality of parameters and has been trained on a first machine learning task using first training data to determine first values of the plurality of parameters of the machine learning model, and wherein the method comprises: determining, for each of the plurality of parameters, a respective measure of an importance of the parameter to the first machine learning task, comprising: computing, based on the first values of the plurality of parameters determined by training the machine learning model on the first machine learning task, an approximation of a posterior distribution over possible values of the plurality of parameters, assigning, using the approximation, a value to each of the plurality of parameters, the value being the respective measure of the importance of the parameter to the first machine learning task and approximating a probability that the first value of the parameter after the training on the first machine learning task is a correct value of the parameter given the first training data used to train the machine learning model on the first machine learning task; obtaining second training data for training the machine learning model on a second, different machine learning task; and training the machine learning model on the second machine learning task by training the machine learning model on the second training data to adjust the first values of the plurality of parameters to optimize performance of the machine learning model on the second machine learning task while protecting performance of the machine learning model on the first machine learning task, wherein adjusting the first values of the plurality of parameters comprises adjusting the first values of the plurality of parameters to optimize an objective function that depends in part on a
penalty term that is based on the determined measures of importance of the plurality of parameters to the first machine learning task.

In the decision, it was found that the claimed limitations “constitutes an improvement to how the machine learning model itself operates, and not, for example, the identified mathematical calculation.” [Page 9]. This case also provides a level of detail and device improvement that is not reflected in the instant, broadly-recited claims. In the instant claims, merely including the abstract idea of predicting a “gait time event” (which could broadly include a researcher receiving a read-out of acceleration and rotation rate data of 0, and making a prediction that the user is stopped and will continue to be stopped) in the context of gait analysis does not improve the performance of the machine learning model itself. It is merely used as a tool in its usual way. As discussed above, the claims recite limitations that encompass an abstract idea of manipulating variables obtained from electronic components (or a spreadsheet of theoretical data, as there is no sensor hardware positively recited) used in a usual way, and that variable manipulation can be accomplished with the aid of time, equations, and paper. The argument is not persuasive.

Applicant argues at [Page 11, Paragraph 2] – [Page 12, top] that the elements labeled as abstract are not insignificant post-solution activity, well-understood, routine, conventional activity in the field, or appending well-understood, routine, conventional activities previously known to the industry specified at a high level of generality to the judicial exception, since they describe specific implementation steps for “pedestrian dead reckoning”. Pedestrian dead reckoning is used to calculate the current position of a moving object, such as a pedestrian, by using a previously-determined position.
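Pedestrian dead reckoning, as characterized above, propagates a previously-determined position forward from motion estimates. A toy two-dimensional sketch (the fixed step length and per-step headings are simplifying assumptions for illustration):

```python
import math

def dead_reckon(start_position, step_length_m, headings_deg):
    """Advance a 2D position one detected step at a time.

    Each step moves the position by step_length_m in the direction of the
    step's compass heading (0 degrees = north), starting from the
    previously-determined position.
    """
    x, y = start_position
    for heading in headings_deg:
        rad = math.radians(heading)
        x += step_length_m * math.sin(rad)  # east component
        y += step_length_m * math.cos(rad)  # north component
    return x, y

# Four steps due north from the origin with a 0.7 m stride.
print(dead_reckon((0.0, 0.0), 0.7, [0, 0, 0, 0]))  # approximately (0.0, 2.8)
```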
In the recited claims, it is unclear to which “specific implementation steps” Applicant refers to achieve this argued “pedestrian dead reckoning” position tracking. If it is the claimed obtaining…sensor data indicative of acceleration and rotation rate, the obtaining is an abstract idea that includes a researcher of ordinary skill in the art obtaining a spreadsheet or graphical output of acceleration and rotation rate data. The data measurement itself is not positively recited, nor are the associated sensors. Further, the measurement itself would serve as extra-solution data-gathering for subsequent abstract idea steps. If the “specific implementation steps” originate from the identified abstract ideas, then, from MPEP 2106.05(a):

It is important to note, the judicial exception alone cannot provide the improvement. The improvement can be provided by one or more additional elements. See the discussion of Diamond v. Diehr, 450 U.S. 175, 187 and 191-92, 209 USPQ 1, 10 (1981) in subsection II, below. In addition, the improvement can be provided by the additional element(s) in combination with the recited judicial exception. See MPEP § 2106.04(d) (discussing Finjan, Inc. v. Blue Coat Sys., Inc., 879 F.3d 1299, 1303-04, 125 USPQ2d 1282, 1285-87 (Fed. Cir. 2018)).

The argument is not persuasive.

Applicant argues summarily at [Page 8, 4th Full Paragraph], [Page 11, Paragraph 1], and [Page 12, 1st Full Paragraph] that Claims 1 – 20 recite significantly more than the judicial exception such that the 35 U.S.C. 101 Rejection should be withdrawn. Based on the 35 U.S.C. 101 analysis herein and the discussion of arguments above, Claims 1 - 20 do not qualify as eligible subject matter under 35 U.S.C. 101. The argument is not persuasive.

Regarding the 35 U.S.C. 102 Rejections: Applicant argues at [Page 12, “Rejections under 35 U.S.C. 102” Section, All] that Chang does not predict a gait time event.
As cited above, and looking to [0117], Chang discloses “Detection of a change in biomechanical signals, such as ground contact time, sagittal tilt, cadence or motion paths, can additionally or alternatively use various machine learning techniques.” Further, as recited, “gait time event” is a broad term that would encompass predicting any parameter that would affect gait time, including events associated with fatigue found by [0121] “predicting…based on current biomechanical signals”, including [0088] “There could be a non-fatigued state, which may characterize normal performance patterns.”, where [0090] “…particular biomechanical signals as different fatigue states…such as motion paths and associated data…” comprise the information of fatigue state prediction. In a non-fatigued state, the motion path of the “gait time event” is normal. The argument is not persuasive.

Applicant argues at [Page 13, Paragraph 1] - [Page 13, Paragraph 2] that Chang teaches away from using machine learning to predict contact time from acceleration and angular rate. Chang discloses at [0117]: “Detection of a change in biomechanical signals, such as ground contact time, sagittal tilt, cadence or motion paths, can additionally or alternatively use various machine learning techniques.” Further, Chang also teaches that biomechanical signals can be classified as different fatigue states at [0090] with “machine intelligence can build up data used in classifying particular biomechanical signals as different fatigue states.” Then at [0121], “predicting fatigue state based on current biomechanical signals.” occurs, where the biomechanical signals include at [0097] “cadence, vertical oscillation…braking…ground contact time”. The “biomechanical signals”, including ground contact time, can be processed by kinematic data from the IMU with the accelerometer and gyroscope [0032].
Further, “ground contact time” specifically is not recited in Claim 1; it is “gait time event”, which is broadly any event associated with a gait relative to time (which would include increasing or decreasing pace, for example, which occur based on fatigue state). The argument is not persuasive.

Applicant argues at [Page 13, Paragraph 4] – [Page 14, Paragraph 2] that Chang discloses that the preferred location for an inertial sensor is the pelvis, and that Applicant has recognized a problem with prior art methods of determining GCT using a wrist-worn device because of “complex mechanical linkage”. Applicant argues that Chang discloses machine learning to predict a fatigue or non-fatigue state and three prior art techniques of using IMU data captured at the pelvis or shoe sensor(s) to determine GCT.

There is nothing particularly recited in Claim 1 or the dependent claims that limits sensors to only being at the wrist. The only recitation of “wrist” is “wrist-worn” in line 2 of Claim 1, relative to the location of the at least one processor that obtains sensor data. The processor can obtain data from a sensor located somewhere else. Further, looking to Chang [0042], “The method preferably utilizes kinematic measurements from an activity monitoring device. Such an activity monitoring device can preferably be worn unobtrusively…” and at [0129] “...systems and methods of the embodiments can be…implemented…by hardware/firmware/software elements of a user…wristband…” Chang also reports biomechanical signal data at the arm in Figure 10, with the dashed line. The argument is not persuasive.

Applicant summarily argues at [Page 14, Paragraph 3] that because Chang fails to disclose the described features, Chang fails to anticipate claims 1 and 15 under 35 U.S.C. 102. Based on the 35 U.S.C. 102 rejection and discussion above, Chang discloses all of the features of claims 1 and 15. The argument is not persuasive.

Regarding the 35 U.S.C.
103 Rejections: Applicant argues at [Page 14, Paragraph 4] – [Page 15, Paragraph 1] that dependent claims 2 – 14 and 16 - 20 are allowable due to their dependence on Claims 1 and 15 and the previous arguments. Based on the 35 U.S.C. 102 rejection and discussion above, Chang discloses all of the features of claims 1 and 15. The argument is not persuasive.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MELISSA J MONTGOMERY whose telephone number is (571) 272-2305. The examiner can normally be reached Monday - Friday 7:30 - 5:00 ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alexander Valvis, can be reached at (571) 272-4233. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MELISSA JO MONTGOMERY/
Examiner, Art Unit 3791

/PATRICK FERNANDES/
Primary Examiner, Art Unit 3791

Prosecution Timeline

Jun 02, 2023
Application Filed
Sep 04, 2025
Non-Final Rejection — §101, §102, §103
Feb 02, 2026
Response Filed
Mar 11, 2026
Final Rejection — §101, §102, §103 (current)


