Prosecution Insights
Last updated: April 19, 2026
Application No. 17/435,866

STATE CONTROL DEVICE, LEARNING DEVICE, STATE CONTROL METHOD, LEARNING METHOD, AND PROGRAM

Non-Final OA — §101, §103
Filed: Sep 02, 2021
Examiner: RIFKIN, BEN M
Art Unit: 2123
Tech Center: 2100 — Computer Architecture & Software
Assignee: Sony Interactive Entertainment Inc.
OA Round: 3 (Non-Final)
Grant Probability: 44% (Moderate)
Predicted OA Rounds: 3-4
Time to Grant: 5y
Grant Probability with Interview: 59%

Examiner Intelligence

Career Allow Rate: 44% (139 granted / 317 resolved; -11.2% vs TC avg)
Interview Lift: +15.6% higher allow rate for resolved cases with an interview
Avg Prosecution: 5y typical timeline; 38 applications currently pending
Total Applications: 355 career applications across all art units

Statute-Specific Performance

§101: 21.8% (-18.2% vs TC avg)
§103: 42.8% (+2.8% vs TC avg)
§102: 7.8% (-32.2% vs TC avg)
§112: 18.1% (-21.9% vs TC avg)
Tech Center averages are estimates; figures based on career data from 317 resolved cases.
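The dashboard figures above are simple ratios over the examiner's resolved cases, and the per-statute deltas are internally consistent. A quick sanity check in Python (the Tech Center baselines here are inferred from the stated deltas, not published figures):

```python
# Career allow rate, as shown in the Examiner Intelligence section
granted, resolved = 139, 317
allow_rate = granted / resolved            # 0.4385 -> displayed as 44%

# Each statute's "vs TC avg" delta implies a Tech Center baseline rate
rates  = {"101": 0.218, "103": 0.428, "102": 0.078, "112": 0.181}
deltas = {"101": -0.182, "103": 0.028, "102": -0.322, "112": -0.219}
implied_tc = {s: rates[s] - deltas[s] for s in rates}
# every statute implies the same ~40% Tech Center baseline

# The career-level -11.2% delta implies a ~55% overall TC allow rate
career_tc = allow_rate - (-0.112)
```

Note that all four statute-level deltas back out to the same ~40% baseline, which suggests the tool compares each statute against a single Tech Center reference rate rather than statute-specific ones.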

Office Action

§101 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

The instant application, Application No. 17/435,866, has a total of 14 claims pending, of which claims 10, 12, and 14 have been withdrawn.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-9, 11, and 13 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: Claims 1 and 10 are machine-type claims. Claims 11 and 12 are process-type claims. Claims 13 and 14 are manufacture-type claims. Therefore, claims 1-14 are directed to either a process, machine, manufacture, or composition of matter.

As per claim 1, 2A Prong 1:

"generating, using a … output data for the input data… wherein the output data represents an estimation of a posture of a body part to which the tracking device is attached … to generate the output": A user mentally, or with pencil and paper, looks at the posture of their body and makes a determination of what that pose is.

"combining the input data with a first state variable output by the … to form combined input data…": The user mentally, or with pencil and paper, performs the appropriate mathematics to combine the inputs and previous data.

"updating the first state variable with the second state variable and processing the output data to generate processed output data, wherein the processing the output data comprises determining a plurality of postures associated with the body part": The user mentally, or with pencil and paper, updates the variable and notes the different postures of the body part by looking at them.
2A Prong 2: This judicial exception is not integrated into a practical application. Additional elements:

"An information processing device", "A storage unit", "a processor", "a tracking device" (mere instructions to apply the exception using a generic computer component);

"a neural network", "the neural network comprises a layer and an output block", "the layer" (adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f). Examiner's note: a generic, off-the-shelf neural network with no additional detail or limitations that make it more than a generic neural network. Every neural network will have some form of layer and include a means of outputting results);

"Acquiring input data corresponding to a series of pieces of sensing data… wherein each piece of sensing data of the series of pieces of sensing data is associated with a sequence number", "inputting the combined input data… according to the sequence number to which the input data is associated with, wherein the first variable is associated with data previously processed… and indicates characteristics of a time series transition of the data previously processed… and wherein the output data corresponds to a second state variable" (adding insignificant extra-solution activity to the judicial exception - see MPEP 2106.05(g)).

2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The additional elements are those identified above: the device, storage unit, processor, and tracking device (mere instructions to apply the exception using a generic computer component); the generic neural network recitations (MPEP 2106.05(f)); and the acquiring and inputting steps (MPEP 2106.05(d)(II) indicates that merely "transmitting or receiving data" is a well-understood, routine, conventional function when it is claimed in a merely generic manner, as it is in the present claim. Thereby, a conclusion that the claimed acquiring step is well-understood, routine, conventional activity is supported under Berkheimer).

As per claims 2-9: these claims contain additional mental steps of deciding to use the model, plus additional generic machine-learning aspects, and are rejected similarly to claim 1.
As per claim 11, 2A Prong 1:

"generating, using a … output data for the input data… wherein the output data represents an estimation of a posture of a body part to which the tracking device is attached … to generate the output": A user mentally, or with pencil and paper, looks at the posture of their body and makes a determination of what that pose is.

"combining the input data with a first state variable output by the … to form combined input data…": The user mentally, or with pencil and paper, performs the appropriate mathematics to combine the inputs and previous data.

"updating the first state variable with the second state variable and processing the output data to generate processed output data, wherein the processing the output data comprises determining a plurality of postures associated with the body part": The user mentally, or with pencil and paper, updates the variable and notes the different postures of the body part by looking at them.

2A Prong 2: This judicial exception is not integrated into a practical application. Additional elements:

"a tracking device" (mere instructions to apply the exception using a generic computer component);

"a neural network", "a layer and an output block", "the neural network", "machine learning model" (adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f). Examiner's note: a generic, off-the-shelf neural network with no additional detail or limitations that make it more than a generic neural network. Every neural network will have some form of layer and include a means of outputting results);

"Acquiring input data corresponding to a series of pieces of sensing data… wherein each piece of sensing data of the series of pieces of sensing data is associated with a sequence number", "inputting the combined input data… according to the sequence number to which the input data is associated with, wherein the first variable is associated with data previously processed… and indicates characteristics of a time series transition of the data previously processed… and wherein the output data corresponds to a second state variable" (adding insignificant extra-solution activity to the judicial exception - see MPEP 2106.05(g)).

2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The additional elements are those identified above: the tracking device (generic computer component); the generic neural network recitations (MPEP 2106.05(f)); and the acquiring and inputting steps (MPEP 2106.05(d)(II) indicates that merely "transmitting or receiving data" is a well-understood, routine, conventional function when it is claimed in a merely generic manner, as it is in the present claim. Thereby, a conclusion that the claimed acquiring step is well-understood, routine, conventional activity is supported under Berkheimer).

As per claim 13, 2A Prong 1:

"generating, using a … output data for the input data… wherein the output data represents an estimation of a posture of a body part to which the tracking device is attached … to generate the output": A user mentally, or with pencil and paper, looks at the posture of their body and makes a determination of what that pose is.

"combining the input data with a first state variable output by the … to form combined input data…": The user mentally, or with pencil and paper, performs the appropriate mathematics to combine the inputs and previous data.

"updating the first state variable with the second state variable and processing the output data to generate processed output data, wherein the processing the output data comprises determining a plurality of postures associated with the body part": The user mentally, or with pencil and paper, updates the variable and notes the different postures of the body part by looking at them.
2A Prong 2: This judicial exception is not integrated into a practical application. Additional elements:

"A non-transitory, computer readable storage medium", "a tracking device" (mere instructions to apply the exception using a generic computer component);

"a neural network", "the neural network comprises a layer and an output block", "the layer" (adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f). Examiner's note: a generic, off-the-shelf neural network with no additional detail or limitations that make it more than a generic neural network. Every neural network will have some form of layer and include a means of outputting results);

"Acquiring input data corresponding to a series of pieces of sensing data… wherein each piece of sensing data of the series of pieces of sensing data is associated with a sequence number", "inputting the combined input data… according to the sequence number to which the input data is associated with, wherein the first variable is associated with data previously processed… and indicates characteristics of a time series transition of the data previously processed… and wherein the output data corresponds to a second state variable" (adding insignificant extra-solution activity to the judicial exception - see MPEP 2106.05(g)).

2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The additional elements are those identified above: the storage medium and tracking device (generic computer components); the "given neural network" and "the neural network" recitations (MPEP 2106.05(f)); and the acquiring and inputting steps (MPEP 2106.05(d)(II) indicates that merely "transmitting or receiving data" is a well-understood, routine, conventional function when it is claimed in a merely generic manner, as it is in the present claim. Thereby, a conclusion that the claimed acquiring step is well-understood, routine, conventional activity is supported under Berkheimer).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 9, 11, and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Rasmussen et al. (US 2014/0126759 A1) in view of Zhou et al. ("Finger-worn Device Based Hand Gesture Recognition Using Long Short-term Memory").

As per claims 1, 11, and 13, Rasmussen discloses:

"An information processing device, comprising" (pg. 5, particularly paragraph 0046; EN: this denotes the hardware to run the system).

"A storage unit storing instructions which, when executed by the processor, cause the information processing device to perform operations comprising" (pg. 5, particularly paragraph 0046; EN: this denotes the hardware to run the system).

"acquiring input data corresponding to a series of pieces of sensing data" (pg. 3-4, particularly paragraph 0035; EN: this denotes taking data in from sensors over time sequences).

"measured by a tracking device" (pg. 3-4, particularly paragraph 0035; EN: this denotes the use of sensor electrodes).
"generating, using a neural network" (pg. 4, particularly paragraph 0036; EN: this denotes recognizing via a neural network).

"output data from the input data, wherein the output data represents an estimation of a posture of a body part to which the tracking device is attached" (pg. 4, particularly paragraph 0036; EN: this denotes using the data to determine gestures (i.e., postures) of the attached body part).

"wherein the neural network comprises a layer and an output block" (pg. 4, particularly paragraph 0036; EN: any neural network will inherently have multiple layers and an output layer (i.e., output block) of some kind).

"wherein the neural network is configured to generate the output by:" (pg. 4, particularly paragraph 0036; EN: this denotes using the data to determine gestures (i.e., postures) of the attached body part).

"inputting the … input data into the layer according to the sequence … to which the input data is associated with…" (pg. 3-4, particularly paragraph 0035; EN: this denotes taking data in from sensors over time sequences).

"processing the output data to generate processed output data, wherein the processing the output data comprises determining a plurality of postures associated with the body part" (pg. 4, particularly paragraph 0036; EN: this denotes the system being able to recognize multiple gestures).
However, Rasmussen fails to explicitly disclose: "wherein each piece of sensing data of the series of pieces of sensing data is associated with a sequence number", "(i) combining the input data with a first state variable output by the layer to form combined input data", "the combined input", "sequence number", "wherein the first state variable is associated with data previously processed by the neural network and indicates characteristics of a time series transition of the data previously processed by the neural network and wherein the output data corresponds to a second state variable", "(iii) updating the first state variable with the second state variable".

Zhou discloses:

"wherein each piece of sensing data of the series of pieces of sensing data is associated with a sequence number" and "sequence number" (pg. 2069, section C, particularly equation 3 and associated paragraphs; EN: this denotes keeping track of each piece of a sequence by number).

"(i) combining the input data with a first state variable output by the layer to form combined input data", "the combined input" (pg. 2070, particularly C1, second paragraph until the end, and C2, particularly the first two paragraphs; EN: this denotes the use of the LSTM recurrent neural network block, with the input at time 't' being the current input, and the t-1 inputs being previous inputs that are used with the current inputs).

"wherein the first state variable is associated with data previously processed by the neural network and indicates characteristics of a time series transition of the data previously processed by the neural network and wherein the output data corresponds to a second state variable" and "(iii) updating the first state variable with the second state variable" (pg. 2070, particularly C1, second paragraph until the end, and C2, particularly the first two paragraphs; EN: this denotes the use of the LSTM recurrent neural network block, with the input at time 't' being the current input, and the t-1 inputs being previous inputs that are used with the current inputs. This operates continuously as the system runs, with new data being saved to replace old data and used later as determined by the system).

Rasmussen and Zhou are analogous art because both involve gesture recognition. Before the effective filing date it would have been obvious to one skilled in the art of gesture recognition to combine the work of Rasmussen and Zhou in order to make use of recurrent neural networks for gesture recognition. The motivation for doing so would be to "effectively model the long-term temporal dependency in a sequence for classification… it has shown impressive performance and stability in sequence prediction problems like … wearable activity recognition" (Zhou, pg. 2068, C2, second paragraph) or, in the case of Rasmussen, to allow the system to use a recurrent neural network to improve its gesture recognition.
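The claim elements Zhou is cited against describe a standard recurrent state update. A minimal sketch of that mechanism in Python, assuming a plain Elman-style recurrence (simpler than Zhou's LSTM, but following the same combine/output/update pattern; all names and dimensions are illustrative, not taken from either reference):

```python
import numpy as np

def recurrent_step(x_t, h_prev, W_in, W_rec, b):
    """One recurrent-layer step over a single piece of sensing data."""
    # (i) combine the current input with the first state variable
    combined = W_in @ x_t + W_rec @ h_prev + b
    # (ii) the layer output doubles as a second state variable
    h_next = np.tanh(combined)
    return h_next

rng = np.random.default_rng(0)
W_in = rng.normal(size=(4, 3))   # input weights: 3 sensor channels -> 4 units
W_rec = rng.normal(size=(4, 4))  # recurrent weights over the state variable
b = np.zeros(4)

h = np.zeros(4)  # first state variable, initially empty
for x_t in rng.normal(size=(5, 3)):  # 5 pieces of sensing data, in sequence order
    # (iii) update the first state variable with the second
    h = recurrent_step(x_t, h, W_in, W_rec, b)
```

An LSTM replaces the single tanh with gated cell and hidden states, but the combine/output/update cycle that the rejection maps onto claim 1 is the same.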
As per claim 2, Rasmussen discloses "after acquiring the input data, determining whether to provide the input data to the neural network" (pg. 4, particularly paragraph 0040; EN: this denotes ignoring detected movement that is outside the minimum and maximum range of detected gestures in order to avoid triggering of control commands).

As per claim 3, Rasmussen discloses "wherein determining whether to provide the input data to the neural network comprises determining that the input data is to be provided to the neural network" (pg. 4, particularly paragraph 0040; EN: this denotes ignoring detected movement that is outside the minimum and maximum range of detected gestures in order to avoid triggering of control commands). Zhou discloses "storing the output data" (pg. 2070, particularly C1, second paragraph until the end, and C2, particularly the first two paragraphs; EN: this denotes the use of the LSTM recurrent neural network block, with the input at time 't' being the current input, and the t-1 inputs being previous inputs that are used with the current inputs. This operates continuously as the system runs, with new data being saved to replace old data and used later as determined by the system).

As per claim 4, Rasmussen discloses "wherein determining whether to provide the input data to the neural network comprises determining that the input data is not to be provided to the neural network and the operations further comprising" (pg. 4, particularly paragraph 0040; EN: this denotes ignoring detected movement that is outside the minimum and maximum range of detected gestures in order to avoid triggering of control commands), and "retrieving stored output data and using the stored output data to execute processing on the stored output data" (pg. 4, particularly paragraph 0037; EN: this denotes the system issuing commands and controlling the device with the gestures. The previous gesture will continue to be executed despite ignored movements, such as changing gain, changing volume, changing program selection, etc.).

As per claim 9, Zhou discloses "wherein the neural network comprises a long short-term memory model" (pg. 2070, particularly C1, second paragraph until the end, and C2, particularly the first two paragraphs; EN: this denotes the use of the LSTM recurrent neural network block).

Claim Rejections - 35 USC § 103

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Rasmussen et al. (US 2014/0126759 A1) in view of Zhou et al. ("Finger-worn Device Based Hand Gesture Recognition Using Long Short-term Memory") and further in view of Zhu et al. ("Online Hand Gesture Recognition using Neural Network Based Segmentation").

As per claim 5, Rasmussen discloses "wherein determining whether to provide the input data to the neural network comprises" (pg. 4, particularly paragraph 0040; EN: this denotes ignoring detected movement that is outside the minimum and maximum range of detected gestures in order to avoid triggering of control commands). However, Rasmussen fails to explicitly disclose "determining, whether or not to restrict an update of a state associated with the neural network on a basis of an output generated by a machine learning model when the input data is provided to the machine learning model." Zhu discloses this limitation (pg. 2416, C2, section A; pg. 2417, C1, before section B; EN: this denotes the use of a neural network to detect intended gestures). Rasmussen and Zhu are analogous art because both involve gesture recognition.
Before the effective filing date it would have been obvious to one skilled in the art of gesture recognition to combine the work of Rasmussen and Zhu in order to allow a machine learning algorithm to be used to determine whether a gesture has been received. The motivation for doing so would be to "spot gestures from daily non-gesture movements" (Zhu, pg. 2416, C2, section A, first paragraph) or, in the case of Rasmussen, to allow the system to use a machine learning algorithm to determine when a gesture is intended for the system and when it is not, in order to avoid false positives.

Claim Rejections - 35 USC § 103

Claims 6-8 are rejected under 35 U.S.C. 103 as being unpatentable over Rasmussen et al. (US 2014/0126759 A1) in view of Zhou et al. ("Finger-worn Device Based Hand Gesture Recognition Using Long Short-term Memory") and further in view of Alameh et al. (US 2010/0295781 A1).

As per claim 6, Rasmussen discloses "wherein determining whether to provide the input data to the neural network comprises" (pg. 4, particularly paragraph 0040; EN: this denotes ignoring detected movement that is outside the minimum and maximum range of detected gestures in order to avoid triggering of control commands). However, Rasmussen fails to explicitly disclose "determining whether or not to restrict an update of a state associated with the neural network on a basis of a change between the input data and a portion or all of previously acquired input data". Alameh discloses this limitation (pg. 19, particularly paragraphs 0149-0150; EN: this denotes considering the differences between the current input and the previous input when detecting a new gesture).

Rasmussen and Alameh are analogous art because both involve gesture recognition. Before the effective filing date it would have been obvious to one skilled in the art of gesture recognition to combine the work of Rasmussen and Alameh in order to consider the immediately previous gesture when looking for new gestures. The motivation for doing so would be that "After an identification of a first gesture, the magnitude (absolute value) of a recognition threshold, a detection threshold and/or a clearance threshold… application to detection of a second gesture can be changed from a corresponding threshold applicable to the first gesture" (Alameh, pg. 19, paragraph 0150) or, in the case of Rasmussen, to allow the system to consider previous gestures when attempting to detect new gestures.
As per claim 7, Rasmussen discloses "wherein determining whether to provide the input data to the neural network comprises" (pg. 4, particularly paragraph 0040; EN: this denotes ignoring detected movement that is outside the minimum and maximum range of detected gestures in order to avoid triggering of control commands). However, Rasmussen fails to explicitly disclose "determining whether or not to restrict an update of a state associated with the neural network on a basis of a change between elements in the input data and elements in previously acquired input data." Alameh discloses this limitation (pg. 19, particularly paragraphs 0149-0150; EN: this denotes considering the differences between the current input and the previous input when detecting a new gesture). Rasmussen and Alameh are analogous art because both involve gesture recognition, and the combination would have been obvious for the same reasons given for claim 6.

As per claim 8, Rasmussen discloses "wherein determining whether to provide the input data to the neural network comprises" (pg. 4, particularly paragraph 0040; EN: this denotes ignoring detected movement that is outside the minimum and maximum range of detected gestures in order to avoid triggering of control commands). However, Rasmussen fails to explicitly disclose "determining whether or not to restrict an update of a state associated with the neural network on a basis of a comparison result between an output of the neural network corresponding to an input of the input data and an input data acquired after the input data in a sequence of input data." Alameh discloses this limitation (pg. 19, particularly paragraphs 0149-0150; EN: this denotes considering the differences between the current input and the previous input when detecting a new gesture), and the combination would have been obvious for the same reasons given for claim 6.

Response to Arguments

Applicant's arguments with respect to claims 1-9, 11, and 13 have been considered but are moot in view of the new ground(s) of rejection.

Conclusion

The examiner requests that, in response to this Office action, support be shown for language added to any original claims on amendment and for any new claims. That is, indicate support for newly added claim language by specifically pointing to the page(s) and line number(s) in the specification and/or drawing figure(s). This will assist the examiner in prosecuting the application.

When responding to this Office action, Applicant is advised to clearly point out the patentable novelty which he or she thinks the claims present, in view of the state of the art disclosed by the references cited or the objections made. He or she must also show how the amendments avoid such references or objections. See 37 CFR 1.111(c).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BEN M RIFKIN, whose telephone number is (571) 272-9768. The examiner can normally be reached Monday-Friday, 9 am - 5 pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Alexey Shmatov, can be reached at (571) 270-3428. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/BEN M RIFKIN/
Primary Examiner, Art Unit 2123
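The claim 8 limitation at issue describes an algorithmic mechanism: restricting an update of the network's state based on comparing the network's previous output with input acquired later in the sequence. A minimal sketch of that kind of gated state update is below; it is purely illustrative, and every name, the linear update rule, and the distance threshold are hypothetical, not taken from the application or the cited references.

```python
import numpy as np

def step(state, x, prev_output, threshold=0.5):
    """One update step of a toy recurrent estimator.

    The state update is restricted (skipped) when the newly acquired
    input differs little from the network's previous output, mirroring
    the idea of gating updates on a comparison between a prior output
    and a later input in the sequence.
    """
    # Comparison result between the previous output and the new input.
    diff = np.linalg.norm(x - prev_output)
    if diff < threshold:
        # Restrict the update: keep the current state unchanged.
        return state
    # Otherwise, fold the new input into the state (toy linear update).
    return 0.9 * state + 0.1 * x

state = np.zeros(3)
prev_output = np.zeros(3)
for x in [np.array([0.1, 0.0, 0.1]),   # close to prior output: update restricted
          np.array([1.0, 2.0, 1.5])]:  # large difference: state updated
    state = step(state, x, prev_output)
    prev_output = x
```

The gating condition here (a Euclidean distance against a fixed threshold) stands in for whatever comparison the claimed device actually performs; the point is only the control flow of conditionally skipping the state update.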

Prosecution Timeline

Sep 02, 2021
Application Filed
Oct 28, 2024
Non-Final Rejection — §101, §103
Jan 08, 2025
Applicant Interview (Telephonic)
Jan 08, 2025
Examiner Interview Summary
Jan 13, 2025
Response Filed
Mar 10, 2025
Final Rejection — §101, §103
May 19, 2025
Applicant Interview (Telephonic)
May 19, 2025
Examiner Interview Summary
Jun 03, 2025
Request for Continued Examination
Jun 07, 2025
Response after Non-Final Action
Mar 04, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12541685
SEMI-SUPERVISED LEARNING OF TRAINING GRADIENTS VIA TASK GENERATION
2y 5m to grant Granted Feb 03, 2026
Patent 12455778
SYSTEMS AND METHODS FOR DATA STREAM SIMULATION
2y 5m to grant Granted Oct 28, 2025
Patent 12236335
SYSTEM AND METHOD FOR TIME-DEPENDENT MACHINE LEARNING ARCHITECTURE
2y 5m to grant Granted Feb 25, 2025
Patent 12223418
COMMUNICATING A NEURAL NETWORK FEATURE VECTOR (NNFV) TO A HOST AND RECEIVING BACK A SET OF WEIGHT VALUES FOR A NEURAL NETWORK
2y 5m to grant Granted Feb 11, 2025
Patent 12106207
NEURAL NETWORK COMPRISING SPINTRONIC RESONATORS
2y 5m to grant Granted Oct 01, 2024
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
44%
Grant Probability
59%
With Interview (+15.6%)
4y 12m
Median Time to Grant
High
PTA Risk
Based on 317 resolved cases by this examiner. Grant probability derived from career allow rate.
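The projection figures above follow from simple arithmetic on the examiner's career counts shown on this page (139 grants out of 317 resolved cases, a +15.6 point interview lift). A quick sketch, assuming the interview lift is simply additive, which is what the displayed numbers suggest:

```python
# Raw counts shown on this page.
granted, resolved = 139, 317
interview_lift = 0.156  # +15.6 percentage points

allow_rate = granted / resolved             # ~0.438, displayed as 44%
with_interview = allow_rate + interview_lift  # ~0.594, displayed as 59%

print(f"Career allow rate: {allow_rate:.0%}")      # 44%
print(f"With interview:    {with_interview:.0%}")  # 59%
```

This only reproduces the page's rounding; whether the underlying model treats the lift as additive or conditional is not stated here.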
