Prosecution Insights
Last updated: April 19, 2026
Application No. 17/972,854

PROPRIOCEPTIVE LEARNING

Non-Final OA §103
Filed: Oct 25, 2022
Examiner: NILSSON, ERIC
Art Unit: 2151
Tech Center: 2100 — Computer Architecture & Software
Assignee: Honda Motor Co., Ltd.
OA Round: 3 (Non-Final)
Grant Probability: 83% (Favorable)
OA Rounds: 3-4
To Grant: 3y 2m
With Interview: 99%

Examiner Intelligence

Grants 83% — above average
Career Allow Rate: 83% (408 granted / 494 resolved; +27.6% vs TC avg)
Interview Lift: +18.0% across resolved cases with an interview — a strong lift
Avg Prosecution: 3y 2m typical timeline (31 applications currently pending)
Total Applications: 525 across all art units (career history)

Statute-Specific Performance

§101: 25.3% (-14.7% vs TC avg)
§103: 38.8% (-1.2% vs TC avg)
§102: 17.5% (-22.5% vs TC avg)
§112: 8.9% (-31.1% vs TC avg)
Tech Center averages are estimates • Based on career data from 494 resolved cases

Office Action

§103
DETAILED ACTION

This action is in response to claims filed 31 December 2025 for application 17972854, filed 25 October 2022. Claims 1-20 are currently pending.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 31 December 2025 has been entered.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1, 2, 5-8, 10-12, 14-15, 17 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Su et al. (Learning Manipulation Graphs from Demonstrations Using Multimodal Sensory Signals) in view of Kranski et al. (WO 2022212916 A1), further in view of Chen et al. (Non-Recursive Graph Convolutional Networks).
Regarding claims 1, 11 and 17, Su discloses: A system for proprioceptive learning, comprising: a memory storing one or more instructions; a processor executing one or more of the instructions stored on the memory to (Fig. 1: the robot comprises a computer having a processor and memory) perform:

receiving a set of sensor reading data from a set of sensors ("The graph generation process is initialized from demonstrations, as shown in Fig. 2A. Our experimental setup consists of a 7-DOF Barrett WAM arm and a Barrett hand, which is equipped with two biomimetic tactile sensors (BioTacs) [27]. We demonstrate manipulation tasks through two types of demonstrations: kinesthetic demonstrations and teleoperated demonstrations. In the kinesthetic demonstrations, the human expert demonstrates tasks by directly moving the robot arm. In the teleoperated demonstration, the human operates the bi-manual robot by manually moving the robot's master arm where the slave arm mimics the movements of the master arm to manipulate the objects. More details can be found in Figs. 3–5." Su p. 3 §III.A ¶1);

receiving a set of sensor position data associated with the set of sensors ("Multimodal haptic signals, including proprioceptive signals and both low and high frequency tactile signals, are captured throughout human demonstration. The proprioceptive signals are the 6D Cartesian position and orientation of the robot's end-effector ypos ∈ R6 derived from the robot's forward kinematics. We also recorded the 6D Cartesian pose of the object in the robot's surroundings with a Vicon motion capture system yobj ∈ R6" p. 3 §III.A ¶2);

constructing a first graph representation based on the set of sensor reading data (Fig. 2: segments clustering and skill graph);

constructing a second graph representation based on the set of sensor position data (Fig. 3: manipulation graph);

performing ... message passing operation between nodes of the first graph representation and the second graph representation to update the first graph representation and the second graph representation (p. 3 §III.A last ¶: message passing is used); and

executing a task based on readouts from the updated first graph representation and the updated second graph representation ("After generating the manipulation graph, the robot can perform the task through graph traversal by executing a sequence of skills in the graph. It can also confirm successful or failed skill executions by clustering the tactile sensory signals at the end of the skill execution against the corresponding discovered success and failure modes." p. 4 §III.E ¶2).

However, Su does not explicitly disclose: indicative of points from an object point cloud of an object in contact with at least some of the set of sensors; indicative of a geometric arrangement associated with the set of sensors; non-recursive message passing operation; indicative of points from the object point cloud; indicative of the geometric arrangement associated with the set of sensors.

Kranski teaches: indicative of points from an object point cloud of an object in contact with at least some of the set of sensors ("A learning model may be configured to update setpoints for robot actuators based on those vectors (e.g., based on their latent space embedding). Some embodiments may control robots with an even more expansive ensemble of such models, e.g., pipelining a convolutional neural network (or vision transformer) that extracts features from 2D image data, a geometric deep learning model that extracts features from 3D point clouds from depth sensors, and an encoder model that maps both sets of those features for a given time slice into respective vectors in latent embedding spaces, and a reinforcement learning model that controls the robot (e.g., outputs a time series of target setpoints of a plurality of actuators) based on a time-series of those vectors, each vector representing a time-slice of robot and environment state." [0017]; see also [0032] and [0126]); indicative of a geometric arrangement associated with the set of sensors ([0017], [0032], [0126]); indicative of points from the object point cloud ([0017], [0032], [0126]); indicative of the geometric arrangement associated with the set of sensors ([0017], [0032], [0126]).

Su and Kranski are in the same field of endeavor of sensors for robots and are analogous. Su discloses two graphs for processing sensory and position information for controlling a robot. Kranski teaches the use of point clouds for sensors and of geometric arrangements (features) based on the point cloud for robot control. It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the known graphs for robot control disclosed by Su with the known point cloud and geometric arrangements for robot control as taught by Kranski to yield predictable results.

Chen teaches: non-recursive message passing operation (Fig. 1; note: Chen teaches that a recursive message passing graph network can be converted to a non-recursive one that yields the same result).

Su, Kranski and Chen are in the same field of endeavor of learning models and are analogous. Su discloses two graphs for processing sensory and position information for controlling a robot. Kranski teaches the use of point clouds for sensors and geometric arrangements (features) based on the point cloud for robot control. Chen teaches that a recursive message passing graph neural network can instead be implemented non-recursively. It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the known graph networks for robot control taught by Su and Kranski with the known non-recursive graph networks taught by Chen to yield predictable results.

Regarding claims 2 and 18, Su discloses: The system for proprioceptive learning of claim 1, wherein the set of sensors includes a force sensor, a temperature sensor, a pressure sensor, a tactile sensor, or an image capture sensor (p. 3 §III.A ¶1: tactile sensor).

Regarding claims 5 and 14, Su discloses: The system for proprioceptive learning of claim 1, wherein the second graph representation is a body graph indicative of a geometric arrangement associated with the set of sensors at a time step ("Given a skill graph and the modes for the manipulation task discovered from both successful and failed skill executions, a unified manipulation graph can be created for the robot, as shown in Fig. 2F. The large rectangles in a manipulation graph correspond to the unique modes discovered by skill replays and exploration. The directed edges indicate the transition probabilities between the vertices in the graph. Some skills result in the robot remaining in the same mode while others result in switching into different modes, as indicated by the connections within the same mode or between different modes. After generating the manipulation graph, the robot can perform the task through graph traversal by executing a sequence of skills in the graph. It can also confirm successful or failed skill executions by clustering the tactile sensory signals at the end of the skill execution against the corresponding discovered success and failure modes." p. 4 §III.E; Fig. 9: each node in the skill graph represents position relative to the task at a time step).

Regarding claims 6 and 15, Su discloses: The system for proprioceptive learning of claim 1, wherein the task is a pose estimation task or a stability prediction task ("Once we have a skill graph and discovered the corresponding modes for the grasping task, a manipulation graph can be formed by combining them. As shown in Fig. 9, starting state (s0) is in mode 1, and a sequence of actions a1 reaching to the object and a2 forming a pinch grasp on the object result in states s1 and s2 respectively, which are clustered into the same mode as s0. Then action a3, closing both fingers on the object, causes a mode switch as state s31 is in mode 2. Executing action a4 lifts the object off of the table and action a5 places the object back onto the table but does not cause any detected mode switches because both states s41 and sf are still in mode 2, where sf is the final state. Executing action a3 could also result in failure mode 1, which is represented as failure state s32 in red in Fig. 9. After discovering this failure mode, continuing to execute the next action a4 either stays in the same failure mode or results in an additional failure mode 2, which is clustered together with mode 1. This corresponds to the object slipping out of the robot's fingers when it attempts to lift the object off the table." p. 7 §IV.D ¶1; the actions and positions are interpreted as a pose estimation task).

Regarding claim 7, Su discloses: The system for proprioceptive learning of claim 1, comprising the set of sensors that receive the set of sensor reading data ("Multimodal haptic signals, including proprioceptive signals and both low and high frequency tactile signals, are captured throughout human demonstration. The proprioceptive signals are the 6D Cartesian position and orientation of the robot's end-effector ypos ∈ R6 derived from the robot's forward kinematics. We also recorded the 6D Cartesian pose of the object in the robot's surroundings with a Vicon motion capture system yobj ∈ R6" p. 3 §III.A ¶2).

Regarding claim 8, Su discloses: The system for proprioceptive learning of claim 1, comprising one or more actuators executing the task based on the readouts (Figs. 5 and 8: the robot has an end effector/actuator).

Regarding claim 10, Su discloses: The system for proprioceptive learning of claim 1, wherein the processor performs multiple rounds of message passing operation between nodes of the first graph representation and the second graph representation to update the first graph representation and the second graph representation (p. 3 §III.A last ¶: message passing is used for each time step).
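For readers mapping the claim language to an implementation: the independent claims recite two graph representations updated by a non-recursive message passing operation between their nodes, and claim 10 adds multiple rounds of that operation. The sketch below is purely illustrative and is not the applicant's, Su's, Kranski's, or Chen's implementation; the bipartite adjacency, the residual-averaging update rule, and every name in it are assumptions. It shows one common reading: messages flow across a fixed adjacency between the two graphs for a fixed, unrolled number of rounds, consistent with the examiner's note that a recursive message passing network can be converted to a non-recursive one.

```python
# Illustrative sketch only: the update rule, the bipartite adjacency, and
# every name here are assumptions, not the applicant's or any cited
# reference's actual implementation.
import numpy as np

def cross_graph_message_passing(x_world, x_body, a_cross, num_rounds=3):
    """Update two graph representations by passing messages across a
    bipartite adjacency that links their nodes.

    x_world    -- (n_world, d) node features built from sensor readings
    x_body     -- (n_body, d) node features built from sensor positions
    a_cross    -- (n_world, n_body) 0/1 adjacency between the two graphs
    num_rounds -- fixed number of unrolled rounds ("multiple rounds" of the
                  operation); unrolling to a fixed depth, rather than
                  recursing to convergence, is one reading of "non-recursive"
    """
    # Degree-normalize so each incoming message is an average over the
    # node's cross-graph neighbors.
    deg_w = np.maximum(a_cross.sum(axis=1, keepdims=True), 1.0)
    deg_b = np.maximum(a_cross.sum(axis=0, keepdims=True).T, 1.0)
    for _ in range(num_rounds):
        msg_to_world = (a_cross @ x_body) / deg_w     # body -> world
        msg_to_body = (a_cross.T @ x_world) / deg_b   # world -> body
        # Residual averaging as the (assumed) node-update rule.
        x_world = 0.5 * (x_world + msg_to_world)
        x_body = 0.5 * (x_body + msg_to_body)
    return x_world, x_body

def readout(x):
    """Mean-pool a graph's node features into one vector for task execution."""
    return x.mean(axis=0)

# Smoke test with random features and a random bipartite adjacency.
rng = np.random.default_rng(0)
xw = rng.normal(size=(6, 4))                    # 6 world nodes, 4-dim features
xb = rng.normal(size=(3, 4))                    # 3 body nodes, 4-dim features
a = (rng.random((6, 3)) < 0.5).astype(float)    # world-to-body links
xw2, xb2 = cross_graph_message_passing(xw, xb, a)
task_input = np.concatenate([readout(xw2), readout(xb2)])
```

Because the rounds are unrolled rather than iterated to a fixed point, each round could carry its own parameters in a learned version, which is the sense in which a fixed-depth stack is "non-recursive."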
Claims 3, 4, 9, 12, 13, 16, 19 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Su in view of Kranski and Chen, and further in view of Dong et al. (Graph Neural Networks in IoT: A Survey).

Regarding claims 3, 12 and 19, Su does not explicitly disclose: The system for proprioceptive learning of claim 1, wherein the processor performs feature extraction to generate point cloud positions and feature embeddings based on the set of sensor reading data and wherein the processor constructs the first graph representation based on the point cloud positions and the feature embeddings.

Dong teaches: wherein the processor performs feature extraction to generate point cloud positions and feature embeddings based on the set of sensor reading data and wherein the processor constructs the first graph representation based on the point cloud positions and the feature embeddings ("The basic node features can be directly extracted from the tracked trajectories. Spatial coordinates of the objects in the agents' world coordinate frames are used as node attributes in [66, 147, 190], providing positional information and relative distance between agents. Authors of the works [41, 143, 260] use manually designed features to describe the state of each node at each time step, such as position, velocity and acceleration. These handcrafted features can provide more detailed information about the agents' state from both spatial and temporal perspectives." p. 13 §4.1.2 ¶2; "One of the benefits of LiDARs is that it can be used to provide precise and accurate localization and mapping, since they can produce a high-resolution densely spaced network of elevation points, referred as point clouds." p. 12 ¶1; note: point clouds can be used for location data, which is extracted to create features for graph neural networks).

Su, Kranski, Chen and Dong are in the same field of endeavor of graphs for sensor data and are analogous. Su discloses two graphs for processing sensory and position information for controlling a robot. Kranski teaches the use of point clouds for sensors and geometric arrangements (features) based on the point cloud for robot control. Chen teaches that a recursive message passing graph neural network can instead be implemented non-recursively. Dong teaches various known methods of using graph neural networks for processing signals. It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the graphs with message passing as disclosed by Su, Kranski and Chen with the known graph neural network and point cloud feature extraction as taught by Dong to yield predictable results.

Regarding claims 4, 13 and 20, Su does not explicitly disclose: The system for proprioceptive learning of claim 1, wherein the first graph representation is a world graph indicative of points from an object point cloud of an object in contact with at least some of the set of sensors at a time step.

Dong teaches: wherein the first graph representation is a world graph indicative of points from an object point cloud of an object in contact with at least some of the set of sensors at a time step ("The basic node features can be directly extracted from the tracked trajectories. Spatial coordinates of the objects in the agents' world coordinate frames are used as node attributes in [66, 147, 190], providing positional information and relative distance between agents. Authors of the works [41, 143, 260] use manually designed features to describe the state of each node at each time step, such as position, velocity and acceleration. These handcrafted features can provide more detailed information about the agents' state from both spatial and temporal perspectives." p. 13 §4.1.2 ¶2; "One of the benefits of LiDARs is that it can be used to provide precise and accurate localization and mapping, since they can produce a high-resolution densely spaced network of elevation points, referred as point clouds." p. 12 ¶1).

Regarding claims 9 and 16, Su does not explicitly disclose: The system for proprioceptive learning of claim 1, wherein the performing the message passing operation between nodes of the first graph representation and the second graph representation is based on a hierarchical graph neural network (GNN).

Dong teaches: wherein the performing the message passing operation between nodes of the first graph representation and the second graph representation is based on a hierarchical graph neural network (GNN) ("For example, in traffic networks, the installed traffic camera and ultrasound sensors that continuously collecting data are the nodes in this case. In the papers that we surveyed, Zhang et al. [318] proposed a novel semi-supervised hierarchical recurrent graph neural network (SHARE) for predicting city-wide parking availability by developing a graph structure from sensors such as camera, ultrasonic and GPS." p. 23; note: graph neural networks, and specifically a hierarchical graph neural network, can be used for position data).

Response to Arguments

Applicant's arguments with respect to claims 1-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ERIC NILSSON, whose telephone number is (571) 272-5246. The examiner can normally be reached M-F, 7-3. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, James Trujillo, can be reached at (571) 272-3677. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ERIC NILSSON/
Primary Examiner, Art Unit 2151
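The Dong-based grounds above turn on two technical ideas: constructing a graph representation from point cloud positions plus per-point feature embeddings, and passing messages through a hierarchical GNN. As a generic illustration only, and not anything taken from Su, Kranski, Chen, Dong, or the application itself, the sketch below builds k-nearest-neighbor edges from 3D positions and pools points into one coarser level of the kind a hierarchical GNN might message-pass through; the k-NN rule, the voxel pooling, and all names are assumptions.

```python
# Generic "point cloud -> graph" illustration. The k-NN edge rule, the voxel
# pooling, and all names are assumptions, not taken from the cited references
# or the application.
import numpy as np

def knn_graph(positions, k=4):
    """Return (src, dst) edge arrays connecting each point to its k nearest
    neighbors by Euclidean distance, given (n, 3) point positions."""
    n = positions.shape[0]
    d2 = ((positions[:, None, :] - positions[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)                # exclude self-edges
    neighbors = np.argsort(d2, axis=1)[:, :k]   # (n, k) nearest indices
    src = np.repeat(np.arange(n), k)
    dst = neighbors.reshape(-1)
    return src, dst

def coarsen(positions, features, cell=0.05):
    """One hierarchy level: average positions and feature embeddings of all
    points that fall into the same voxel cell (an assumed pooling scheme)."""
    keys = np.floor(positions / cell).astype(int)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    m = int(inverse.max()) + 1
    counts = np.bincount(inverse, minlength=m).astype(float)[:, None]
    pooled_pos = np.zeros((m, positions.shape[1]))
    pooled_feat = np.zeros((m, features.shape[1]))
    np.add.at(pooled_pos, inverse, positions)   # scatter-add into cells
    np.add.at(pooled_feat, inverse, features)
    return pooled_pos / counts, pooled_feat / counts

# Fine level: k-NN graph over raw points; coarse level: pooled voxel nodes.
rng = np.random.default_rng(1)
pts = rng.random((32, 3))                       # synthetic contact points
feats = rng.normal(size=(32, 8))                # per-point feature embeddings
src, dst = knn_graph(pts)
coarse_pts, coarse_feats = coarsen(pts, feats)
```

In this reading, the k-NN graph over contact points would play the role of the claimed world graph at a time step, and the pooled voxel nodes would supply the coarse level of a hierarchical GNN.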

Prosecution Timeline

Oct 25, 2022: Application Filed
Jul 15, 2025: Non-Final Rejection — §103
Sep 26, 2025: Response Filed
Nov 02, 2025: Final Rejection — §103
Dec 31, 2025: Request for Continued Examination
Jan 20, 2026: Response after Non-Final Action
Jan 28, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602587: MULTI-TASK DEEP LEARNING NETWORK AND GENERATION METHOD THEREOF (granted Apr 14, 2026; 2y 5m to grant)
Patent 12602615: EVALUATION OF MACHINE LEARNING MODELS USING AGREEMENT SCORES (granted Apr 14, 2026; 2y 5m to grant)
Patent 12591762: METHOD, SYSTEM FOR ODOR VISUAL EXPRESSION BASED ON ELECTRONIC NOSE TECHNOLOGY, AND STORAGE MEDIUM (granted Mar 31, 2026; 2y 5m to grant)
Patent 12585942: METHOD AND SYSTEM FOR MACHINE LEARNING AND PREDICTIVE ANALYTICS OF FRACTURE DRIVEN INTERACTIONS (granted Mar 24, 2026; 2y 5m to grant)
Patent 12585953: RADIO SIGNAL IDENTIFICATION, IDENTIFICATION SYSTEM LEARNING, AND IDENTIFIER DEPLOYMENT (granted Mar 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 83%
With Interview: 99% (+18.0%)
Median Time to Grant: 3y 2m
PTA Risk: High
Based on 494 resolved cases by this examiner. Grant probability derived from career allow rate.
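As a check on the arithmetic behind these cards, the minimal sketch below reproduces the headline figures from the counts shown, assuming the page rounds the raw allow ratio to a whole percent, adds the interview lift directly to it, and caps the displayed result at 99%; the additive model and the cap are assumptions, not a documented formula.

```python
# Reproduces the dashboard's headline figures under assumed rounding and
# capping rules; the page's actual formula is not documented here.
granted, resolved = 408, 494
interview_lift = 0.18             # the +18.0% lift shown for interviewed cases

allow_rate = granted / resolved                  # 0.8259... shown as 83%
with_interview = min(allow_rate + interview_lift, 0.99)   # assumed 99% cap

print(f"Career allow rate: {allow_rate:.0%}")      # -> 83%
print(f"With interview:    {with_interview:.0%}")  # -> 99%
```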
