Prosecution Insights
Last updated: April 19, 2026
Application No. 18/894,132

ROBOT, ROBOT CONTROL METHOD AND RECORDING MEDIUM

Non-Final OA: §102, §112
Filed: Sep 24, 2024
Examiner: MOLNAR, SIDNEY LEIGH
Art Unit: 3656
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Casio Computer Co., Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 54% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 2y 4m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 54% (7 granted / 13 resolved; +1.8% vs TC avg)
Interview Lift: +85.7% (strong), among resolved cases with interview
Typical Timeline: 2y 4m average prosecution; 31 applications currently pending
Career History: 44 total applications across all art units
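For readers who want to sanity-check the headline figures, the allow rate and interview lift reduce to simple arithmetic. The 7/13 counts come from the card above; the with/without-interview rates in the example are hypothetical, chosen only to show how a lift near +85.7% could arise:

```python
# Sanity-check sketch for the examiner cards above. The 7/13 counts are
# from the card; the with/without-interview rates passed to interview_lift
# are HYPOTHETICAL, illustrating the lift formula only.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def interview_lift(rate_with: float, rate_without: float) -> float:
    """Relative lift (%) of the with-interview rate over the without-interview rate."""
    return 100.0 * (rate_with - rate_without) / rate_without

print(round(allow_rate(7, 13)))              # 54, matching the Career Allow Rate card
print(round(interview_lift(97.5, 52.5), 1))  # 85.7 with these hypothetical rates
```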

Statute-Specific Performance

§101: 8.7% (-31.3% vs TC avg)
§103: 42.2% (+2.2% vs TC avg)
§102: 22.3% (-17.7% vs TC avg)
§112: 26.1% (-13.9% vs TC avg)
Tech Center averages are estimates. Based on career data from 13 resolved cases.
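The "vs TC avg" deltas above are self-consistent: subtracting each delta from its statute's allow rate recovers the same Tech Center average estimate. A quick sketch, with the numbers copied from the cards above:

```python
# Numbers copied from the statute cards above; "vs TC avg" is read as
# (examiner rate - Tech Center average), so rate - delta recovers the average.

rates  = {"101": 8.7, "103": 42.2, "102": 22.3, "112": 26.1}
deltas = {"101": -31.3, "103": 2.2, "102": -17.7, "112": -13.9}

tc_avg = {s: round(rates[s] - deltas[s], 1) for s in rates}
print(tc_avg)  # each statute implies the same 40.0% Tech Center estimate
```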

Office Action

Rejections: §102, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claim 3 is objected to because of the following informalities: Claim 3 recites the limitation “…the at least one processor is configured to, …, the robot to perform an action…” in lines 7-8. The limitation appears to be missing the verb “cause” between “configured to” and “the robot”. Examiner recommends correcting the limitation so that it remains consistent with the preceding limitations of claims 1 and 2, in which the processor causes the robot to perform an action. The limitation will thus be read as “…the at least one processor is configured to cause, …, the robot to perform an action…”. Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 10 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. Claim 10 recites the limitation “A robot control method, comprising: acquiring a growth parameter indicating a growth of the robot…” in lines 1-2. There is insufficient antecedent basis for this limitation of the claim.
The robot control method does not positively claim the robot itself, so Applicant should positively claim the robot so that it is clear which robot is being referred to. As such, Examiner will interpret the claim generally to read “…acquiring a growth parameter indicating a growth of a robot…” such that any robot will sufficiently read on the claim limitation. Examiner notes that the claims have been addressed below, in view of the prior art record, as best understood by the Examiner in light of the 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph rejections provided herein.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-11 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Hasegawa et al. (US 2022/0299999 A1; hereinafter “Hasegawa”; included in Applicant’s IDS which was submitted on March 21, 2025).
Regarding claim 1, Hasegawa discloses a robot (“robot 200”; Fig. 1), comprising: at least one memory storing a first table and a second table (“Returning to FIG. 8, among pieces of the data stored in the storage unit 120, emotion data 121, emotion change data 122, a growth table 123, a movement content table 124, the motion table 125, and days-of-growth data 126, which are pieces of characteristic data in the embodiment, will be described in order” [0066]. Thus, a memory stores a first table, i.e., growth table, and a second table, i.e., motion table.), the first table storing a growth parameter indicating a growth of the robot and data indicating behavior information specifying an action of the robot corresponding to the growth parameter in association with each other (“As illustrated in FIG. 12, types of movements performed by the robot 200 according to movement triggers, such as external stimuli detected by the sensor unit 210, and a probability that each of the movements is selected according to a growth value (hereinafter referred to as a “movement selection probability”) are recorded in the growth table 123” [0081]. Thus, there is a first table which stores a growth value, i.e., growth parameter, which indicates the growth of the robot and movement type, i.e., data indicating behavior information specifying an action of the robot corresponding to the growth parameter. The growth table associates growth values with movement type and as such associates the growth parameter and behavior information with each other.), the second table storing data indicating the behavior information and data indicating an action file defining the action of the robot in association with each other (“As illustrated in FIG. 14, the motion table 125 is a table in which how the processing unit 110 controls the twist motor 221 and the up-and-down motor 222 is recorded for each movement type defined by the growth table 123” [0087]. Thus, there is a second table which stores control actions which are associated with the movement type, i.e., data indicating the behavior information. The three cells which indicate Time, Twist Motor, and Up-and-Down Motor are data indicating an action file, as all three such commands contribute to the total action.); and at least one processor configured to acquire the growth parameter and cause, based on data stored in the first table and data stored in the second table, the robot to perform an action corresponding to the acquired growth parameter (“In the embodiment, the largest value among these four character values is used as the growth degree data (growth value) indicating the simulated growth degree of the robot 200. Then, the processing unit 110 performs control such that a movement content of the robot 200 varies with the simulated growth of the robot 200 (as the growth value increases). The data used by the processing unit 110 for this purpose is the growth table 123” [0080]. Thus, there is a control action performed based on the growth value acquired by the processing unit. The control is performed based on the information contained in the growth table, which directly associates a movement type with an action file in the motion table.).

Regarding claim 2, Hasegawa discloses the robot according to claim 1, wherein the first table stores, in association with one another, the growth parameter indicating the growth of the robot, the data indicating the behavior information, and data indicating a probability that the behavior information is selected (“As illustrated in FIG. 12, types of movements performed by the robot 200 according to movement triggers, such as external stimuli detected by the sensor unit 210, and a probability that each of the movements is selected according to a growth value (hereinafter referred to as a “movement selection probability”) are recorded in the growth table 123” [0081].
Thus, in addition to the associated movement type and growth value, the first table also has data indicating a probability of a behavior being selected based on the growth value.), and the at least one processor is configured to cause, based on the data stored in the first table and the data stored in the second table, the robot to perform an action among a plurality of actions corresponding to the acquired growth parameter (As indicated in the rejection of claim 1, actions are performed based on the movement type among a plurality of movement types shown in each of the first and second tables.), the action to be performed by the robot being selected based on the probability (“The movement selection probabilities are set such that basic movements set according to the movement triggers are selected regardless of the character values while the growth value is small, and character movements set according to the character values are selected in the case where the growth value increases. Further, the movement selection probabilities are set such that types of the basic movements that can be selected increase as the growth value increases” [0081]. Thus, different movement types are selected based on a probability associated with a growth value, and as such the action to be performed is additionally selected based on the probability.).

Regarding claim 3, Hasegawa discloses the robot according to claim 2, wherein the at least one processor is configured to acquire an external stimulus (“The processing unit 110 acquires detection values detected by various sensors included in the sensor unit 210 as external stimulus data representing external stimuli applied to the robot 200 via the bus line BL” [0055]. Thus, the processor acquires external stimuli.), the first table further contains data indicating a trigger for an action corresponding to the external stimulus and stores, in association with one another, data indicating the trigger, the growth parameter, the data indicating the behavior information, and the data indicating the probability that the behavior information is selected (“As illustrated in FIG. 12, types of movements performed by the robot 200 according to movement triggers, such as external stimuli detected by the sensor unit 210, and a probability that each of the movements is selected according to a growth value (hereinafter referred to as a “movement selection probability”) are recorded in the growth table 123” [0081]. Thus, in addition to the growth value, the movement type (behavior information), and the probability that a movement type will be selected, the first table additionally stores a movement trigger for a specific set of movement types which corresponds to the external stimulus which is detected.), and the at least one processor is configured to, upon detecting the external stimulus, the robot to perform an action based on the trigger (“Then, the processing unit 110 executes the movement selection process using information on the external stimulus acquired in step S103 as the movement trigger (step S106), and then, proceeds to step S108” [0110]. Thus, the movement type is selected upon detecting the external stimulus such that the robot performs an action based on the trigger which determines the movement type.).

Regarding claim 4, Hasegawa discloses the robot according to claim 1, wherein the second table contains sound data indicating an animal sound corresponding to the behavior information (In Fig. 14, i.e., the second table, there is sound data which is associated with each movement type.), and the at least one processor is configured to cause the robot to perform an action based on the action file and the sound data that correspond to the behavior information (The corresponding action in the motion table is inclusive of the action file (determined as the Time, Twist Motor, and Up-and-Down Motor in the rejection of claim 1) and additionally the sound data.).

Regarding claim 5, Hasegawa discloses the robot according to claim 1, wherein the at least one processor is configured to set a parameter indicating a pseudo-emotion and a pseudo-personality of the robot (“Then, the processing unit 110 sets the emotion data 121 according to the emotion change data 122 acquired in step S104 (step S105)” [0095]. “First, the processing unit 110 determines whether it is in the first period (step S200). in the first period (step S200; Yes), the processing unit 110 calculates the character value from the emotion change data 122 learned in step S113 (step S201)” [0124]. “On the other hand, if it is not in the first period (step S200; No), the processing unit 110 calculates the corrected character value based on the emotion change data 122 learned in step S113 and the character correction value adjusted in step S112 (step S209)” [0125]. Thus, the emotion data 121 is set as indicating a pseudo-emotion such as those emotions shown in Fig. 10, and the character value is set as indicating a pseudo-personality of the robot such as those personalities shown in Fig. 11.), the second table contains data indicating an emotion change amount corresponding to the behavior information (The emotion change data is determined based on external stimuli (see [0094]), which then determines a movement type corresponding to the external stimuli (see Fig. 13 showing movement type and the external stimulus associated with the movement type), and then such external stimulus is included in the motion table, i.e., second table, shown in Fig. 14. Thus, the second table contains data indicating an emotion change amount which corresponds directly to the external stimulus which determines the movement type, i.e., behavior information.), and the at least one processor is configured to update an emotion parameter based on the emotion change amount corresponding to the behavior information (“Then, the processing unit 110 sets the emotion data 121 according to the emotion change data 122 acquired in step S104 (step S105)” [0095]. Thus, the emotion data, i.e., emotion parameter, is set, i.e., updated, in each iteration based on the emotion change data corresponding to the behavior information selected based on the external stimulus condition indicating the emotion change data.), set a personality parameter indicating the pseudo-personality of the robot in accordance with the update of the emotion parameter (The character correction value adjustment process shown in Fig. 17 indicates how the personality parameter, i.e., character value, which indicates the pseudo-personality of the robot, is updated depending on the emotion data of Fig. 10.), and set the pseudo-personality based on the personality parameter (Growth values are determined as the maximum character value, i.e., personality parameter, and each such personality is then weighted to determine the pseudo-personality for character movements (see [0128-0130]).).

Regarding claim 6, Hasegawa discloses the robot according to claim 1, wherein the at least one processor is configured to set a parameter indicating a pseudo-emotion and a pseudo-personality of the robot (“Then, the processing unit 110 sets the emotion data 121 according to the emotion change data 122 acquired in step S104 (step S105)” [0095].
“First, the processing unit 110 determines whether it is in the first period (step S200). in the first period (step S200; Yes), the processing unit 110 calculates the character value from the emotion change data 122 learned in step S113 (step S201)” [0124]. “On the other hand, if it is not in the first period (step S200; No), the processing unit 110 calculates the corrected character value based on the emotion change data 122 learned in step S113 and the character correction value adjusted in step S112 (step S209)” [0125]. Thus, the emotion data 121 is set as indicating a pseudo-emotion such as those emotions shown in Fig. 10, and the character value is set as indicating a pseudo-personality of the robot such as those personalities shown in Fig. 11.), the first table contains data indicating an emotion change amount corresponding to a trigger (According to the movement selection process (Fig. 16), character values which contain data indicating the emotion change amount, i.e., emotion change data, are used to determine growth values. “Then, the processing unit 110 acquires the emotion change data 122 to be added to or subtracted from the emotion data 121 in response to the external stimulus acquired in step S103 (step S104)” [0094]. The emotion change data is determined in response to the external stimulus, i.e., trigger, and thus the first table which contains the growth values based on the character value contains data indicating an emotion change amount corresponding to the external stimulus trigger.), and the at least one processor is configured to update an emotion parameter based on the emotion change amount corresponding to the trigger (“Then, the processing unit 110 sets the emotion data 121 according to the emotion change data 122 acquired in step S104 (step S105)” [0095]. Thus, the emotion data, i.e., emotion parameter, is set, i.e., updated, in each iteration based on the emotion change data corresponding to the trigger.), set a personality parameter indicating the pseudo-personality of the robot in accordance with the update of the emotion parameter (The character correction value adjustment process shown in Fig. 17 indicates how the personality parameter, i.e., character value, which indicates the pseudo-personality of the robot, is updated depending on the emotion data of Fig. 10.), and set the pseudo-personality based on the personality parameter (Growth values are determined as the maximum character value, i.e., personality parameter, and each such personality is then weighted to determine the pseudo-personality for character movements (see [0128-0130]).).

Regarding claim 7, Hasegawa discloses the robot according to claim 5, wherein the personality parameter includes a plurality of personality values that express degrees of mutually different personalities (Fig. 11 shows the set character values, i.e., personality parameters, which include the plurality of character values that express the degrees of mutually different personalities such as happy, shy, active, and wanted.), and the at least one processor is configured to set a pseudo-personality of the robot based on the personality values (As described in [0128-0130], the pseudo-personality of the robot is set based on a probabilistic weighting of character values, for setting the pseudo-personality of the robot during character movements.).

Regarding claim 8, Hasegawa discloses the robot according to claim 5, wherein the behavior information contained in the first table includes behavior information depending on the pseudo-personality of the robot (The movement type, i.e., behavior information, includes character movements which are behaviors which depend on the character values, i.e., values indicating the pseudo-personality of the robot. See Fig. 13 for examples of how the character type, i.e., pseudo-personality, contributes to the movement content which is included in the action based on the character movement type.), and the at least one processor is configured to select, based on the set pseudo-personality of the robot, the behavior information corresponding to the personality, and cause the robot to perform an action based on the selected behavior information (“Then, the processing unit 110 selects the character movement using a random number based on the selection probability of each of the characters acquired in step S206 (step S207)” [0130]. “Next, the processing unit 110 executes the movement selected in step S204 or S207 (step S208), ends the movement selection process, and proceeds to step S108 in the movement control process” [0131]. Thus, when the character movement is selected, behavior information corresponding to the personality is selected and the robot performs an action based on this selected behavior information. See additionally the example action in the motion table (Fig. 14) which selected a character movement corresponding to the “happy” character.).

Regarding claim 9, Hasegawa discloses the robot according to claim 1, wherein the growth parameter contains data indicating an elapsed time from a reference date and time (“In the embodiment, the largest value among these four character values is used as the growth degree data (growth value) indicating the simulated growth degree of the robot 200” [0080]. “Assuming that the first period is, for example, a period of 50 days since the simulated birth of the robot 200 (for example, at the time of the first activation by the user after purchase), the processing unit 110 determines that it is in the first period if the days-of-growth data 126 is 50 or less. If it is not in the first period (step S111; No), the processing unit 110 executes the character correction value adjustment process (step S112), and proceeds to step S115.
Note that details of the character correction value adjustment process will be described later” [0115]. “For example, as this predetermined condition, a condition that “the largest value among the four character values is a predetermined value or more as the growth degree data representing the simulated growth degree of the robot 200” may be used instead of a condition that “the days-of-growth data 126 is equal to or more than a predetermined value” or together with this condition (under the OR condition). Further, this growth degree data may be set according to the number of days, may be set according to the number of times the external stimulus has been detected, may be set according to the character value, or may be set according to a value obtained by combining them (for example, the sum, an average value, or the like of them)” [0146]. Thus, the growth degree data, i.e., the growth parameter, is influenced by the character data which is adjusted based on an elapsed days-of-growth or number of days metric. Thus, the growth degree data contains data indicating an elapsed time from a reference date and time (simulated birth of the robot, [0070]). See additionally Fig. 15, S110-S115, which indicate the character correction value adjustment process.).

Regarding claim 10, Hasegawa discloses a robot control method (Fig. 15 displays a movement control process, i.e., method, for controlling the robot 200.), comprising: acquiring a growth parameter indicating a growth of the robot, referencing at least one memory storing a first table and a second table (“Next, the processing unit 110 calculates the largest numerical value among these character values as the growth value (step S202). Then, the processing unit 110 refers to the growth table 123, and acquires the movement selection probability of each of the movement types corresponding to the movement trigger given at the time of executing the movement selection process and the growth value calculated in step S202 (step S203)” [0126]. A growth value indicating the growth of the robot is thus acquired by referring to the growth table, i.e., first table, which shows selected movement types corresponding to the motion table, i.e., second table. The growth and motion tables are stored in a memory (see [0066]).), the first table storing a growth parameter indicating a growth of the robot and data indicating behavior information specifying an action of the robot corresponding to the growth parameter in association with each other (“As illustrated in FIG. 12, types of movements performed by the robot 200 according to movement triggers, such as external stimuli detected by the sensor unit 210, and a probability that each of the movements is selected according to a growth value (hereinafter referred to as a “movement selection probability”) are recorded in the growth table 123” [0081]. Thus, there is a first table which stores a growth value, i.e., growth parameter, which indicates the growth of the robot and movement type, i.e., data indicating behavior information specifying an action of the robot corresponding to the growth parameter. The growth table associates growth values with movement type and as such associates the growth parameter and behavior information with each other.), the second table storing data indicating the behavior information and data indicating an action file defining the action of the robot in association with each other (“As illustrated in FIG. 14, the motion table 125 is a table in which how the processing unit 110 controls the twist motor 221 and the up-and-down motor 222 is recorded for each movement type defined by the growth table 123” [0087].
Thus, there is a second table which stores control actions which are associated with the movement type, i.e., data indicating the behavior information. The three cells which indicate Time, Twist Motor, and Up-and-Down Motor are data indicating an action file, as all three such commands contribute to the total action.); and causing, based on data stored in the first table and data stored in the second table, the robot to perform an action corresponding to the acquired growth parameter (“In the embodiment, the largest value among these four character values is used as the growth degree data (growth value) indicating the simulated growth degree of the robot 200. Then, the processing unit 110 performs control such that a movement content of the robot 200 varies with the simulated growth of the robot 200 (as the growth value increases). The data used by the processing unit 110 for this purpose is the growth table 123” [0080]. Thus, there is a control action performed based on the growth value acquired by the processing unit. The control is performed based on the information contained in the growth table, which directly associates a movement type with an action file in the motion table.).

Regarding claim 11, Hasegawa discloses a non-transitory computer-readable recording medium storing a program causing a computer to (“The ROM stores the program to be executed by the CPU of the processing unit 110 and data necessary for executing the program in advance” [0052]. Thus, the ROM, i.e., non-transitory computer-readable recording medium, stores a program which is executed by a CPU and causes the computer to perform the methods of the program as will be described below.): acquire a growth parameter indicating a growth of a robot; reference at least one memory storing a first table and a second table (“Next, the processing unit 110 calculates the largest numerical value among these character values as the growth value (step S202). Then, the processing unit 110 refers to the growth table 123, and acquires the movement selection probability of each of the movement types corresponding to the movement trigger given at the time of executing the movement selection process and the growth value calculated in step S202 (step S203)” [0126]. A growth value indicating the growth of the robot is thus acquired by referring to the growth table, i.e., first table, which shows selected movement types corresponding to the motion table, i.e., second table. The growth and motion tables are stored in a memory (see [0066]).), the first table storing a growth parameter indicating a growth of the robot and data indicating behavior information specifying an action of the robot corresponding to the growth parameter in association with each other (“As illustrated in FIG. 12, types of movements performed by the robot 200 according to movement triggers, such as external stimuli detected by the sensor unit 210, and a probability that each of the movements is selected according to a growth value (hereinafter referred to as a “movement selection probability”) are recorded in the growth table 123” [0081]. Thus, there is a first table which stores a growth value, i.e., growth parameter, which indicates the growth of the robot and movement type, i.e., data indicating behavior information specifying an action of the robot corresponding to the growth parameter. The growth table associates growth values with movement type and as such associates the growth parameter and behavior information with each other.), the second table storing data indicating the behavior information and data indicating an action file defining the action of the robot in association with each other (“As illustrated in FIG. 14, the motion table 125 is a table in which how the processing unit 110 controls the twist motor 221 and the up-and-down motor 222 is recorded for each movement type defined by the growth table 123” [0087].
Thus, there is a second table which stores control actions which are associated with the movement type, i.e., data indicating the behavior information. The three cells which indicate Time, Twist Motor, and Up-and-Down Motor are data indicating an action file, as all three such commands contribute to the total action.); and cause, based on data stored in the first table and data stored in the second table, the robot to perform an action corresponding to the acquired growth parameter (“In the embodiment, the largest value among these four character values is used as the growth degree data (growth value) indicating the simulated growth degree of the robot 200. Then, the processing unit 110 performs control such that a movement content of the robot 200 varies with the simulated growth of the robot 200 (as the growth value increases). The data used by the processing unit 110 for this purpose is the growth table 123” [0080]. Thus, there is a control action performed based on the growth value acquired by the processing unit. The control is performed based on the information contained in the growth table, which directly associates a movement type with an action file in the motion table.).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Other similar references which were briefly considered but deemed unnecessary in rejecting the current claims are included in the “Notice of References Cited” form attached to the file (PTO-892).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SIDNEY L MOLNAR whose telephone number is (571) 272-2276. The examiner can normally be reached 8 A.M. to 3 P.M. EST Monday-Friday. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jonathan (Wade) Miles, can be reached at (571) 270-7777. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/S.L.M./
Examiner, Art Unit 3656

/WADE MILES/
Supervisory Patent Examiner, Art Unit 3656
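For readers less familiar with the two-table control scheme the §102 rejection maps onto Hasegawa (a growth table that associates a growth value and a movement trigger with movement types and selection probabilities, and a motion table that associates each movement type with concrete motor commands), a minimal sketch may help. All table contents, names, and thresholds below are hypothetical placeholders, not data from Hasegawa or the application:

```python
import random

# HYPOTHETICAL "first table" (growth table): (movement trigger, growth
# bucket) -> movement types with selection probabilities, mirroring the
# role the rejection assigns to Hasegawa's growth table 123.
GROWTH_TABLE = {
    ("petted", "low"):  {"basic_wiggle": 1.0},
    ("petted", "high"): {"basic_wiggle": 0.4, "happy_dance": 0.6},
}

# HYPOTHETICAL "second table" (motion table): movement type -> action
# file, here a list of (time, twist-motor, up-and-down-motor) commands,
# mirroring the role of Hasegawa's motion table 125.
MOTION_TABLE = {
    "basic_wiggle": [(0.5, "twist:+10", "lift:0")],
    "happy_dance":  [(0.3, "twist:+30", "lift:+20"), (0.3, "twist:-30", "lift:0")],
}

def select_action(trigger, growth_value, rng):
    """First lookup: pick behavior information by probability from the
    growth table; second lookup: resolve it to an action file."""
    bucket = "high" if growth_value >= 50 else "low"
    candidates = GROWTH_TABLE[(trigger, bucket)]
    movement = rng.choices(list(candidates), weights=list(candidates.values()))[0]
    return movement, MOTION_TABLE[movement]

# A low growth value leaves only the basic movement selectable.
movement, action_file = select_action("petted", 10, random.Random(0))
print(movement)  # basic_wiggle
```

The point of the sketch is the structure of the anticipation theory: the first lookup turns growth state into behavior information, and the second turns behavior information into an action file, mirroring the claimed two-table arrangement.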

Prosecution Timeline

Sep 24, 2024: Application Filed
Feb 10, 2026: Non-Final Rejection, §102 and §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12600039: ROBOT, CONVEYING SYSTEM, AND ROBOT-CONTROLLING METHOD (granted Apr 14, 2026; 2y 5m to grant)
Patent 12533807: ROBOTIC APPARATUS AND CONTROL METHOD THEREOF (granted Jan 27, 2026; 2y 5m to grant)
Patent 12479098: SURGICAL ROBOTIC SYSTEM WITH ACCESS PORT STORAGE (granted Nov 25, 2025; 2y 5m to grant)
Patent 12384048: TRANSFER APPARATUS (granted Aug 12, 2025; 2y 5m to grant)
Patent 12376922: TOOL HEAD POSTURE ADJUSTMENT METHOD, APPARATUS AND READABLE STORAGE MEDIUM (granted Aug 05, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 54%
With Interview: 99% (+85.7%)
Median Time to Grant: 2y 4m
PTA Risk: Low
Based on 13 resolved cases by this examiner. Grant probability derived from career allow rate.
