Prosecution Insights
Last updated: April 19, 2026
Application No. 18/268,494

ROBOT SYSTEM AND ROBOT WORKING METHOD

Status: Non-Final OA — §103
Filed: Jun 20, 2023
Examiner: DANG, TRANG THANH
Art Unit: 3656
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Kawasaki Jukogyo Kabushiki Kaisha
OA Round: 3 (Non-Final)

Grant Probability: 44% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 3y 3m
With Interview: 75%

Examiner Intelligence

Career Allow Rate: 44% (16 granted / 36 resolved; -7.6% vs TC avg)
Interview Lift: +30.7% (strong; allow rate among resolved cases with vs. without an interview)
Avg Prosecution: 3y 3m (typical timeline; 24 currently pending)
Total Applications: 60 (career history, across all art units)
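
As a rough illustration of how these panel figures relate, the short Python sketch below recomputes the career allow rate from the granted/resolved counts and applies the interview lift. Only the 16/36 counts and the +30.7-point lift come from the panel; modeling the with-interview probability as base rate plus lift is an assumption about how the dashboard derives its 75% figure.

    GRANTED = 16            # granted cases (from panel)
    RESOLVED = 36           # resolved cases (from panel)
    INTERVIEW_LIFT = 0.307  # allow-rate lift with an interview (from panel)

    # Career allow rate: 16 / 36 is ~44.4%, displayed as 44%.
    allow_rate = GRANTED / RESOLVED
    print(f"Career allow rate: {allow_rate:.1%}")

    # Assumption: the with-interview estimate is base rate + lift, capped at 100%.
    with_interview = min(1.0, allow_rate + INTERVIEW_LIFT)
    print(f"Estimated grant probability with interview: {with_interview:.1%}")

Running this prints ~44.4% and ~75.1%, matching the rounded 44% and 75% shown above.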

Statute-Specific Performance

Statute   Rate    vs TC Avg
§101      7.9%    -32.1%
§103      39.8%   -0.2%
§102      21.0%   -19.0%
§112      28.7%   -11.3%

"vs TC Avg" is measured against the Tech Center average estimate • Based on career data from 36 resolved cases
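
The "vs TC Avg" column is simply the difference between the examiner's per-statute rate and the Tech Center average estimate, so the TC averages can be recovered by subtraction. A minimal sketch, assuming delta = examiner rate minus TC average (consistent with the table's labeling):

    # Per-statute rates and deltas as shown in the table (in percent).
    examiner_rate = {"101": 7.9, "103": 39.8, "102": 21.0, "112": 28.7}
    delta_vs_tc = {"101": -32.1, "103": -0.2, "102": -19.0, "112": -11.3}

    # Assumption: delta = examiner rate - TC average, so TC average = rate - delta.
    for statute, rate in examiner_rate.items():
        tc_avg = rate - delta_vs_tc[statute]
        print(f"§{statute}: examiner {rate:.1f}% vs TC avg ~{tc_avg:.1f}%")

Note that every recovered TC average lands at exactly 40.0%, a useful consistency check on the table's deltas.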

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/2/2025 has been entered.

Status of Claims

Claims 1-8 are pending in the instant application.

Response to Amendment/Arguments

Applicant's arguments filed on 12/05/2025 have been fully considered as below. Applicant's arguments with respect to the objection to claim 6, see page 5 of Remarks, have been fully considered and are persuasive in light of the amendments. The objection to claim 6 has been withdrawn. Applicant's arguments with respect to the rejections of the claims under 35 USC 103, see pages 5-12 of Remarks, have been considered but are moot in view of the new grounds of rejection provided below, in light of newly found prior art, which was necessitated by Applicant's amendments changing the scope of the claims.

Drawings

The drawings are objected to under 37 CFR 1.83(a) because they fail to show "the arm imitation parts 160a of the self-propelled robot simulated images 160 are disposed in the left end part and the right end part of the upper end part of the circumference situation image 50 so that they are connected with the tip-end parts 50a of the robotic arms 121A and 121B displayed in the circumference situation image 50" as described in the specification (published paragraph [0102]). Reference numbers 160/160a/50a show unclear structure/details, and it is hard to understand what parts of the robot they illustrate in Figure 7. Any structural detail that is essential for a proper understanding of the disclosed invention should be shown in the drawing. MPEP § 608.02(d).

Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as "amended." If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either "Replacement Sheet" or "New Sheet" pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Specification

The disclosure is objected to because of the following informalities: The instant specification states, "Further, the robot system 100 is configured so that, when the synthesized image generator 116 generates the synthesized image 701 of the first person viewpoint which is looked from the self-propelled robot 1, the simulated image generator 115 generates the self-propelled robot simulated image 160 so that the arm imitation part 160a which imitates at least a part of the portion of the robotic arms 121A and 121B of the self-propelled robot 1 in the self-propelled robot simulated image 160, which are not displayed in the circumference situation image 50, are connected with the part 50a of the robotic arm displayed in the circumference situation image, and the synthesized image generator 116 generates the synthesized image 50 of the first person viewpoint so that the arm imitation part 160a of the generated self-propelled robot simulated image 160 is connected with the part 50a of the robotic arm displayed in the circumference situation image 50" (published paragraph [0125]). It is unclear whether reference number 50 is a circumference situation image or a synthesized image of the first viewpoint. Appropriate correction is required.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3 and 5-8 are rejected under 35 U.S.C. 103 as being unpatentable over Kurosawa (US 11926994 B2), and further in view of Sakuta et al. (US 11946223 B2, hereinafter "Sakuta").

Regarding claim 1, Kurosawa teaches a robot system (Kurosawa, see at least Fig. 14, col. 50, lines 47-56, an excavator management system SYS), comprising: a self-propelled robot including a robotic arm having one or more joints (Kurosawa, see at least Figs. 1A-B, cols. 2-3, col. 25, lines 43-56, col. 8, lines 18-30, col. 43, lines 50-61, an autonomous excavator 100 including an attachment AT having one or more joints); a manipulating part that accepts operation by an operator to allow the operator to manipulate the self-propelled robot (Kurosawa, see at least Fig. 14, cols. 51-52, "In this case, the user may use an operation input means (e.g., a touch panel, a touch pad, a joystick, etc.) mounted on the terminal apparatus 200 or communicatively connected to the terminal apparatus 200"); a display visible by the operator (Kurosawa, see at least Fig. 14, cols. 51-52, "Specifically, the terminal apparatus 200 displays the image information captured by the imaging device 80 distributed from the management apparatus 300 or the excavator 100 on the display device, and the user may perform remote operation of the excavator 100 while viewing the image information"); a plurality of circumference cameras that are mounted around the self-propelled robot and image a situation around the self-propelled robot (Kurosawa, see at least Figs. 1B, 5, col. 45, lines 53-67, col. 46, lines 1-19, the plurality of circumference cameras 70F/70B/70L/70R/80F/80B/80L/80R that are mounted around the excavator 100 to capture images around the excavator 100); and processing circuitry (Kurosawa, see at least Fig. 4, col. 13, lines 20-44, display control unit D1a/controller 30), the processing circuitry being adapted to: generate a self-propelled robot simulated image (Kurosawa, see at least Figs. 7, 8A, 8B, col. 21, lines 50-67, col. 22, lines 1-17, the excavator image 821/871 is a computer graphic simulating the excavator 100 viewed from the perspective virtual viewpoint/top viewpoint/bird's eye view as illustrated in Figs. 8A-B, 11, 12); and generate a synthesized image displayed on the display (Kurosawa, see at least Figs. 5, 8A, 8B, synthesized image D1/820/870 displayed on the display device D1), the synthesized image including a circumference situation image captured by the plurality of circumference cameras in combination with the generated self-propelled robot simulated image (Kurosawa, see at least Figs. 6, 7, 8A, 8B, col. 16, lines 1-25, col. 21, lines 50-67, col. 22, lines 1-17, a synthesized image 820/870 is generated to display on the display D1, the synthesized image 820/870 including a three-dimensional image representing the work area around the excavator 100 in combination with the excavator image 821/871 and the surrounding image 500/800/850 as illustrated in Figs. 5, 8A-B; col. 22, lines 27-37, "The three-dimensional image representing the work area around the excavator 100, including the road cone image 822, the utility pole image 823, and the fence image 824, etc., may be generated as a viewpoint conversion image, for example, by performing a known viewpoint conversion process based on the image captured by the imaging device 80"; col. 14, lines 12-28, "The surrounding image may be, for example, at least one output image (captured image) of the front camera 80F, the back camera 80B, the left camera 80L, and the right camera 80R. Further, the surrounding image may be a viewpoint conversion image generated based on an output image of at least one of the front camera 80F, the back camera 80B, the left camera 80L, and the right camera 80R. The viewpoint conversion image may be, for example, a combination of a top view image viewing a relatively close area around the excavator 100 from directly above, and a horizontal image viewing a relatively far area around the excavator 100 from a horizontal direction with respect to the excavator 100"), wherein the synthesized image displayed on the display is further converted into images of at least three kinds of viewpoints, including a bird's eye image (Kurosawa, see at least Fig. 11, col. 45, lines 65-67, col. 46, lines 1-19, "Specifically, the setting screen 1100 displays a bird's-eye image viewed from directly above the excavator 100 (hereinafter, simply referred to as a 'bird's-eye image') (an example of an image representing a work area) that is generated by combining the images captured by the front camera 80F, the back camera 80B, the left camera 80L, and the right camera 80R after performing a known viewpoint conversion process"), an upper viewpoint image (Kurosawa, see at least Fig. 8B, col. 22, lines 57-67, col. 23, lines 3-27, the top viewpoint image of the surrounding area in combination with the top viewpoint 871 of the excavator 100 as illustrated in Fig. 8B), and a first person viewpoint image (Kurosawa, see at least Figs. 5, 8A, 8B, a first person viewpoint image 500/800/850), and wherein each of the images of the at least three kinds of viewpoints includes a separate circumference situation image in combination with a separately generated self-propelled robot simulated image, the separately generated self-propelled robot simulated images being different from each other (Kurosawa, see at least Figs. 8A-B, col. 22, lines 7-17, col. 23, lines 3-27, the surrounding area images including the road cone image 822, the utility pole image 823, and the fence image 824, etc., in combination with the perspective viewpoint of the excavator 100 and the first person viewpoint image 800 as illustrated in Fig. 8A, and a top view image of the surrounding area in combination with the top viewpoint of the excavator 100 and the first person viewpoint image 800 as illustrated in Fig. 8B; Fig. 11, col. 45, lines 53-67, col. 46, lines 1-19, the simulated image of the excavator 100 is disposed at the center of the bird's eye viewpoint that is generated by combining the images captured by the front camera 80F, the back camera 80B, the left camera 80L, and the right camera 80R after performing a known viewpoint conversion process; col. 15, lines 37-50, cols. 17-18, the operator can remotely change the image display 500/800/850 to an image captured by another camera by pressing an image change switch on assist device 400 to display a different viewpoint image).

While Kurosawa does teach generating a computer graphic simulating the excavator, including a posture of the arm/attachment, Kurosawa fails to explicitly teach a self-propelled robot simulated image that imitates every moment of a posture of the self-propelled robot including a posture of the robotic arm. Sakuta teaches, see at least Figs. 1A, 13-16, cols. 25-26, cols. 28-29, col. 30, lines 44-56, that the controller is configured to generate graphic shapes 1431/1432/G1 of the excavator 100 that imitate every moment of a posture of the excavator 100 including a posture of the attachments 8/9/6, since the graphic shapes are animations that move in conjunction with the actual movement of the excavator (Sakuta, see at least col. 29, lines 39-51, "Specifically, the graphic shapes 1431 and 1432 may be generated to represent the actual orientation of the shovel 100. In this case, the graphic shapes 1431 and 1432 may be animations that move in conjunction with the actual movement of the shovel 100"), in combination with the camera image display part 1420, to synthesize images Gx1/Gx2 displayed on the display device DS as illustrated in Figs. 14-16.
In view of Sakuta's teachings, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to include, with the system as taught by Kurosawa, (re claim 1) generating a self-propelled robot simulated image that imitates every moment of a posture of the self-propelled robot including a posture of the robotic arm, with reasonable expectation of success, since Sakuta teaches generating graphic shapes of the excavator that are animations moving in conjunction with the actual movement of the excavator. This modification would allow the operator to visually identify whether the attachment is approaching an obstacle located in an outer region of the attachment (Sakuta, see at least col. 28, lines 32-52).

[Figures reproduced in the original action: Kurosawa, Figs. 8A-B and 11; Sakuta, Figs. 14-16.]

Regarding claim 2, the combination of Kurosawa and Sakuta teaches all the limitations of claim 1. The combination of Kurosawa and Sakuta further teaches wherein the robotic arm includes one or more motors that drive the one or more joints, respectively (Kurosawa, see at least Fig. 2, col. 3, lines 60-67, col. 4, lines 1-5; Sakuta, see at least Fig. 1A, col. 11, lines 59-67, hydraulic actuators driving the boom, bucket, and arm), and one or more rotation angle detectors that detect rotation angle(s) of the one or more motors, respectively (Kurosawa, see at least Figs. 1A, 3A-B, col. 7, lines 57-64, col. 11, lines 11-26, "a boom angle sensor S1, an arm angle sensor S2, a bucket angle sensor S3, a machine tilt sensor S4, a turning state sensor S5"; Sakuta, see at least Fig. 1A, col. 2, lines 64-67, col. 3, lines 1-7, "A boom angle sensor S1 is attached to the boom 4, and an arm angle sensor S2 is attached to the arm 5. A bucket angle sensor S3 is attached to the bucket 6. The excavation attachment may have a bucket tilt mechanism. The boom angle sensor S1, the arm angle sensor S2, and the bucket angle sensor S3 may be referred to as 'orientation sensors'"), and wherein the processing circuitry generates the self-propelled robot simulated image based on at least the rotation angle(s) detected by the one or more rotation angle detectors (Sakuta, see at least Fig. 14, col. 25, lines 58-67, col. 26, lines 1-21, lines 29-27, the generated graphic shapes 1431/1432/G1 of the excavator 100 are displayed such that they move based on data related to the orientation of the shovel 100 and data related to the orientation of the excavation attachment, i.e., the pitch angle, the roll angle, the yaw angle (turning angle) of the upper turning body 3, a boom angle, an arm angle, and a bucket angle).

Regarding claim 3, the combination of Kurosawa and Sakuta teaches all the limitations of claim 1. The combination of Kurosawa and Sakuta further teaches wherein, when the processing circuitry generates the synthesized image of the first-person viewpoint that is looked from the self-propelled robot (Kurosawa, see at least Figs. 1A, 5, col. 14, lines 29-39, the controller 30 is configured to generate the synthesized image 500 of a first-person viewpoint that is looked from the excavator 100 based on the captured images from the imaging device 80), the processing circuitry generates the self-propelled robot simulated image so that an arm imitation part that imitates at least a part of a portion of the robotic arm of the self-propelled robot, that is not displayed in the circumference situation image, is connected with a part of the robotic arm displayed in the circumference situation image (Kurosawa, see at least Figs. 1A, 5, col. 14, lines 29-67, col. 15, lines 1-36, the controller 30 is configured to generate the excavator 100 simulated image 510a representing the shape of the excavator 100 with the arm imitation part as described in Fig. 5), and the processing circuitry generates the synthesized image of the first person viewpoint so that the arm imitation part in the generated self-propelled robot simulated image is connected with the part of the robotic arm displayed in the circumference situation image (Kurosawa, see at least Figs. 1A, 5, col. 14, lines 29-67, col. 15, lines 1-36, the controller 30 is configured to generate the excavator 100 simulated image 510a representing the shape of the excavator 100 with the arm imitation part as described in Fig. 5).

Regarding claim 5, the combination of Kurosawa and Sakuta teaches all the limitations of claim 1. The combination of Kurosawa and Sakuta further teaches wherein the processing circuitry generates the synthesized image in which an arm animation indicative of a change in the posture of the robotic arm of the self-propelled robot is displayed so as to be superimposed on the circumference situation image (Sakuta, see at least Fig. 15, col. 29, lines 39-51, col. 28, lines 32-52, the graphic shapes 1431 and 1432 in the synthesized image 1430 that include the arm/attachment of the excavator 100; the graphic shapes 1431 and 1432 are animations that move in conjunction with the actual movement of the excavator 100, and the synthesized image 1430 is superimposed on the circumference situation image 1420).

Regarding claim 6, the combination of Kurosawa and Sakuta teaches all the limitations of claim 1. The combination of Kurosawa and Sakuta further teaches wherein the processing circuitry determines whether the robotic arm interferes with an object around the self-propelled robot based on the circumference situation image captured by the plurality of circumference cameras, and the posture of the self-propelled robot (Kurosawa, see at least Figs. 7, 8A-B, col. 24, lines 23-62, the controller 30 configured to determine whether the excavator 100 is likely to contact the virtual wall VW based on the image data captured by the plurality of circumference cameras 70/80 and the posture of the excavator 100), and when the processing circuitry determines that the robotic arm interferes with the object, the processing circuitry outputs an interference warning signal (Kurosawa, see at least col. 24, lines 63-67, "The controller 30 may output an alarm when the excavator 100 is likely to contact the virtual wall VW so as not to contact the non-existent virtual wall VW"; col. 25, lines 1-14, "For example, when the distance between the virtual wall VW and the excavator 100 (the lower traveling body 1, the upper turning body 3, the attachment AT, or the like) falls below a predetermined threshold value, the controller 30 may output a control signal to the voice sound output device D2 to output an alarm sound. At this time, the controller 30 can identify the position of the excavator 100 in the setting coordinate system based on the positioning result of a positioning device such as the GNSS device mounted on the upper turning body 3 and determine the positional relationship with the virtual wall VW. Further, the controller 30 may output different alarm sounds of plural levels as the distance between the virtual wall VW and the excavator 100 decreases. Further, the controller 30 may output an alarm based on an information image through the display device D1").

Regarding claim 7, the combination of Kurosawa and Sakuta teaches all the limitations of claims 1 and 6. The combination of Kurosawa and Sakuta further teaches wherein the display displays an image indicative of an interference warning according to the outputted interference warning signal (Sakuta, see at least Fig. 16, col. 30, lines 15-56, a graphic shape 1436/1437/1438 indicates the position of a part of the excavator 100 having a possibility of contacting an object).

Regarding claim 8, the combination of Kurosawa and Sakuta teaches all the limitations of claims 1 and 6. The combination of Kurosawa and Sakuta further teaches further comprising an interference warning informer that is disposed separately from the display and informs an interference warning according to the outputted interference warning signal (Kurosawa, col. 25, lines 1-14, "For example, when the distance between the virtual wall VW and the excavator 100 (the lower traveling body 1, the upper turning body 3, the attachment AT, or the like) falls below a predetermined threshold value, the controller 30 may output a control signal to the voice sound output device D2 to output an alarm sound. At this time, the controller 30 can identify the position of the excavator 100 in the setting coordinate system based on the positioning result of a positioning device such as the GNSS device mounted on the upper turning body 3 and determine the positional relationship with the virtual wall VW. Further, the controller 30 may output different alarm sounds of plural levels as the distance between the virtual wall VW and the excavator 100 decreases. Further, the controller 30 may output an alarm based on an information image through the display device D1").

Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Kurosawa (US 11926994 B2), in view of Sakuta et al. (US 11946223 B2, hereinafter "Sakuta") as applied to claim 1 above, and further in view of Hoffman et al. (US 9283674 B2, hereinafter "Hoffman").

Regarding claim 4, the combination of Kurosawa and Sakuta teaches all the limitations of claim 1. The combination of Kurosawa and Sakuta fails to explicitly teach wherein the processing circuitry generates the synthesized image in which a scheduled moving route of the self-propelled robot is superimposed on the circumference situation image. Hoffman teaches, see at least Figs. 1, 2A, 4K, 4L, col. 16, lines 22-29, col. 17, lines 53-67, col. 18, lines 1-4, that the controller is configured to superimpose drive lanes 118 and/or turn lanes 119 on a first-person viewpoint image 120 to indicate where the robot 200 is heading.
In view of Hoffman's teachings, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to include, with the system as taught by Kurosawa and Sakuta, (re claim 4) wherein the processing circuitry generates the synthesized image in which a scheduled moving route of the self-propelled robot is superimposed on the circumference situation image, with reasonable expectation of success, since Hoffman teaches superimposing drive lanes and/or turn lanes on a first-person viewpoint image to indicate where the robot is heading. This modification would provide the operator a preview offering a perspective of a proposed robot action, such as a drive path (Hoffman, see at least col. 16, lines 22-29, col. 17, lines 53-67, col. 18, lines 1-4).

[Figures reproduced in the original action: Hoffman, Figs. 2A and 4K.]

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Kim (US 20130211592 A1) teaches a tele-operation system and a method enabling a robot arm to move by following a motion of a hand of a user without an additional mechanical apparatus. Hoffmann et al. (US 9880553 B1) teaches a system and method to generate a simulation of a robot performing a selected action; the simulation is then rendered and overlaid on top of the 3D sensor data. Seifert et al. (US 20210041878 A1) teaches a system and method to generate a graphic depicting the robot that is rendered on a top-down scene; the graphic of the robot may move to depict the instantaneous position of the robot while traversing to the target location.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TRANG DANG, whose telephone number is (703) 756-1049. The examiner can normally be reached Monday-Friday, 8:00-5:00. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Khoi Tran, can be reached at (571) 272-6919. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/TRANG DANG/
Examiner, Art Unit 3656

/KHOI H TRAN/
Supervisory Patent Examiner, Art Unit 3656

Prosecution Timeline

Jun 20, 2023 • Application Filed
May 03, 2025 • Non-Final Rejection — §103
Jun 11, 2025 • Interview Requested
Jun 30, 2025 • Applicant Interview (Telephonic)
Jun 30, 2025 • Examiner Interview Summary
Aug 08, 2025 • Response Filed
Oct 08, 2025 • Final Rejection — §103
Dec 02, 2025 • Request for Continued Examination
Dec 18, 2025 • Response after Non-Final Action
Feb 10, 2026 • Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12576884 — RIGHT-OF-WAY-BASED SEMANTIC COVERAGE AND AUTOMATIC LABELING FOR TRAJECTORY GENERATION IN AUTONOMOUS SYSTEMS
2y 5m to grant • Granted Mar 17, 2026

Patent 12559074 — AIRCRAFT SYSTEM
2y 5m to grant • Granted Feb 24, 2026

Patent 12493302 — LONGITUDINAL TRIM CONTROL MOVEMENT DURING TAKEOFF ROTATION
2y 5m to grant • Granted Dec 09, 2025

Patent 12461529 — ROBOT PATH PLANNING APPARATUS AND METHOD THEREOF
2y 5m to grant • Granted Nov 04, 2025

Patent 12429878 — Systems and Methods for Dynamic Object Removal from Three-Dimensional Data
2y 5m to grant • Granted Sep 30, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 44%
With Interview: 75% (+30.7%)
Median Time to Grant: 3y 3m
PTA Risk: High (see the sketch below)

Based on 36 resolved cases by this examiner. Grant probability derived from career allow rate.
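
The High PTA Risk flag is consistent with the projected pendency: under 35 U.S.C. 154(b)(1)(B), pendency beyond three years from filing generally accrues "B-delay" patent term adjustment. The Python sketch below is a deliberately simplified estimate, not how the dashboard necessarily computes its flag: the projected grant date is an assumption (filing date plus the 3y 3m median), and real B-delay calculations exclude, among other things, time consumed by an RCE, which this application's December 2025 RCE would cut off.

    from datetime import date

    FILED = date(2023, 6, 20)            # from the timeline
    projected_grant = date(2026, 9, 20)  # assumption: filing + ~3y 3m median

    # Simplified B-delay: days pending beyond the three-year mark, ignoring
    # statutory exclusions (e.g., RCE time) and any applicant-delay offsets.
    three_year_mark = FILED.replace(year=FILED.year + 3)
    b_delay_days = max(0, (projected_grant - three_year_mark).days)
    print(f"Naive projected B-delay PTA: ~{b_delay_days} days")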
