Prosecution Insights
Last updated: April 19, 2026
Application No. 18/081,989

METHOD FOR CONTROLLING AN AUTONOMOUS ROBOTIC TOOL

Final Rejection — §102, §103
Filed: Dec 15, 2022
Examiner: LEVY, MERRITT E
Art Unit: 3666
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Husqvarna AB
OA Round: 2 (Final)
Grant Probability: 33% (At Risk)
Expected OA Rounds: 3-4
Time to Grant: 3y 7m
With Interview: 70%

Examiner Intelligence

Grants only 33% of cases.
Career Allow Rate: 33% (26 granted / 78 resolved), -18.7% vs TC avg
Interview Lift: strong, +36.6% across resolved cases with an interview
Typical Timeline: 3y 7m average prosecution; 56 applications currently pending
Career History: 134 total applications across all art units
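As a sanity check, the headline figures in this card are mutually consistent. A minimal sketch, assuming the dashboard simply divides grants by resolved cases and that the "vs TC avg" comparison is in percentage points:

```python
# Career allow rate from the raw counts shown above.
granted, resolved = 26, 78
allow_rate = granted / resolved
assert f"{allow_rate:.0%}" == "33%"  # matches the "33% Career Allow Rate" card

# Reading "-18.7% vs TC avg" as percentage points implies a Tech Center
# average allow rate of roughly:
tc_avg = allow_rate + 0.187
print(f"{tc_avg:.1%}")  # ≈ 52.0%
```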

Statute-Specific Performance

§101: 9.3% (-30.7% vs TC avg)
§103: 54.0% (+14.0% vs TC avg)
§102: 16.3% (-23.7% vs TC avg)
§112: 20.0% (-20.0% vs TC avg)
Tech Center averages are estimates. Based on career data from 78 resolved cases.
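The per-statute deltas can be back-computed into the baseline they are measured against. A minimal sketch, assuming each delta is percentage points relative to a single Tech Center average estimate:

```python
# Per-statute rates and their "vs TC avg" deltas, as shown above.
rates  = {"101": 9.3, "103": 54.0, "102": 16.3, "112": 20.0}
deltas = {"101": -30.7, "103": 14.0, "102": -23.7, "112": -20.0}

# Back out the implied Tech Center average: rate - delta.
tc_avg = {s: round(r - deltas[s], 1) for s, r in rates.items()}
# Every statute implies the same ~40.0% baseline, so the displayed
# deltas are internally consistent with one TC-average estimate.
```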

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

This Office action is in response to the communications filed on August 05, 2025. Claims 1-17 and 19 are currently pending, with Claims 1 and 15-16 being amended and Claim 18 being canceled.

Response to Amendments

In response to Applicant’s amendments, filed August 05, 2025, the Examiner withdraws the previous 35 U.S.C. 101 rejection, maintains the previous claim interpretation, and maintains the previous 35 U.S.C. 102 and 103 rejections.

Response to Arguments

Applicant’s arguments filed August 05, 2025, have been fully considered but they are not persuasive. Regarding Applicant’s arguments pertaining to the teachings of Kuffner relating to the modular autonomy unit being separate and having a connector that can transfer the set of instructions (see Pages 5-6 of instant arguments), the Examiner is unpersuaded. Kuffner teaches that the controllers, processors, computers, and robotic tools may be separate entities located on the robot or elsewhere, and that the system may connect (i.e. removably connectable) to other devices to transfer instructions based on sensor data (see at least Col. 9 lines 33-46; Col. 10 lines 27-28; Col. 16 line 55 - Col. 17 line 9; Col. 13 line 60 - Col. 14 line 4 of Kuffner). As such, Kuffner teaches the features of the claims, as written. The Examiner is unpersuaded and maintains the corresponding rejections. The remaining arguments are essentially the same as those addressed above and/or below and are unpersuasive for essentially the same reasons. Therefore, the corresponding rejections are maintained.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination.
– An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: “a modular autonomy control unit” in Claims 1 and 13-19.

Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
Regarding the limitation of a “modular autonomy control unit”, the instant specification at Paragraphs [0025] and [0037] states at least that “the robotic tool 1 is a robotic garden tool 1 comprising control means for driving and/or steering wheels 7 of the robotic garden tool 1 as well as controlling an implement 9 … the autonomy control unit 3 is connected to the robot control means 5 via an interface 11 …” and “the modular autonomy control unit 3 processes, using a processor 28, an error vector that is used to update calibration data in a memory 29 …”. Figure 1 of the instant drawings shows a vehicle capable of autonomous movement, which can connect with another component. The structure for the modular autonomy control unit is a processor, storing instructions, capable of moving the robot or updating commands.

If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of the appropriate paragraphs of 35 U.S.C.
102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-4, 13-17, and 19 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by U.S. Patent No. 10,207,407 B1, to Kuffner (hereinafter referred to as Kuffner; previously of record).

As per Claim 1, and similarly for Claims 15 and 16, Kuffner discloses the features of a method for controlling an autonomous robotic tool using a modular autonomy control unit (e.g. Col. 14 lines 14-25; Col. 15 lines 6-13; where the robotic system may be configured to operate autonomously, semi-autonomously, and/or by providing control to user(s) through various interfaces; and where the robotic device system may include sensors which receive information about various features of the robotic system, such as the operation of extendable legs, arms, and/or other mechanical and/or electrical features of the robotic system) having an interface with the autonomous robotic tool (e.g. Col. 15 line 56 - Col. 16 line 4; Col. 17 lines 34-37; Figure 5; where the modules or robotic devices may be interconnected and/or may communicate to receive data or instructions from each other so as to provide a specific output or functionality for the robot (i.e. interface with another device); and where the robot can interface with actuators, legs, arms, etc.) and comprising a processor configured to control the autonomous robotic tool during operation (e.g. Col. 14 lines 26-29; where the robotic system (300) includes processor(s) (302), data storage (304), program instructions (306), controller (308), sensor(s) (310), mechanical and electrical components (314, 316)), the method comprising the following steps: the modular autonomy control unit transferring a set of test instructions to the autonomous robotic tool (e.g. Col. 17 lines 34-45; Col. 22 lines 40-48; Col. 24 lines 3-11; where the modules or robotic devices may receive data or instructions from each other; and where a given set of instructions stored in the library may be sent to the robotic device having a given configuration to perform operations relating to completing a task), the autonomous robotic tool carrying out a set of test actions in response to the test instructions (e.g. Col. 17 lines 34-45; Col. 20 line 61 - Col. 21 line 13; Col. 24 lines 3-11 and lines 57-67; where the modules or robotic devices may receive data or instructions from each other; and where a given set of instructions stored in the library may be sent to the robotic device having a given configuration to perform operations relating to completing a task; and where the library computing system provides the identified set of instructions to the robotic device to perform the task, and the robot may use the instructions to perform multiple operations to complete the task), the modular autonomy control unit detecting sensor input in response to the test actions (e.g. Col. 14 lines 27-29 and lines 45-55; Col. 15 lines 4-14; where the robotic system (300) includes sensor(s) (310), which may provide sensor data to the processor(s) (302) to allow for appropriate interaction of the robotic system with the environment; and where the sensors may measure the states of activities of various components of the robotic system, such as the operation of legs, arms, etc.), computing a corresponding error vector (e.g. Col. 6 lines 3-18; Col. 6 line 60 - Col. 7 line 4; Col. 15 lines 11-19; Col. 25 lines 7-19; where robotic devices may access the information stored within the library to use for identifying possible errors; and where the sensor data may enable the controller to determine errors in operation as well as monitor overall functioning of components of the robotic system), and updating calibration data based on the error vector (e.g. Col. 4 line 64 - Col. 5 line 4; Col. 6 lines 3-18; Col. 25 lines 39-48; where the library may provide instructions to the robotic devices for completing desired tasks or systems calibration), and the modular autonomy control unit controlling the robotic tool based on the calibration data (e.g. Col. 25 lines 7-19; where the computing system may modify the instructions prior to sending the instructions to the robotic device to process and use), wherein the modular autonomy control unit is a separate unit (e.g. Col. 10 lines 47-52; Col. 14 lines 26-36; Col. 16 line 55 - Col. 17 line 3; where the system may include one or more controllers that may serve as a link between components of the system; where the robotic system (300) includes processor(s) (302), data storage (304), program instructions (306), controller (308), sensor(s) (310), and the components of the robotic system (200) may be positioned on multiple entities (i.e. separate); and where the robotic device may be configured to receive a mobile telephone, smartphone, tablet computer, etc. to function as the brains or control components of the robot, and may be considered a module of the robot) comprising a connector arrangement that is removably connectable to the interface (e.g. Col. 9 lines 33-46; Col. 16 line 55 - Col. 17 line 9; where the device may be physically attached to the robot or coupled to another device to provide additional sensing capabilities; and where the robot client (118) may include one or more interfaces that allow various types of user-interface devices to be connected to the robot (118), and may include wired and wireless connection capabilities) to, when connected, transfer the set of test instructions (e.g. Col. 13 line 60 - Col. 14 line 4; Col. 23 lines 39-41; where the communication interface (212) may enable wired or wireless access by robotic devices and provide instructions to the robotic devices) and detect the sensor inputs (e.g. Col. 10 lines 27-28; Col. 14 lines 60-62; Col. 18 lines 29-39; where the sensor(s) (310) may provide information indicative of the environment of the robotic device for the controller (308) and/or computing system, and may capture terrain information or location data of nearby objects, detect obstacles, etc.; and where the instructions can include information relating to how to pick up an object; and where the processor(s) (202) may execute programs or processes as a result of receiving inputs, such as sensor data).

As per Claim 2, Kuffner discloses the features of Claim 1, and Kuffner further discloses the features of wherein said test actions include a movement of the autonomous robotic tool (e.g. Col. 24 lines 28-36 and lines 42-44; where the robotic device may be configured to open a door by turning a door knob (i.e. tool movement)).

As per Claim 3, Kuffner discloses the features of Claim 2, and Kuffner further discloses the features of wherein, during the movement of the autonomous robotic tool, a position of at least one external object is detected, the position being included in sensor input (e.g. Col. 14 lines 60-62; Col. 18 lines 29-39; where the sensor(s) (310) may provide information indicative of the environment of the robotic device for the controller (308) and/or computing system, and may capture terrain information or location data of nearby objects, detect obstacles, etc.; and where the instructions can include information relating to how to pick up an object).

As per Claim 4, Kuffner discloses the features of Claim 2, and Kuffner further discloses the features of wherein the movement includes a turning of the autonomous robotic work tool (e.g. Col. 14 lines 60-62; Col. 18 lines 29-39; where the sensor(s) (310) may provide information indicative of the environment of the robotic device for the controller (308) and/or computing system, and may capture terrain information or location data of nearby objects, detect obstacles, etc.; and where the instructions can include information relating to how to pick up an object).

As per Claim 13, Kuffner discloses the features of Claim 1, and Kuffner further discloses the features of wherein the modular autonomy control unit is further adapted to detect an identity of an implement connected to the autonomous robotic tool (e.g. Col. 6 lines 53-56; Col. 5 lines 40-46; Col. 6 lines 23-29; where the robotic device may change configurations (e.g., add or change components) using simulation results to enable the robot to perform the task based on its configuration; and where the robotic device sends configuration information (i.e. identifies its attachments or configuration) and environmental data to the library computing system, which then identifies and sends back the appropriate instructions to the robot).

As per Claim 14, Kuffner discloses the features of Claim 1, and Kuffner further discloses the features of wherein the modular autonomy control unit receives sensor data from both the robotic work tool and sensors integrated with the autonomous robotic tool (e.g. Col. 14 lines 27-29 and lines 45-55; Col. 15 lines 4-14 and lines 28-37; where the robotic system (300) includes sensor(s) (310), which may provide sensor data to the processor(s) (302) to allow for appropriate interaction of the robotic system with the environment; and where the sensors may measure the states of activities of various components of the robotic system, such as the operation of legs, arms, etc.).

As per Claim 17, Kuffner discloses the features of Claim 16, and Kuffner further discloses the features of wherein the modular autonomy control unit is configured to receive sensor data from sensors in the robotic work tool and comprises sensors integrated with the modular autonomy control unit (e.g. Col. 14 lines 27-29 and lines 45-55; Col. 15 lines 4-14 and lines 28-37; where the robotic system (300) includes sensor(s) (310), which may provide sensor data to the processor(s) (302) to allow for appropriate interaction of the robotic system with the environment; and where the sensors may measure the states of activities of various components of the robotic system, such as the operation of legs, arms, etc.).

As per Claim 19, Kuffner discloses the features of Claim 16, and Kuffner further discloses the features of wherein the modular autonomy control unit is integrated with the autonomous robotic tool (e.g. Col. 14 lines 27-29 and lines 45-55; Col. 15 lines 4-14 and lines 28-37; where the robotic system (300) includes sensor(s) (310), which may provide sensor data to the processor(s) (302) to allow for appropriate interaction of the robotic system with the environment; and where the sensors may measure the states of activities of various components of the robotic system, such as the operation of legs, arms, etc.) (i.e. components or tools are integrated).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 5, 7-8, 10, and 12 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent No. 10,207,407 B1, to Kuffner (hereinafter referred to as Kuffner; previously of record), in view of U.S. Patent Publication No. 2020/0225673 A1, to Afrouzi, et al (hereinafter referred to as Afrouzi; previously of record).

As per Claim 5, Kuffner discloses the features of Claim 4, but Kuffner fails to disclose every feature of wherein the turning includes a 360 degrees turn of the autonomous robotic work tool. However, Afrouzi, in a similar field of endeavor, teaches an obstacle recognition method for autonomous robots, where the robot attempts to map the environment by rotating 360 degrees in its initial position (e.g. Paragraphs [0169], [0236]). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the Applicant’s invention, with a reasonable expectation for success, to modify the robotic operation libraries system of Kuffner, with the feature of turning the robot 360 degrees in the system of Afrouzi, in order to attempt to more accurately map the environment and localize itself (see at least Paragraph [0242] of Afrouzi).

As per Claim 7, Kuffner discloses the features of Claim 3, but Kuffner fails to disclose every feature of wherein said at least one external object is a wall. However, Afrouzi, in a similar field of endeavor, teaches an obstacle recognition method for autonomous robots, where the robot can construct a map of the environment by detecting furniture, obstacles, static objects, moving objects, walls, ceilings, fixtures, perimeters, etc. (e.g. Paragraph [0195]). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the Applicant’s invention, with a reasonable expectation for success, to modify the robotic operation libraries system of Kuffner, with the feature of determining walls in the system of Afrouzi, in order to attempt to more accurately map the environment and localize itself (see at least Paragraph [0242] of Afrouzi).

As per Claim 8, Kuffner discloses the features of Claim 3, but Kuffner fails to disclose every feature of wherein said at least one external object is at least one pole or beacon. However, Afrouzi, in a similar field of endeavor, teaches an obstacle recognition method for autonomous robots, where the robot acquires signals from a beacon located within the space (e.g. Paragraph [0154]). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the Applicant’s invention, with a reasonable expectation for success, to modify the robotic operation libraries system of Kuffner, with the feature of determining beacon locations in the system of Afrouzi, in order to attempt to more accurately map the environment and localize itself (see at least Paragraph [0242] of Afrouzi).

As per Claim 10, Kuffner discloses the features of Claim 3, but Kuffner fails to disclose every feature of wherein at least a moving external object is detected, a position of the moving external object being included in sensor input. However, Afrouzi, in a similar field of endeavor, teaches an obstacle recognition method for autonomous robots, where the robot can construct a map of the environment by detecting furniture, obstacles, static objects, moving objects, walls, ceilings, fixtures, perimeters, etc.; and the robot may build a map indicating the position of the detected objects (e.g. Paragraphs [0195], [0217]). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the Applicant’s invention, with a reasonable expectation for success, to modify the robotic operation libraries system of Kuffner, with the feature of determining a moving object in the system of Afrouzi, in order to attempt to more accurately map the environment and localize itself (see at least Paragraph [0242] of Afrouzi).

As per Claim 12, Kuffner, in view of Afrouzi, teaches the features of Claim 10, and Kuffner further discloses the features of wherein the moving external object is an auxiliary robotic tool (e.g. Col. 15 lines 7-29; Col. 15 line 58 - Col. 16 line 3; where the robotic device system may include sensors which receive information about various features of the robotic system, such as the operation of extendable legs, arms, and/or other mechanical and/or electrical features of the robotic system (i.e. auxiliary tools)).

Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent No. 10,207,407 B1, to Kuffner (hereinafter referred to as Kuffner; previously of record), in view of Japanese Publication No. 2001166827 A, to Ikeda, et al (hereinafter referred to as Ikeda; previously of record).

As per Claim 6, Kuffner discloses the features of Claim 4, but Kuffner fails to disclose every feature of wherein the turning includes driving the autonomous robotic work tool along an 8-shaped path. However, Ikeda, in a similar field of endeavor, teaches a guide system for a mobile object, where a moving body (e.g., a robot) is guided along a travel route which has a figure-eight shape using recognition marks to improve accuracy (e.g. Paragraphs [0005], [0015]-[0016]; Figure 1). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the Applicant’s invention, with a reasonable expectation for success, to modify the robotic operation libraries system of Kuffner, with the feature of having the robot conduct a figure-8-shaped path in the system of Ikeda, in order to provide better positioning accuracy identification (see at least Paragraph [0007] of Ikeda).

Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Kuffner, in view of Afrouzi, as applied to Claim 8 above, and further in view of U.S. Patent Publication No. 2023/0219237 A1, to Galluzzo, et al (hereinafter referred to as Galluzzo; previously of record).

As per Claim 9, Kuffner, in view of Afrouzi, teaches the features of Claim 8, but Kuffner, in view of Afrouzi, fails to teach every feature of wherein said at least one pole or beacon comprises an identifier in the group QRC, bar code, strobe light LED, and calibration image. However, Galluzzo, in a similar field of endeavor, teaches an autonomous mobile robot for picking up and putting away items, where fiducial markers such as 1D bar codes (i.e. barcode) and 2D bar codes (i.e. QR codes) may be used to tag and map location information of an object or shelf; and where the system may include a specific pattern of blinking or strobe safety lights to help the robot navigate; and where the system may comprise a calibration target which may be viewed by one or more sensors on the manipulator arm (620) to confirm the sensor parameters are within specifications (e.g. Paragraphs [0050], [0058], [0128], [0155], [0208]). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the Applicant’s invention, with a reasonable expectation for success, to further modify the robotic operation libraries system of Kuffner, in view of Afrouzi, with the feature of having different types of fiducial markers in the system of Galluzzo, in order to correlate position and identity information of an object (see at least Paragraph [0019] of Galluzzo).

Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Kuffner, in view of Afrouzi, as applied to Claim 10 above, and further in view of U.S. Patent Publication No. 2013/0340560 A1, to Burridge, et al (hereinafter referred to as Burridge; previously of record).

As per Claim 11, Kuffner, in view of Afrouzi, teaches the features of Claim 10, but Kuffner, in view of Afrouzi, fails to teach every feature of wherein the robotic tool is stationary while detecting the moving external object. However, Burridge, in a similar field of endeavor, teaches a method for a reconfigurable robotic manipulator, where the robot has a base section and a rotatable or movable section of a joint, and the base station remains stationary relative to the robot when the joint operates to move the other, rotatable section (e.g. Paragraph [0082]). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the Applicant’s invention, with a reasonable expectation for success, to further modify the robotic operation libraries system of Kuffner, in view of Afrouzi, with the feature of having the robot be stationary while a tool or joint is rotating in the system of Burridge, in order to improve the ability of operators and equipment to grip an object or surface (see at least Paragraph [0058] of Burridge).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: High, et al (U.S. 2016/0259341 A1), which teaches having a robot interface with another mobile vehicle to navigate an area. Ratanaphanyarat, et al (U.S. 2017/0038772 A1), which teaches a modular robot that is capable of performing a variety of functions and tasks, and has an interchangeable attachment that may be attached and removed from the main body.

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MERRITT E LEVY whose telephone number is (571) 270-5595. The examiner can normally be reached Mon-Fri 0630-1600. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Helal Algahaim, can be reached at (571) 270-5227. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MERRITT E LEVY/ Examiner, Art Unit 3666
/HELAL A ALGAHAIM/ SPE, Art Unit 3666

Prosecution Timeline

Dec 15, 2022 — Application Filed
Apr 21, 2025 — Non-Final Rejection (§102, §103)
Aug 05, 2025 — Response Filed
Sep 08, 2025 — Final Rejection (§102, §103) — current

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12601596 — Estimation of Target Location and Sensor Misalignment Angles — granted Apr 14, 2026 (2y 5m to grant)
Patent 12603005 — Driver Assistance Module for a Motor Vehicle — granted Apr 14, 2026 (2y 5m to grant)
Patent 12594944 — Method and System for Vehicle Drive Mode Selection — granted Apr 07, 2026 (2y 5m to grant)
Patent 12594960 — Navigational Constraint Control System — granted Apr 07, 2026 (2y 5m to grant)
Patent 12583382 — Synchronized Lighting for Electric Vehicles — granted Mar 24, 2026 (2y 5m to grant)
Study what changed to get past this examiner; based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 33%
With Interview: 70% (+36.6%)
Median Time to Grant: 3y 7m
PTA Risk: Moderate
Based on 78 resolved cases by this examiner. Grant probability derived from career allow rate.
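The interview-adjusted figure follows directly from the examiner's base rate and interview lift. A minimal sketch, assuming the lift is additive in percentage points on top of the career allow rate:

```python
# Base grant probability: the examiner's career allow rate (26 of 78).
base_rate = 26 / 78            # ~33%
interview_lift = 0.366         # +36.6 percentage points with an interview
with_interview = base_rate + interview_lift
assert f"{with_interview:.0%}" == "70%"  # matches the projection shown
```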
