DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Status of Claims
Claims 1-20 are currently pending and are examined herein. Claims 1-5, 7-15, 17, and 19-20 are amended.
Response to Amendment / Remarks
Any reference to the prior office action refers to the Non-Final Rejection dated 26 November 2025.
The objections from the prior office action are withdrawn in view of the amendments.
Applicant's arguments, filed 19 February 2026, with respect to the prior art rejections from the prior office action, have been fully considered but they are not persuasive. The references cited in the prior office action disclose, teach, and/or suggest the amendments to the claims.
In particular, U.S. Pub. No. 2024/0391038 (Srikanth et al., hereinafter, Srikanth) discloses determining classification of device condition (i.e., damage) and determining confidence in disassembly based on classifications (see at least [0027] and [0045]: “any suitable number of different categories may be maintained by the demanufacturing system… categories may refer to… device condition indicators (e.g., damaged, nonfunctional, components missing)”; “each agent is presented with information on a potential disassembly job, including information regarding the device to be disassembled (e.g., the specific categories applied to the device based on sensor data), and a final goal state (e.g., a specific set of disassembled parts). Each agent may then place a bid to take the job, where each bid specifies that agent's confidence level and estimated cost for successful completion”). Srikanth is a robotic system incorporating machine learning / artificial intelligence (see at least [0022] and [0030]) and is not a “conventional dismantling system” with merely “known devices and planned mechanical steps”; therefore, Applicant’s arguments regarding the lack of motivation to combine are not persuasive. The amended limitations are fully mapped to Srikanth and the other references cited in the prior office action below.
Joint Inventors
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Priority / Effective Filing Date
Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d).
If, during prosecution, the Examiner determines there is not adequate support for a claim limitation in the foreign priority applications (which, after initial review, appear similar in length and subject matter to the current application but do not appear to correspond directly to it), an intervening reference (e.g., a publication with a public availability date between 29 November 2023 and 31 July 2024) may be applied. In that event, Applicant may rebut the rejection by providing a certified English translation of one or more of the foreign priority applications showing support for the claim limitation that pre-dates the applied art; if the Examiner is persuaded by the evidence, the rejection will be withdrawn.
Application Number      Filing Date         Date Received by USPTO
KR 10-2023-0169608      29 November 2023    27 August 2023
KR 10-2023-0129609      29 November 2023    27 August 2023
KR 10-2023-0169610      29 November 2023    27 August 2023
KR 10-2023-0169611      29 November 2023    27 August 2023
KR 10-2023-0169612      29 November 2023    27 August 2023
KR 10-2023-0173261      4 December 2023     27 August 2023
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 19 February 2026 has been considered by the examiner.
Claim Objections
The claims are objected to for the following informalities:
Claim 6 should be amended, in view of the amendments to Claim 1, to clarify whether “at least one unit action” refers to the same element in both claims. Because there are only a finite number of reasonable interpretations, no rejection under 35 U.S.C. 112(b) has been issued.
Claim 16 should be amended, in view of the amendments to Claim 11, to clarify whether “at least one unit action” refers to the same element in both claims. Because there are only a finite number of reasonable interpretations, no rejection under 35 U.S.C. 112(b) has been issued.
Appropriate correction is required.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-3, 6-13, and 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over Srikanth in view of U.S. Pub. No. 2024/0372167 (hereinafter, Kim).
Regarding Claim 1, Srikanth discloses A method for operating a robot system that controls operation of at least one robot apparatus to perform a task (see at least [0003], FIG. 1, and FIG. 2), the method comprises:
receiving, by at least one processor in the robot system, a control signal which indicates to dismantle a first component of an electronic device (see at least [0014]-[0015], [0020], [0025], [0062], and FIG. 1: “Once the HDD is classified and the job criteria are defined, it is assigned to one or more robotic disassembly agents for demanufacturing (e.g., according to the bidding process described above)”);
obtaining sensing data associated with the electronic device from at least one sensor (see at least [0024], [0028], [0114]-[0115], and FIG. 2: “Method 200 includes, at 202, receiving a target electronic device for disassembly. This generally refers to the computerized disassembly system detecting presence of a new electronic device—e.g., through a change in sensor data (such as an increase in weight measured by a scale, detection of a new object or barcode by a camera), and/or via manual human input that a new device has been delivered for disassembly. The target electronic device may be transported to machines of the computerized demanufacturing system for sensor data collection and/or disassembly in any suitable way. For instance, the target electronic device may be transported using a suitable conveyor system, moved by a robotic arm (e.g., using a gripper, suction, or magnet tool), or carried by a human, as non-limiting examples.”);
identifying, based on the sensing data, whether at least a portion of the electronic device is damaged, and determining whether to perform the task based on an extent of the damage (see at least [0027] and [0041]-[0045]: “It will be understood that any suitable number of different categories may be maintained by the demanufacturing system, and any suitable number of categories may be applied to any particular target electronic device (e.g., one category, zero categories, two or more categories)”; “As non-limiting examples, categories may refer to different device types (e.g., hard drive, solid state drive, non-volatile storage, battery-backed RAM storage), different device form factors (e.g., 3.5″, 2.5″, surface mount chip), different device manufacturers, specific device identifiers (e.g., serial numbers, product names, unique hardware identifiers, batch numbers), device condition indicators (e.g., damaged, nonfunctional, components missing), accessory presence (e.g., device includes a hot-swap caddy), etc.”; “each agent is presented with information on a potential disassembly job, including information regarding the device to be disassembled (e.g., the specific categories applied to the device based on sensor data), and a final goal state (e.g., a specific set of disassembled parts). Each agent may then place a bid to take the job, where each bid specifies that agent's confidence level and estimated cost for successful completion”);
inputting the sensing data into an artificial intelligence engine and calculating a similarity between the sensing data and at least one trained case to determine whether the task is performable (see at least [0030]-[0031] and [0053]: “a supervised learning system is used to categorize the target electronic device. For instance, in some examples, the one or more device categories are applied to the target electronic device by a machine learning model previously trained to classify electronic devices as belonging to one or more electronic device categories based on input sensor data”; “The output of the classification process is in some examples expressed as a probability distribution of the device against each category. For instance, a category may be applied to the target electronic device if the probability distribution for that category exceeds a confidence threshold. If no categories exceed the confidence threshold, then the system in some examples prompts a human operator to select a category for the target electronic device, and/or to create a new category and set the target electronic device as the first example for the new category”; “As examples, a job may be assigned to a human operator upon determining that a particular device cannot be classified with sufficiently high confidence, there are no agents that can complete the job for less than the maximum cost, an unsuccessful disassembly attempt was made, and/or an error state (such as a security or safety issue) is detected”),
identifying a predetermined point on the electronic device based on sensing data (see at least [0063], [0095], [0114]-[0115], [0118], and [0138]-[0139]: “the agent identifies a lid having an approximate center of mass at a detected coordinate”);
estimating a pose of the electronic device based on a locational relationship between the identified predetermined point and the at least one robot apparatus (see at least [0063], [0114]-[0115], and [0138]-[0139]: “As the hard drive is raised, computer vision is used to calculate the angles of the lid relative to the puck's surface, and is used as feedback to refine future suction attach points as the center of mass is learned over time. In some cases, the drive would be lowered, and the suction point would be repositioned and attempted again until the lid is sufficiently parallel to the puck.”);
adjusting a position of the at least one robot apparatus or the electronic device such that the locational relationship between the predetermined point and the at least one robot apparatus satisfies a predetermined criterion (see at least [0114]-[0115] and [0138]-[0139]: “During demanufacturing, in some examples, the robotic disassembly agent determines the best initial state of the device (e.g., in terms of position and orientation), and may use one or more effectors to move the device into its initial state. In some examples, the initial state is a known initial state—e.g., derived from prior disassembly of similar devices by the same or other agents, and/or specified by a human operator. There may be more than one valid initial state, in which case the agent may choose the initial state in any suitable way (e.g., according to a probability distribution based on prior successes at lowest cost).”);
identifying, based on sensing data, a position of at least one of a plurality of fastening members for securing the first component (see at least [0118], [0138]-[0140], and FIG. 6: “the robotic disassembly agent may use suitable sensor data (e.g., a computer vision system) to classify one or more parts of the target electronic device as interactable elements (e.g., screws, clips, welded seams, solder joints) that can potentially be manipulated by effectors of the agent”; “Based on the generated unscrewing sequence, the agent attempts to remove the detected fasteners. Upon detecting that a screw is stuck to the toolhead (e.g., via a trained computer vision system), then the agent may perform a “remove from toolhead” action-e.g., using a gripper tool, magnet tool, or other suitable tool. Upon detecting that a particular screw is not turning, the agent attempts to determine whether the screw is stripped, and/or if the bit size is incorrect. Screw stripping may in some cases be detected via computer vision, as one example”);
determining a dismantling sequence for the plurality of fastening members, and identifying a position of a first fastening member among the plurality of fastening members to set the first fastening member to a first target location (see at least [0111] and [0138]-[0140]: “Fastener Pattern: A “fastener pattern” may refer to a group of one or more fasteners. In some examples, the system calculates a centroid between all fasteners of the pattern and sums the distance from the centroid to each fastener. Stored information may include coordinates of the centroid, the total number of fasteners in the pattern, the best discovered optimal fastener removal action sequence based upon prior agent learnings, and the sum of the centroid-fastener distances. This can be used to find similar fastener patterns and match historically successful unscrew sequences”);
determining a first trajectory for a first end effector to reach a first location corresponding to the first target location (see at least [0137]-[0141]: completion of the described actions requires a trajectory);
controlling the first end effector to perform a first task of dismantling the first fastening member from the electronic device at the first location, wherein the first task includes at least one unit action (see at least [0133], [0139]-[0141], and FIG. 6: “Based on the generated unscrewing sequence, the agent attempts to remove the detected fasteners. Upon detecting that a screw is stuck to the toolhead (e.g., via a trained computer vision system), then the agent may perform a “remove from toolhead” action-e.g., using a gripper tool, magnet tool, or other suitable tool. Upon detecting that a particular screw is not turning, the agent attempts to determine whether the screw is stripped, and/or if the bit size is incorrect. Screw stripping may in some cases be detected via computer vision, as one example. The agent may attempt to replace the tool bit with a larger bit or a different type of bit to determine whether this allows the screw to be turned and removed. In this example, fastener removal continues until all detected fasteners are removed.”);
stopping the operation of the at least one robot apparatus when the first task is not completed after performing the at least one unit action beyond a predetermined threshold (see at least [0053]-[0054] and [0140]: “As examples, a job may be assigned to a human operator upon determining that a particular device cannot be classified with sufficiently high confidence, there are no agents that can complete the job for less than the maximum cost, an unsuccessful disassembly attempt was made, and/or an error state (such as a security or safety issue) is detected. In general, the computerized demanufacturing system may receive input (e.g., feedback, explicit disassembly instructions) from human operators at any time—e.g., before, during, or after execution of a disassembly job. For example, part way through a disassembly job, a human operator may provide explicit instructions that override the current disassembly sequence being run by a robotic disassembly agent”; “Upon detecting that a particular screw is not turning, the agent attempts to determine whether the screw is stripped, and/or if the bit size is incorrect”; “The agent may attempt to replace the tool bit with a larger bit or a different type of bit to determine whether this allows the screw to be turned and removed”; “Any and all applicable and reasonable safety precautions will be implemented to ensure the safety of human workers interacting with robotic disassembly agents, and/or other aspects of the system”);
training the artificial intelligence engine with a failure case of the first task (see at least [0032] and [0053]-[0054]: “As examples, a job may be assigned to a human operator upon determining that a particular device cannot be classified with sufficiently high confidence, there are no agents that can complete the job for less than the maximum cost, an unsuccessful disassembly attempt was made, and/or an error state (such as a security or safety issue) is detected. In general, the computerized demanufacturing system may receive input (e.g., feedback, explicit disassembly instructions) from human operators at any time—e.g., before, during, or after execution of a disassembly job. For example, part way through a disassembly job, a human operator may provide explicit instructions that override the current disassembly sequence being run by a robotic disassembly agent.”; “It will be understood that, in any case where human operators provide input, the system may monitor the success/failure that results from implementing the input, and incorporate the human input into future disassembly tasks (e.g., human input may be used for retraining and refining machine-learning models)”);
after dismantling each of the plurality of fastening members, setting a second target location based on sensing data and determining a second trajectory for a second end effector to reach a second location corresponding to the second target location (see at least [0137]-[0142]: completion of the described actions (e.g., “The agent then makes another attempt to remove the PCB, which is successful”, “The agent next attempts to remove the lid using a pry action, which is successful.”) requires a trajectory and second target location); and
controlling the second end effector to perform a second task of dismantling the first component at the second location (see at least [0016], [0133], and [0137]-[0142]: “The agent then makes another attempt to remove the PCB, which is successful”; “The agent next attempts to remove the lid using a pry action, which is successful.”; “A robotic disassembly agent uses various effectors (e.g., tools such as screwdrivers, clamps, suction cups, desoldering tools) to manipulate various interactable elements of an electronic device to carry out a sequence of one or more disassembly steps.”; “During demanufacturing, in some examples, the robotic disassembly agent determines the best initial state of the device (e.g., in terms of position and orientation), and may use one or more effectors to move the device into its initial state. In some examples, the initial state is a known initial state—e.g., derived from prior disassembly of similar devices by the same or other agents, and/or specified by a human operator. There may be more than one valid initial state, in which case the agent may choose the initial state in any suitable way (e.g., according to a probability distribution based on prior successes at lowest cost)”).
Srikanth does not explicitly disclose an electronic device being dismantled is a battery.
Kim, in the same field of robot controls, and therefore analogous art, teaches an electronic device being dismantled is a battery (see at least [0007]: “automated battery disassembly system”).
It would have been obvious, before the effective filing date of the invention, with a reasonable expectation of success, to one having ordinary skill in the art, to apply the system, method, and non-transitory computer-readable medium of Srikanth, disclosed for a generic electronic device, to the specific electronic device of a battery, with the sub-components of a battery (e.g., cells and modules) of Kim, “to effectively recycle the battery pack of the electric vehicle, and … to prevent or minimize exposure of workers to hazardous substances that may be generated during the disassembly of the battery” (see at least Kim [0081]). Furthermore, Srikanth intended its system, method, and non-transitory computer-readable medium to be applied to multiple electronic devices (see at least [0014]-[0015]).
Regarding Claim 2, the Srikanth and Kim combination teaches the limitations of Claim 1. Furthermore, Srikanth discloses wherein the first target location is determined by identifying, based on sensing data, at least one of a position coordinate of the first fastening member, a direction to the first fastening member, and a distance to the first fastening member (see at least [0111] and [0138]-[0140]: “Fastener Pattern: A “fastener pattern” may refer to a group of one or more fasteners. In some examples, the system calculates a centroid between all fasteners of the pattern and sums the distance from the centroid to each fastener. Stored information may include coordinates of the centroid, the total number of fasteners in the pattern, the best discovered optimal fastener removal action sequence based upon prior agent learnings, and the sum of the centroid-fastener distances. This can be used to find similar fastener patterns and match historically successful unscrew sequences.”).
Regarding Claim 3, the Srikanth and Kim combination teaches the limitations of Claim 1. Furthermore, Srikanth discloses further comprising: identifying, by the at least one processor, a position of each of the plurality of fastening members based on the sensing data (see at least [0095]-[0111] and [0138]-[0140]: “Based on the generated unscrewing sequence, the agent attempts to remove the detected fasteners. Upon detecting that a screw is stuck to the toolhead (e.g., via a trained computer vision system), then the agent may perform a “remove from toolhead” action-e.g., using a gripper tool, magnet tool, or other suitable tool. Upon detecting that a particular screw is not turning, the agent attempts to determine whether the screw is stripped, and/or if the bit size is incorrect. Screw stripping may in some cases be detected via computer vision, as one example. The agent may attempt to replace the tool bit with a larger bit or a different type of bit to determine whether this allows the screw to be turned and removed. In this example, fastener removal continues until all detected fasteners are removed.”); and selecting the first fastening member of the plurality of fastening members based on predetermined selection criteria (see at least [0111] and [0138]-[0140]: “Fastener Pattern: A “fastener pattern” may refer to a group of one or more fasteners. In some examples, the system calculates a centroid between all fasteners of the pattern and sums the distance from the centroid to each fastener. Stored information may include coordinates of the centroid, the total number of fasteners in the pattern, the best discovered optimal fastener removal action sequence based upon prior agent learnings, and the sum of the centroid-fastener distances. This can be used to find similar fastener patterns and match historically successful unscrew sequences.”).
Regarding Claim 6, the Srikanth and Kim combination teaches the limitations of Claim 1. Furthermore, Srikanth discloses wherein the first task comprises at least one unit action for dismantling the first fastening member and the at least one unit action comprises an operation of rotating the first end effector to dismantle the first fastening member (see at least [0016], [0092], [0117], and [0138]-[0141]: the screwdriver unscrews until the fasteners are removed).
Regarding Claim 7, the Srikanth and Kim combination teaches the limitations of Claim 1. Furthermore, Srikanth discloses further comprising: identifying, by the at least one processor, based on the sensing data, whether all of the plurality of fastening members for securing the first component to the battery have been dismantled (see at least [0118], [0133], [0138]-[0141], and FIG. 6: “In some cases, success or failure of an initial disassembly sequence may be confirmed by a human operator, and/or via suitable sensor data (e.g., a computer vision system trained to recognize device components that are intact and successfully separated)”; “In some examples, the robotic disassembly agent is configured to detect whether the overall disassembly job is successful, and/or whether any sub-goals of the disassembly job are successful—e.g., via a machine learning model trained to recognize components of the electronic device that have been successfully disassembled.”; “fastener removal continues until all detected fasteners are removed”).
Regarding Claim 8, the Srikanth and Kim combination teaches the limitations of Claim 7. Furthermore, Srikanth discloses further comprising: controlling, by the at least one processor, the at least one robot apparatus to release the first end effector and mount the second end effector (see at least [0016], [0019], [0067]-[0095], [0115], [0121], [0136], [0167]-[0169], and FIG. 5: “In some examples, one robotic arm supports two or more different tools that are selectable. For instance, the same arm may have multiple different integrated tools that can be switched between or otherwise used independently. As another example, different tools may be removably attachable to the arms-e.g., depending on the current task, a tip, toolhead, attachment, or other aspect of a robotic arm may be replaced or equipped to perform the task”; “Analysis of images from the job may also inform a tool change to a likely screwdriver bit as it waits for the hard drive.”) and Kim teaches (with the motivation to combine being the same as Claim 1) further comprising: controlling, by the at least one processor, the at least one robot apparatus to release the first end effector and mount the second end effector (see at least [0042]: “the robot device 130 may disassemble the battery pack using a robot arm and various tools (e.g., a bolt driver, laser cutter, tongs, saw, etc.) mounted on the robot arm. A laser cutting tool, a bolting processing tool, a cable cutting tool, a cover handling tool, a scrap handling tool, a cable handling tool, and the like are provided around the robot device 130. The robot device 130 may replace the tool coupled to the robot arm as needed.”).
Regarding Claim 9, the Srikanth and Kim combination teaches the limitations of Claim 1. Furthermore, Srikanth discloses wherein the first component comprises a cover (see at least [0061], [0116], and [0142]: lid) and Kim teaches (with the motivation to combine being the same as Claim 1) wherein the first component comprises a cover (see at least [0057] and FIG. 4: “separating an upper cover from the battery pack”).
Regarding Claim 10, the Srikanth and Kim combination teaches the limitations of Claim 1. Furthermore, Kim teaches (with the motivation to combine being the same as Claim 1) wherein the first component comprises a module of the battery (see at least [0009]).
Regarding Claim 11, this claim is substantially similar to Claim 1 and most limitations are rejected for the same reasons as Claim 1. Additionally, Srikanth discloses A robot system which controls the operation of at least one robot apparatus implemented to perform a task based on a control signal (see at least [0020]-[0021], FIG. 1, and FIG. 8), comprising:
a memory (see at least [0156] and FIG. 8: “Storage subsystem 804 includes one or more physical devices configured to temporarily and/or permanently hold computer information such as data and instructions executable by the logic subsystem”); and
at least one processor electronically connected to the memory (see at least [0155] and FIG. 8: “Logic subsystem 802 includes one or more physical devices configured to execute instructions”);
wherein the at least one processor is configured to…(see at least [0153]-[0156] and FIG. 8).
Regarding Claim 12, this claim is substantially similar to Claim 2 and rejected for the same reasons as Claim 2.
Regarding Claim 13, this claim is substantially similar to Claim 3 and rejected for the same reasons as Claim 3.
Regarding Claim 16, this claim is substantially similar to Claim 6 and rejected for the same reasons as Claim 6.
Regarding Claim 17, this claim is substantially similar to Claim 7 and rejected for the same reasons as Claim 7.
Regarding Claim 18, this claim is substantially similar to Claim 8 and rejected for the same reasons as Claim 8.
Regarding Claim 19, the Srikanth and Kim combination teaches the limitations of Claim 11. Furthermore, Kim teaches (with the motivation to combine being the same as Claim 1) wherein the first component comprises at least one of a cover of the battery, a module of the battery, or a cell of battery (see at least FIG. 4).
Regarding Claim 20, the Srikanth and Kim combination teaches the limitations of Claim 1. Furthermore, Srikanth discloses A non-transitory computer readable storage medium storing instructions that, when executed by one or more processors, cause the one or more processors to perform the method (see at least [0153]-[0156] and FIG. 8).
Claims 4 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Srikanth in view of Kim in further view of U.S. Pub. No. 2018/0029226 (Dani et al., hereinafter, Dani).
Regarding Claim 4, the Srikanth and Kim combination teaches the limitations of Claim 1. Furthermore, Srikanth discloses machine learning for sequence generation (see at least [0022], [0120]-[0123], [0139]-[0141], and FIG. 5), but the Srikanth and Kim combination does not explicitly teach wherein the first trajectory is determined by inputting the sensing data into the artificial intelligence engine and obtaining a set of dynamic parameters for reaching the first location from at least one layer of the artificial intelligence engine.
Dani, in the same field of robot controls, and therefore analogous art, teaches wherein the first trajectory is determined by inputting the sensing data into the artificial intelligence engine and obtaining a set of dynamic parameters for reaching the first location from at least one layer of the artificial intelligence engine (see at least [0016], Fig. 1, and Fig. 3: “the trajectory generation unit 140 may observe a state of the robot and generate a trajectory enabling the robot to perform the task, based on the NN or GMM; and the motion planning unit 150 may apply the trajectory to the robot 110, thus causing the robot 110 to perform the task”).
It would have been obvious, before the effective filing date of the invention, with a reasonable expectation of success, to one having ordinary skill in the art, to combine the Srikanth and Kim combination with the trajectory generation of Dani so the robot can be trained by a non-expert operator (see at least Dani [0003]).
Regarding Claim 14, this claim is substantially similar to Claim 4 and rejected for the same reasons as Claim 4.
Claims 5 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Srikanth in view of Kim in further view of U.S. Pub. No. 2018/0009109 (Norton et al., hereinafter, Norton).
Regarding Claim 5, the Srikanth and Kim combination teaches the limitations of Claim 1. Srikanth further discloses that a target location is the first fastening member. The Srikanth and Kim combination does not explicitly teach wherein the controlling of the first task to be performed further comprises: determining whether to perform the first task based on a first condition, the first condition being set based on a distance or orientation between the first end effector and the first fastening member.
Norton, in the same field of robot controls, and therefore analogous art, teaches wherein the controlling of the first task to be performed further comprises: determining whether to perform the first task based on a first condition, the first condition being set based on a distance or orientation between the first end effector and the target location (see at least [0081]-[0087]: “At block 62, the method includes determining an action for the second robot 22 using the determined position and/or orientation of the second robot 22. For example, an operator may use the user input device 14 to define a target location on a graphical representation of the gas turbine engine 11 displayed by the display 16 (the graphical representation may be generated from the stored data 33 of the structure of the gas turbine engine 11). The controller 12 may compare the position and/or orientation of the target location with the determined position and/or orientation of the second robot 22 to determine an action for the second robot 22.”).
It would have been obvious to one having ordinary skill in the art, before the effective filing date of the claimed invention, to combine the Srikanth and Kim combination with the determinations of Norton, with a reasonable expectation of success, to “enable two or more robots to collaborate and complete an action” (see at least Norton [0107]-[0108]).
Regarding Claim 15, this claim is substantially similar to Claim 5 and is rejected for the same reasons.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALEXANDRA ROBYN MORFORD whose telephone number is (571)272-6109. The examiner can normally be reached Monday - Friday 8:00 AM - 4:00 PM ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Thomas Worden, can be reached at (571) 272-4876. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/A.R.M./Examiner, Art Unit 3658
/JASON HOLLOWAY/Primary Examiner, Art Unit 3658