DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Drawings
The drawings are objected to because they do not include reference characters corresponding to those in the written specification. The examiner suggests adding corresponding reference characters to both the specification and the figures that identify at least the method steps/modules. Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Claim Objections
Claims 1-7 are objected to because of the following informalities:
For improved clarity, the examiner suggests uncapitalizing the words following the roman numerals. For example, “(i) creation
For improved clarity, the examiner suggests amending the start of each claim as follows:
Claim 1: “A control
Claims 2-7: “The control platform
Based on context, the examiner assumes the “historical processes, product, environment and resources data located in a database” means ‘historical data’ from processes, product, environment and resources in a database rather than ‘historical processes’ data, product data, environment data, and resources data located in a database. Assuming the interpretation is correct, an amendment to claim 1 is recommended to improve clarity: “(ii) application data from processes, product, environment and resources
For improved clarity, the examiner suggests amending claim 1 as follows: “(iii) control physical media reference data and historical data combined for automatic generation of trajectories for autonomous systems;”
Appropriate correction is required.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “anti-collision subsystem” in claims 1 and 6; “self-protection module” in claim 1.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. The specification describes the structure of the “anti-collision subsystem” as comprising a plurality of sensors according to claim 5 and paragraph [0026]. However, the specification merely repeats “self-protection module” in paragraphs [0016] and [0026] and does not provide sufficient structure for the “self-protection module.”
Because the specification does not disclose structure corresponding to the “self-protection module,” as explained in the rejection under 35 U.S.C. 112(b) below, no definite construction of this limitation under 112(f) is possible.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1-7 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Claim 1 recites “a self-protection module in monitoring of automatic trajectories of the autonomous systems.” The specification lacks structure that is clearly linked to the “self-protection module” interpreted under 112(f). The specification recites: ([0026]) “for the autonomous systems not to collide due to alterations in the environment wherein they will execute their trajectories, an anti-collision subsystem acts in this trajectory monitoring of the autonomous systems by means of a self-protection module. This anti-collision subsystem comprises a plurality of sensors such as ultrasonic sensors, Lidars and/or cameras, for example, and which monitor the environment forming a volume allowed at each point of the trajectory.” Accordingly, the specification describes the structure of the anti-collision subsystem as sensors and describes that the self-protection module performs trajectory monitoring of the autonomous systems. However, there is no description of what the module is. Is it a processor, a sensor, software, or something else? Because there is no disclosure of adequate structure to perform the claimed function, the specification does not convey with reasonable clarity to those skilled in the art that the applicant had possession of the claimed invention. Accordingly, claim 1 is rejected under 35 U.S.C. 112(a). Claims 2-7 are also rejected for being dependent on a rejected base claim and failing to cure the deficiencies.
Claim 1 recites “(v) Use of Artificial Intelligence (AI) algorithms for active control and modification of the automatic trajectories of the autonomous systems.” The specification lacks a description of how the AI algorithms are used for active control and modification of the automatic trajectories. Paragraph [0002] provides an example of what the AI algorithm is (“using artificial intelligence, for example, artificial neural networks for system feedback and decision making in the autonomous execution of activities.”), and paragraphs [0027-0030] describe what the AI algorithms perform (e.g., “the AI can generate an original trajectory or make the monitoring and correction of the generated trajectories”). Paragraph [0028] further describes that for the trajectory generation/correction, “the AI is trained in simulation environment (Digital twin of the product/process). Algorithms and AI techniques, for example Reinforcement Learning, are used for learning to make decisions on the motion of each degree of freedom of the systems which allow building the trajectory up to the final objective without colliding.” This description of reinforcement learning is recited only at a high level (i.e., it is trained in a simulation and learns to make decisions on motions to make a trajectory without colliding) and does not provide one of ordinary skill in the art an adequate understanding of how the AI algorithms work. “Learning to make decisions on the motion” is not a sufficient description of how the AI algorithms perform active control and modification of the automatic trajectories using reinforcement learning. Accordingly, claim 1 is rejected under 35 U.S.C. 112(a) as lacking written description. Claims 2-7 are also rejected because they do not resolve the deficiencies of claim 1.
Claim 1 recites “the control platform comprising consumption of process, product, environment and resources data” and “(vi) Control and monitoring of resources.” The specification lacks a description of what the resources are. “Resources” is a very broad term, and it is not clear to one of ordinary skill in the art what the resource data comprises and what is being controlled in the step of “(vi) Control and monitoring of resources.” Paragraph [0030] recites “the resources and parameters of the autonomous systems during the execution of the generated trajectories are controlled and monitored, with the purpose of ensuring the integrity of the product and of the resources used.” Paragraph [0035] recites “the generation of data for history, which is subsequently used for generating new, improved trajectories, provides greater precision in the use of resources and better product quality.” Accordingly, it is described that the resources are related to a trajectory. However, the specification repeats the term “resources” numerous times without providing any examples or definitions. Accordingly, claim 1 is rejected under 35 U.S.C. 112(a) as lacking written description. Claims 2-7 are also rejected because they do not resolve the deficiencies of claim 1.
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-7 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 6 recites “wherein the anti-collision subsystem further comprises a warning area for recalculation of trajectory and an immediate stop area.” It is unclear how the anti-collision subsystem, which is interpreted under 112(f) as comprising sensors, comprises ‘areas.’ Are the warning area and the immediate stop area somehow part of the structure of the anti-collision subsystem, or does the anti-collision subsystem monitor the claimed areas? The claim language makes it appear that the areas are part of the subsystem, and it is unclear how that would be possible. For examination purposes, the claim is interpreted wherein the anti-collision subsystem monitors an environment that includes a warning area for recalculation of trajectory and an immediate stop area.
Claim 7 recites the limitation "learning the modifications in the trajectories and behavior of the autonomous systems." There is insufficient antecedent basis for the “behavior” in the claim. It is unclear what the “behavior” is referencing because a behavior of the autonomous systems is not mentioned in the parent claims. How is the behavior different from the trajectories? For examination purposes, the claim is interpreted as reciting “learning the modifications in the trajectories and a behavior of the autonomous systems.”
Claim 1 recites “a self-protection module in monitoring of automatic trajectories of the autonomous systems.” Claim limitation “self-protection module” invokes 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. The specification recites: ([0026]) “for the autonomous systems not to collide due to alterations in the environment wherein they will execute their trajectories, an anti-collision subsystem acts in this trajectory monitoring of the autonomous systems by means of a self-protection module. This anti-collision subsystem comprises a plurality of sensors such as ultrasonic sensors, Lidars and/or cameras, for example, and which monitor the environment forming a volume allowed at each point of the trajectory.” Accordingly, the specification describes the structure of the anti-collision subsystem as sensors and describes that the self-protection module performs trajectory monitoring of the autonomous systems. However, there is no description of what the module is. Is it a processor, a sensor, software, or something else? Therefore, the claim is indefinite because the structure of the self-protection module is unclear, and claim 1 is rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph. Claims 2-7 are also rejected for being dependent on a rejected base claim and failing to cure the deficiencies.
Applicant may:
(a) Amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph;
(b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(c) Amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)).
If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either:
(a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(b) Stating on the record what the corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed function. For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-4 and 7 are rejected under 35 U.S.C. 102(a)(1) and (a)(2) as being anticipated by Oleynik (US 20190291277 A1).
Regarding Claim 1,
Oleynik teaches
Control platform for autonomous systems, (“Systems and methods are provided for operating universal robotic assistant systems.” See at least [0008])
the control platform comprising consumption of process, product, environment and resources data of a digital model for automatic generation of trajectories for autonomous systems, (“A minimanipulation library provides a large suite of higher-level sensing-and-execution sequences that are common building blocks for complex tasks.” See at least [0041], wherein the minimanipulation library is the data of a digital model.; “a database library structure 972 of minimanipulation objects for use in the standardized robotic kitchen. The database library structure 972 shows several fields for entering and storing information for a particular minimanipulation, including (1) the name of the minimanipulation, (2) the assigned code of the minimanipulation, (3) the code(s) of standardized equipment and tools associated with the performance of the minimanipulation, (4) the initial position and orientation of the manipulated (standard or non-standard) objects (ingredients and tools), (5) parameters/variables defined by the user (or extracted from the recorded recipe during execution), (6) sequence of robotic hand movements (control signals for all servos) and connecting feedback parameters (from any sensor or video monitoring system) of minimanipulations on the timeline.” See at least [0535]; See at least [0786], describing metrics such as time required and energy-expended corresponding to stored mini-manipulation action primitive (AP) components, wherein the metrics are historical resource data in a database.; “The method of mini-manipulation command generation for one or both the macro- or micro-manipulation subsystems, comprises receiving a high-level task execution command, identifying individual subtasks which will be mapped to the applicable robotic subsystems, generation of individual performance criteria and measurable success end-state criteria for each of the above subtasks, selection of one or more in either a stand-alone or combination, of the most suitable action primitive candidates, evaluation of these action primitive alternatives for maximizing or minimizing such measures as execution-time, energy expended, robot reachability, collision avoidance or any other task-critical criteria, generation of either or both macro- and/or micro-manipulation subsystem trajectories in one or more motion spaces.” See at least [1289])
the control platform for autonomous systems further comprising at least one processor and/or processing circuit performing operations comprising: (“a computer device, as shown in 4324, on which computer-executable instructions to perform the methodologies discussed herein may be installed and run. … The example computer system 4324 includes a processor 4326 (e.g., a central processing unit (CPU)” See at least [1271])
(i) Creation of physical media reference data; (“the position data and the image data are obtained from the one or more sensors, wherein the one or more sensors comprises at least one of a navigation system and one or more image capturing devices. In some embodiments, detecting the one or more objects is based on at least one of the type of the current environment, the environment data corresponding to the current environment, and object data.” See at least [0012-0013], wherein the sensor data is physical media reference data.)
(ii) Application of historical processes, product, environment and resources data located in a database; (“integration of electronic libraries of mini-manipulations with transformed robotic instructions for replicating movements, processes, and techniques with real-time electronic adjustments.” See at least [0002], wherein the electronic libraries of mini-manipulations includes historical processes, product, environment and resources data according to the citations above (See at least [0535] and [0786]).)
(iii) Control of digital information, the reference data and historical data combined for automatic generation of trajectories for autonomous systems; (“During the food preparation process, the robotic apparatus 75 uses touch signals generated by sensors in the fingertips and the palms of a robot's hands to detect force, temperature, humidity and toxicity as the robot replicates step-by-step movements and compares the sensed values with the tactile profile of the chef's studio cooking program. Visual sensors help the robot to identify the surroundings and take appropriate cooking actions. The robotic apparatus 75 analyzes the image of the immediate environment from the visual sensors and compares it with the saved image of the chef's studio cooking program, so that appropriate movements are made to achieve identical results.” See at least [0509]; “Raw data is collected at each point in time to allow the raw data to be processed to be able to extract the shape, dimension, location and orientation of all objects of importance to the different steps in the multiple sequential stages of dish replication in the standardized robotic kitchen 50 in a step 1162. The processed data is further analyzed by the computer system to allow the controller of the standardized robotic kitchen to adjust robotic arm and hand trajectories and minimanipulations, by modifying the control signals defined by the robotic script. Adaptations to the recipe-script execution and thus control signals is essential in successfully completing each stage of the replication for a particular dish, given the potential for variability for many variables (ingredients, temperature, etc.).” See at least [0547]; Examiner Interpretation: Adapting the control of the robot using the stored minimanipulation data (historical data) and the raw sensor data (reference data) is equivalent to the control of digital information.)
(iv) Action of an anti-collision subsystem by means of a self-protection module in monitoring of automatic trajectories of the autonomous systems; (“To use a cached trajectory for an APSB, either all of the saved environment needs to be identical with the current environment, or the saved trajectory needs to be checked for collisions in the current environment. To check the whole saved trajectory for collisions with the environment in an efficient manner, its bounding volume is calculated and saved together with it. This allows testing saved trajectories for validity in real time.” See at least [0855-0856]; “The processors 5002r-2 can also share data, such as information about workspace models that define the robotic assisted workspaces (e.g., robotic assisted workspace 5002w). Such information can include, for example, data about objects in the workspace, including their position, size, types, materials, gravity directions, weights, velocities, expected positions and the like. Using this information, the low-level processor 5002r-2b corresponding to a particular part of the robot anatomy 5002r-1 (e.g., end effector 5002r-1c) can control that part to interact with or manipulate objects more effectively, for instance, by avoiding collisions.” See at least [0915]; “This camera location helps to observe the whole workspace and update it's virtual model, that can be used for collision avoidance, motion planning and etc.” See at least [0985])
(v) Use of Artificial Intelligence (AI) algorithms for active control and modification of the automatic trajectories of the autonomous systems; (“The present disclosure relates to fields of robotics and artificial intelligence (AI). … integration of electronic libraries of mini-manipulations with transformed robotic instructions for replicating movements, processes, and techniques with real-time electronic adjustments.” See at least [0002]; “learning algorithms monitor each and every motion/interaction sequence and perform simple variable-perturbations to ascertain outcome to decide on if/how/when/what variable(s) and sequence(s) to modify in order to achieve a higher level of execution fidelity at levels ranging from low-to high-levels of various MMLs.” See at least [0052]; Also see at least [0383], [0992], and [0996-0998] for reinforcement learning.)
(vi) Control and monitoring of resources and parameters; and (vii) Automatic inspection and generation of historical data from processes, product, environment and resources for database feedback. (“The architecture of the software-module/action layer provides a framework that allows the inclusion of: (1) refined Endeffector sensing (for refined and more accurate real-world interface sensing); (2) introduction of the macro-(overall sensing by and from the articulated base) and micro-(local task-specific sensing between the endeffectors and the task-/cooking-specific elements) tiers to allow continuous minimanipulation libraries to be used and updated (via learning) based on a physical split between coarse and fine manipulation (and thus positioning, force/torque control, product-handling and process monitoring);” See at least [0767]; “Said motion commands are sequentially fed to an execution block 3613, which controls all instrumented articulated and actuated joints in at least joint- or Cartesian space to ensure the movements track the commanded trajectories in position/velocity and/or torque/force. A feedback sensing block 3614 provides feedback data from all sensors to the execution block 3613 as well as an environment perception block/module 3611 for further processing. Feedback is not only provided to allow tracking the internal state of variables, but also sensory data from sensor measuring the surrounding environment and geometries. Feedback data from said module 3614 is used by the execution module 3613 to ensure actual values track their commanded setpoints, as well as an environment perception module 3611 to image and map, model and identify the state of each articulated element, the overall configuration of the robot as well as the state of the surrounding environment the robot is operating in. Additionally, said feedback data is also provided to a learning module 3615 responsible for tracking the overall performance of the system and comparing it to known required performance metrics, allowing one or more learning methods to develop a continuously updated set of descriptors that define all mini-manipulations contained within their respective mini-manipulation library 3630, in this case the macro-level mini-manipulation sublibrary 3631.” See at least [0772]; Examiner Interpretation: The feedback/sensing is the monitoring and inspection. The feedback data is generated for updating the minimanipulation library/database (learning) and therefore is generated for database feedback.)
Regarding Claim 2,
Oleynik further teaches
wherein the physical media reference data comprises results from a plurality of sensors displayed in environments wherein the autonomous systems perform the generated trajectories. (“the position data and the image data are obtained from the one or more sensors, wherein the one or more sensors comprises at least one of a navigation system and one or more image capturing devices. In some embodiments, detecting the one or more objects is based on at least one of the type of the current environment, the environment data corresponding to the current environment, and object data.” See at least [0012-0013], wherein the sensor data is physical media reference data.; “Raw data is collected at each point in time to allow the raw data to be processed to be able to extract the shape, dimension, location and orientation of all objects of importance to the different steps in the multiple sequential stages of dish replication in the standardized robotic kitchen 50 in a step 1162. The processed data is further analyzed by the computer system to allow the controller of the standardized robotic kitchen to adjust robotic arm and hand trajectories … The process of recipe-script execution based on key measurable variables is an essential part of the use of the augmented (also termed multi-modal) sensor system 20 during the execution of the replicating steps for a particular dish in a standardized robotic kitchen 50.” See at least [0547])
Regarding Claim 3,
Oleynik further teaches
wherein the physical media reference data are compared with digital model data for feeding the platform with confirmations or alterations of the generated trajectories. (“During the food preparation process, the robotic apparatus 75 uses touch signals generated by sensors in the fingertips and the palms of a robot's hands to detect force, temperature, humidity and toxicity as the robot replicates step-by-step movements and compares the sensed values with the tactile profile of the chef's studio cooking program. Visual sensors help the robot to identify the surroundings and take appropriate cooking actions. The robotic apparatus 75 analyzes the image of the immediate environment from the visual sensors and compares it with the saved image of the chef's studio cooking program, so that appropriate movements are made to achieve identical results.” See at least [0509]; “Raw data is collected at each point in time to allow the raw data to be processed to be able to extract the shape, dimension, location and orientation of all objects of importance to the different steps in the multiple sequential stages of dish replication in the standardized robotic kitchen 50 in a step 1162. The processed data is further analyzed by the computer system to allow the controller of the standardized robotic kitchen to adjust robotic arm and hand trajectories and minimanipulations, by modifying the control signals defined by the robotic script. Adaptations to the recipe-script execution and thus control signals is essential in successfully completing each stage of the replication for a particular dish, given the potential for variability for many variables (ingredients, temperature, etc.).” See at least [0547])
Regarding Claim 4,
Oleynik further teaches
wherein the historical data are used as reference for parameters for new products. (“FIG. 23 is a flow diagram illustrating the process 926 of identifying a non-standard object through three-dimensional modeling and reasoning. At step 928, the computer 16 detects a non-standard object by a sensor, such as an ingredient that may have a different size, different dimensions, and/or different weight. At step 930, the computer 16 identifies the non-standard object with three-dimensional modeling sensors 66 to capture shape, dimensions, orientation and position information and robotic hands 72 make a real-time adjustment to perform the appropriate food preparation tasks (e.g. cutting or picking up a piece of steak). … A minimanipulation or an action primitive may involve the robotic hand 72 and a standard object, or the robotic hand 72 and a nonstandard object. … The parameters for a particular minimanipulation may differ depending on the complexity and objects that are necessary to perform the minimanipulation. In this example, four parameters are identified: the starting XYZ position coordinates in the volume of the standardized kitchen module, the speed, the object size, and the object shape. Both the object size and the object shape may be defined or described by non-standard parameters.” See at least [0532-0535], wherein the identified object or non-standard object is a new product.)
Regarding Claim 7,
Oleynik further teaches
wherein the Artificial Intelligence (AI) algorithms, for active control and modification of the automatic trajectories of the autonomous systems use data originating from the digital model, the plurality of sensors and the anti-collision subsystem for learning the modifications in the trajectories and behavior of the autonomous systems. (“executing said commands through position or velocity or joint or force based control at the joint-actuator level, and providing sensory data back to the macro-manipulation control and perception subsystems, while also monitoring all processes to allow for learning algorithms to provide improvements to the mini-manipulation macro-level command-library to improve future performance based on criteria such as execution-time, energy-expended, collision-avoidance, singularity-avoidance and workspace-reachability.” See at least [0798], wherein the learning algorithms are AI algorithms.)
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Oleynik (US 20190291277 A1) in view of Junio (US 20230113312 A1).
Regarding Claim 5,
Oleynik does not explicitly teach, but Junio teaches
wherein the anti-collision subsystem comprises a plurality of sensors which monitor an environment forming allowed volumes at points of the trajectories and issuing recalculation commands whenever these allowed volumes are violated. (“The navigation system 118 may include one or more cameras or other sensor(s) for tracking one or more reference markers, navigated trackers, or other objects within the operating room or other room in which some or all of the system 100 is located.” See at least [0068]; “the pose of the object may be received from a navigation camera of a navigation system such as the navigation system 118.” See at least [0079]; “The method 200 comprises generating one or more no-fly zones (step 204). The one or more no-fly zones may correspond to a section of a work volume defined as inaccessible or off limits to a robotic arm such as the robotic arm 116 and/or any portion of a robot such as the robot 114. The work volume defines a volume of space surrounding or in a patient in which the robotic arm may access. … the work volume may be defined by defining a volume of the operating room and subtracting the one or more no-fly zones from the volume. The volume of the operating room may be determined by, for example, a processor such as the processor 104 based on information about the operating room. The information may be, for example, an image, sensor data, Lidar data, and/or electro-magnetic data.” See at least [0074-0075]; “the step 312 comprises repeating the steps 204 and 212 of method 200 described above. In other words, the one or more no-fly zones may be re-generated (step 204) and/or the obstacles map may be regenerated (step 212) based on the updated pose of the object. In such embodiments, the path may be updated based on the regenerated one or more no-fly zones and/or the regenerated obstacles map.” See at least [0099]; Also see at least [0084-0090] for monitoring the robot and object (no-fly zone) positions and control signaling to cause the robotic arm to move away from the object when a threshold distance is exceeded.)
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the teachings of Oleynik to further include the teachings of Junio with a reasonable expectation of success to define accessible and inaccessible work volumes and thereby improve the system's awareness of the environment for collision avoidance. (See at least [0074-0076])
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Oleynik (US 20190291277 A1) in view of Zak (US 20230001587 A1) and Medasani (US 20150294496 A1).
Regarding Claim 6,
Oleynik does not explicitly teach, but Zak teaches
wherein the anti-collision subsystem further comprises a warning area (“the method for moving an item 21 using a fenceless robot system 100 also includes detecting the person in one of a plurality of zones around the robot being an outer safety zone; and limiting a speed of the robot 30 in response to detecting the person in outer safety zone. … the method for moving an item 21 using a fenceless robot system 100 also includes detecting the person in one of a plurality of zones around the robot being an inner safety zone located between the robot and the outer safety zone; and immediately stopping the robot from moving in response to detecting the person in the inner safety zone” See at least [0074-0075], wherein the outer safety zone is a warning area and an inner safety zone is an immediate stop area.)
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the teachings of Oleynik to further include the teachings of Zak with a reasonable expectation of success to improve safety while maintaining robot efficiency (See at least [0005-0010]).
Zak also does not explicitly teach, but Medasani teaches
a warning area for recalculation of trajectory (“The warning zone and the critical zones (as well as any other zones desired to be configured in the system, including dynamic zones) are operating areas where alerts are provided, as initiated in block 54, when the person has entered the respective zone and is causing the equipment to slow, stop or otherwise avoid the person.” See at least [0078]; “the human monitoring system can adjust and dynamically reconfigure the automated moveable factory equipment to avoid potential interactions with the person of within the workspace area without having to stop the automated equipment. This may include determining and traversing a new path of travel for the automated moveable equipment.” See at least [0108])
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the teachings of Oleynik and Zak to further include the teachings of Medasani with a reasonable expectation of success to “dynamically reconfigure the automated moveable factory equipment to avoid potential interactions with the person of within the workspace area without having to stop the automated equipment” (See at least [0108]), thereby further improving productivity while maintaining safety.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Huang (US 20200171671 A1) is pertinent because it discusses using AI to recognize a command given to a particular robot, assess the environment, and determine a best course of action based on tasks successfully performed in similar environments or given similar obstacles.
Lin (US 20220063099 A1) is pertinent because it discusses adapting a stored trajectory to the current environment and storing the trajectories in a repository/database for later use.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Karston G Evans whose telephone number is (571)272-8480. The examiner can normally be reached Mon-Fri 9:00-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Abby Lin can be reached at (571)270-3976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KARSTON G. EVANS/Examiner, Art Unit 3657