Prosecution Insights
Last updated: April 19, 2026
Application No. 18/754,893

METHOD FOR OPERATING A COLLABORATIVE ROBOT AND COLLABORATIVE ROBOT FOR CARRYING OUT SAID METHOD

Non-Final OA (§102, §103)
Filed: Jun 26, 2024
Examiner: TANG, BRYANT
Art Unit: 3658
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Neura Robotics GmbH
OA Round: 1 (Non-Final)
Grant Probability: 90% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 6m
Grant Probability with Interview: 87%

Examiner Intelligence

Career Allow Rate: 90% (55 granted of 61 resolved; +38.2% vs TC avg; above average)
Interview Lift: -3.4% (minimal; based on resolved cases with interview)
Typical Timeline: 2y 6m average prosecution; 25 applications currently pending
Career History: 86 total applications across all art units

Statute-Specific Performance

§101: 8.2% (-31.8% vs TC avg)
§102: 29.6% (-10.4% vs TC avg)
§103: 44.9% (+4.9% vs TC avg)
§112: 14.4% (-25.6% vs TC avg)
Tech Center averages are estimates. Based on career data from 61 resolved cases.

Office Action

Rejection bases: §102, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on June 26, 2024, November 20, 2024, and August 1, 2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Claim Objections

Claim 10 is objected to because of the following informalities: Claim 10, Line 1: “A collaborative robot (1, 2) for carrying out a method […]” should be revised to “A collaborative robot (1, 2) for carrying out the method […]”. Appropriate correction is required.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that use the word “means” or “step” but are nonetheless not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph because the claim limitation(s) recite(s) sufficient structure, materials, or acts to entirely perform the recited function. Such claim limitations are: “by means of a camera” and “by means of a projector” in claim 1.

Because these claim limitations are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are not being interpreted to cover only the corresponding structure, material, or acts described in the specification as performing the claimed function, and equivalents thereof. If applicant intends to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to remove the structure, materials, or acts that performs the claimed function; or (2) present a sufficient showing that the claim limitations do not recite sufficient structure, materials, or acts to perform the claimed function.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-2, 4, 6 and 8-11 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Cabrera et al. (“MaskBot: Real-time Robotic Projection Mapping with Head Motion Tracking”), herein “Cabrera”.
Regarding Claims 1 and 10, Cabrera discloses a collaborative robot and method for operating a collaborative robot (1, 2), wherein the collaborative robot (1, 2) monitors a spatial region (16) by means of a camera (14, 31) and wherein the collaborative robot (1, 2) projects information into a projection region (15), which is at least a partial region of the spatial region (16) monitored by the camera (14, 31), by means of a projector (13, 32) which has a fixed geometric relationship with the camera (14, 31) (See Fig. 1 shown below and Abstract, “[…] real-time projection mapping system guided by a 6 Degrees of Freedom (DoF) collaborative robot. The collaborative robot locates the projector and camera in front of the user’s face to increase the projection area and reduce the system’s latency. A webcam is used to detect the face orientation and to measure the robot-user distance. Based on this information we modify the projection size and orientation. MaskBot projects different images on the user’s face”. See also Section 2, “[…] webcam and projector are attached to the robot’s end-effector […] the robot moves towards the human face. In parallel, the projection algorithm lays an image on the particular area of the face […] the system is able to change the projection plane and to produce projection on other objects.” Examiner notes the projection surface of the projector is on the human face, where the camera’s field of view resides, meaning the projection region is always within at least a partial region of the spatial region monitored by the camera. Furthermore, the camera and projector are rigidly mounted on a common bracket attached to the robot’s end-effector, thus preserving a fixed geometric relationship).

[Cabrera, Fig. 1 (image reproduced in the Office Action)]

Regarding Claims 2 and 11, Cabrera further discloses the collaborative robot and method according to claims 1 and 10, characterized in that the camera (14, 31) and the projector (13, 32) are moved together during operation of the collaborative robot (1, 2) with a component which is movably arranged on the collaborative robot (1, 2) and on which the camera (14, 31) and the projector (13, 32) are fixedly arranged in order to define the fixed geometric relationship between the projector (13, 32) and the camera (14, 31) (See Abstract and Section 2 as referenced above, along with Fig. 1 shown above. Examiner notes the camera and projector are fixedly arranged on the end-effector of the robot arm, so that when the arm moves, the camera-projector pair moves together, thus maintaining alignment).

Regarding Claim 4, Cabrera further discloses the method according to claim 2, characterized in that the operating parameters of the projector (13, 32) are changed by the controller of the collaborative robot (1, 2) in such a way that the focus and/or the location onto which the projector (13, 32) projects the information are maintained even when the robot (1, 2) and/or a movable part of the robot (1, 2) on which the camera (14, 31) and the projector (13, 32) are arranged have moved (See Section 1, “[…] increases the projection area avoiding any distortion caused by the user head mis-alignment. The technology behind this project is a Computer Vision(CV) algorithm and a 6 DoF collaborative robot, which controls the orientation of the projector and the camera regardless of the user’s head posture. The position of the human head is tracked with a webcam. When human moves the head, the robotic projector always pursues the human face and emits light towards the user’s skin.”).
Regarding Claim 6, Cabrera further discloses the method according to claim 1, characterized in that the information projected by the projector (13, 32) includes information about an upcoming action of the collaborative robot (1, 2) (See Abstract, “MaskBot projects different images on the user’s face, such as face modifications, make-up, and logos.” See also Section 3, “[…] using face landmark detection […] landmarks are detected on the user’s face. These points are used to locate different objects on the face and to fit on the user’s face […] a camera-projector calibration is going to be carried. A series of masks will be projected on the face.” Examiner notes the collaborative robot method of using detected landmarks on a user’s face to determine a series of masks to be projected onto the user’s face is the same as information about an upcoming action of the collaborative robot).

Regarding Claim 8, Cabrera further discloses the method according to claim 1, characterized in that the information projected by the projector (13, 32) includes safety-relevant information (See Section 1, “The collaborative robot safety protocols (collision detection, force, and speed limitation) guarantee secure human-robot interaction.”).

Regarding Claim 9, Cabrera further discloses the method according to claim 1, characterized in that the information projected by the projector (13, 32) depends on the result of an evaluation of the camera image (See Abstract as referenced above, along with “[…] webcam is used to detect the face orientation and to measure the robot-user distance. Based on this information we modify the projection size and orientation.” Examiner notes this supports camera feedback as the basis for driving projection adjustments).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 3 and 12 are rejected under 35 U.S.C. 103 as being obvious over Cabrera et al. (“MaskBot: Real-time Robotic Projection Mapping with Head Motion Tracking”) in view of Lonn (EP Patent Pub. No. 2 052 551 B1).

Regarding Claims 3 and 12, Cabrera discloses the collaborative robot and method according to claims 1 and 10, but does not explicitly disclose the method characterized in that a controller of the collaborative robot (1, 2) adapts operating parameters or image data for the projector (13, 32) on the basis of data from the camera image and/or adapts operating data or image data from the camera (14, 31) on the basis of information projected by the projector (13, 32).
Lonn, in a similar field of endeavor, teaches a controller of the collaborative robot (1, 2) adapts operating parameters or image data for the projector (13, 32) on the basis of data from the camera image and/or adapts operating data or image data from the camera (14, 31) on the basis of information projected by the projector (13, 32) (See 0042-0043, “[…] a device may include a projector; a camera to capture a picture of a surface; and logic to receive the picture, and adjust a setting of the projector based on the received picture […] a projector includes a camera to capture a picture of an image as it is being projected onto a surface by the projector; and logic to receive the image and the captured picture, compare a feature of the image to a feature of the captured picture, and automatically adjusting an output of the projector when the feature of the captured picture does not substantially match the feature of the image.”).

In view of Lonn’s teachings, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to include, with the collaborative robot control method sensing user positions and distance using camera data as disclosed by Cabrera, the method to involve adjustment of projection parameters using camera feedback, with a reasonable expectation of success, since the control method and robot already include the necessary components of a responsive projection system, and including parameter adjustment based on image data from the camera enhances the precision of the resulting projections as the user or robot moves.

Claim 5 is rejected under 35 U.S.C. 103 as being obvious over Cabrera et al. (“MaskBot: Real-time Robotic Projection Mapping with Head Motion Tracking”) in view of Lonn (EP Patent Pub. No. 2 052 551 B1) as applied to claim 3 above, and further in view of Li et al. (US Patent Pub. No. 2009/0245690 A1), herein “Li”.

Regarding Claim 5, Cabrera in combination with Lonn teaches the method according to claim 3, but does not explicitly teach the method characterized in that the image evaluation of the images from the camera (14, 31) is calibrated or refined by the controller of the collaborative robot (1, 2) on the basis of an evaluation of images and/or structured light beams projected by the projector (13, 32) into the spatial region monitored by the camera (14, 31).

Li, in a similar field of endeavor, teaches the image evaluation of the images from the camera (14, 31) is calibrated or refined by the controller of the collaborative robot (1, 2) on the basis of an evaluation of images and/or structured light beams projected by the projector (13, 32) into the spatial region monitored by the camera (14, 31) (See 0011, “[…] system comprises a projector for projecting a light pattern onto an object in a scene, a camera for obtaining an image of said object, and a computer for controlling the vision system, wherein the computer implements a self-recalibration method. Further, the self-recalibration method comprises defining a camera plane and a projector plane, computing a Homography matrix between the camera plane and the projector plane, and determining a translation vector and a rotation matrix from Homography-based constraints.” Examiner notes the camera-projector pairing is able to refine alignment using structured-light calibration through parameter adjustments based on a calculated matrix using two planes).
In view of Li’s teachings, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to include, with the collaborative robot control method comprising the camera and projector able to adapt operating parameters based on camera data or projection information as disclosed by Cabrera in view of Lonn, a recalibration method using structured-light vision processing comprising the camera and projector’s outputs, with a reasonable expectation of success, since this improves the precision of resulting projections and compensates for motion drift and thermal changes that may distort measurements, thus maintaining accuracy in camera data and projection information.

Claim 7 is rejected under 35 U.S.C. 103 as being obvious over Cabrera et al. (“MaskBot: Real-time Robotic Projection Mapping with Head Motion Tracking”) in view of Ganesan et al. (“Better Teaming Through Visual Cues: How Projecting Imagery in a Workspace Can Improve Human-Robot Collaboration”), herein “Ganesan”.

Regarding Claim 7, Cabrera discloses the method according to claim 1, but does not explicitly disclose the method characterized in that the information projected by the projector (13, 32) includes information about a next action to be taken by a human colleague collaborating with the collaborative robot (1, 2).

Ganesan, in a similar field of endeavor, teaches the information projected by the projector (13, 32) includes information about a next action to be taken by a human colleague collaborating with the collaborative robot (1, 2) (See Fig. 5 shown below and Pg. 60 Col. 2, “Throughout the collaboration, human subjects received just-in-time visual signals related to the task. In addition to projecting instructions and information, the system also provided visual feedback regarding the effectiveness of the task currently being carried out by the human.” Examiner notes Fig. 5 below distinctly shows visual cues projected by a projector, with the visual cues indicating “next tasks, actions, intentions”).

[Ganesan, Fig. 5 (image reproduced in the Office Action)]

In view of Ganesan’s teachings, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to include, with the collaborative robot and control method sensing user positions and distance using camera data for a collaborative operation with a human as disclosed by Cabrera, the projector in the robot system to project visual cues to communicate next tasks or actions to a human collaborator, with a reasonable expectation of success, since the collaborative robot already comprises the necessary components to do so, and conveying actionable task information to human collaborators enhances the cooperative performance between the robot and the human.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Bryant Tang whose telephone number is (571) 270-0145. The examiner can normally be reached M-F 8-5 CST.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Thomas Worden, can be reached at (571) 272-4876.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/BRYANT TANG/
Examiner, Art Unit 3658

/JASON HOLLOWAY/
Primary Examiner, Art Unit 3658
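The Li reference relied on against claim 5 describes computing a homography between the camera plane and the projector plane and then recovering a translation vector and rotation matrix from homography-based constraints. The sketch below illustrates that general idea only; it assumes OpenCV, and the point correspondences and the intrinsics matrix K are synthetic placeholders rather than data from the cited references or from this application.

```python
# Illustrative sketch of homography-based camera-projector self-recalibration,
# in the spirit of the Li reference cited against claim 5. All numbers below
# are synthetic; nothing here comes from the cited references or the claims.
import numpy as np
import cv2

# Known 2D positions of a projected calibration pattern in the projector plane.
projector_pts = np.array([[100, 100], [500, 100], [500, 400], [100, 400],
                          [300, 250]], dtype=np.float32)

# Hypothetical detections of those same pattern points in the camera image.
camera_pts = np.array([[132, 118], [522, 96], [541, 412], [151, 430],
                       [336, 262]], dtype=np.float32)

# Homography mapping projector-plane points onto the camera plane.
H, _ = cv2.findHomography(projector_pts, camera_pts, cv2.RANSAC)

# Assumed (hypothetical) camera intrinsics for the decomposition step.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

# Decompose H into candidate rotation/translation solutions; a real system
# would select the physically plausible pose and use it to refine the
# camera-projector extrinsics before re-evaluating camera images.
n_solutions, rotations, translations, normals = cv2.decomposeHomographyMat(H, K)
print("Homography:\n", H)
print("Candidate poses:", n_solutions)
```

In a deployed system the correspondences would come from images or structured light cast by the projector into the region the camera monitors, which is the calibration loop the Office Action maps to claim 5.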

Prosecution Timeline

Jun 26, 2024
Application Filed
Oct 08, 2025
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12594942
Method and Apparatus for Detecting Complexity of Traveling Scenario of Vehicle
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12594967
METHOD AND SYSTEM FOR ADDRESSING FAILURE IN AN AUTONOMOUS AGENT
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12583115
ENHANCED VISUAL FEEDBACK SYSTEMS, ENHANCED SKILL LIBRARIES, AND ENHANCED FUNGIBLE TOKENS FOR THE OPERATION OF ROBOTIC SYSTEMS
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12558964
VEHICLE PROVIDING NOTIFICATION INFORMATION FOR SAFETY OF A USER
Granted Feb 24, 2026 (2y 5m to grant)
Patent 12548450
VEHICLE CONTROL DEVICE, VEHICLE CONTROL METHOD, AND NON-TRANSITORY STORAGE MEDIUM STORING VEHICLE CONTROL PROGRAM
Granted Feb 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 90%
With Interview: 87% (-3.4%)
Median Time to Grant: 2y 6m
PTA Risk: Low
Based on 61 resolved cases by this examiner. Grant probability derived from career allow rate.
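The headline figures above follow from simple arithmetic on the counts shown on this page. A minimal sketch of that derivation, assuming the 55-of-61 grant count, the reported -3.4% interview lift, and the reported +38.2% delta versus the Tech Center average (the implied Tech Center baseline is a back-calculated estimate, not an official statistic):

```python
# Sketch of how the headline projections relate to the examiner's career counts
# shown on this page. The TC baseline is back-calculated and is only an estimate.

granted, resolved = 55, 61        # examiner's granted vs. resolved career cases
interview_lift = -0.034           # reported interview lift (-3.4%)
delta_vs_tc = 0.382               # reported +38.2% vs Tech Center average

allow_rate = granted / resolved                  # ~0.902 -> "90% Grant Probability"
with_interview = allow_rate + interview_lift     # ~0.868 -> "87% With Interview"
tc_average_estimate = allow_rate - delta_vs_tc   # ~0.520 implied TC baseline

print(f"Career allow rate:             {allow_rate:.1%}")
print(f"Grant probability w/ interview: {with_interview:.1%}")
print(f"Implied TC average (estimate):  {tc_average_estimate:.1%}")
```

This mirrors the note above that grant probability is derived from the career allow rate; it is a heuristic projection, not an official USPTO statistic.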
