Prosecution Insights
Last updated: April 19, 2026
Application No. 18/558,772

Robot Device Configured to Determine an Interaction Machine Position of at Least One Element of a Predetermined Interaction Machine, and Method

Status: Non-Final OA (§103)
Filed: Nov 03, 2023
Examiner: SINGH, ESVINDER
Art Unit: 3657
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: BAYERISCHE MOTOREN WERKE AKTIENGESELLSCHAFT
OA Round: 3 (Non-Final)
Grant Probability: 75% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 9m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 75% (147 granted / 195 resolved; +23.4% vs TC avg; above average)
Interview Lift: +23.7% on resolved cases with interview
Typical Timeline: 2y 9m avg prosecution; 31 applications currently pending
Career History: 226 total applications across all art units

Statute-Specific Performance

§101: 6.7% (-33.3% vs TC avg)
§102: 15.1% (-24.9% vs TC avg)
§103: 57.0% (+17.0% vs TC avg)
§112: 18.5% (-21.5% vs TC avg)
Tech Center averages are estimates • Based on career data from 195 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

This is a non-final Office action in response to the RCE filed on 02/20/2026. Claims 11 and 13-20 remain pending. Claims 11 and 20 have been amended.

Information Disclosure Statement

The Information Disclosure Statement filed on 02/20/2026 has been considered. An initialed copy of the Form 1449 is enclosed herewith.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 11, 13-18, and 20 are rejected under 35 U.S.C.
103 as being unpatentable over Mason et al. (US 20160132059 A1) in view of Himane et al. (US 11915097 B1) and Taylor et al. (US 20200376671 A1) (hereinafter referred to as Mason, Himane, and Taylor, respectively).

Regarding Claims 11 and 20, Mason discloses a robot device (See at least Mason Paragraphs 0073-0075; the first robotic device is interpreted as the robot device) and a method for determining an interaction machine position of at least one element of a predetermined interaction machine with respect to a robot device (See at least Mason Paragraphs 0003, 0096, and Figure 4), comprising:

an optical detection device configured to detect…a surrounding area image of an area surrounding the robot device (See at least Mason Paragraphs 0094, 0100, and 0102; the camera is interpreted as the optical detection device); and

a control device (See at least Mason Paragraphs 0003, 0039, and Figure 1b; the control system is interpreted as the control device) in which a predetermined reference marking and a predetermined reference position of the reference marking with respect to at least one element of an interaction machine are stored (See at least Mason Paragraphs 0035, 0096, 0098, and 0109-0110; the predetermined reference marking/tag is stored in a database, and the position of the tag/marker with respect to an element/component of an interaction machine/second robotic device, which includes vehicles, is also stored in the database), wherein the control device is configured to:

detect…an image detail that shows the reference marking of the interaction machine in the surrounding area image of the area surrounding the robot device (See at least Mason Paragraphs 0094, 0100, and 0102; the image with the reference marking/tag is detected),

detect the predetermined reference marking in the image detail (See at least Mason Paragraphs 0094, 0100, and 0102; the reference marking/tag is detected in the image),

determine…a distortion of the predetermined reference marking in the image detail (See at least Mason Paragraph 0096; the difference in apparent position of the tag/marking is interpreted as a distortion),

determine a spatial position of the reference marking with respect to the robot device from the distortion of the reference marking (See at least Mason Paragraphs 0096-0098; the relative positioning of the marking/tag with respect to the first robotic device is determined from the distortion/difference in apparent position),

determine an interaction machine position of at least one element of the interaction machine with respect to the robot device from the spatial position of the reference marking with respect to the robot device and the reference position of the reference marking with respect to the at least one element of the interaction machine (See at least Mason Paragraphs 0093, 0096-0098, and 0108; the relative positioning between elements/components of the interaction machine/second robotic device and the first robotic device is determined from the spatial position of the marking/tag and the position of the marking/tag with respect to the element/component of the interaction machine/second robotic device), and

subject the robot device to closed-loop control and/or open-loop control for performing a predetermined interaction with the at least one element of the interaction machine in the interaction machine position (See at least Mason Paragraphs 0080, 0091-0093, 0101, and 0103; the robot is subject to closed-loop control by using feedback regarding relative positioning for performing a collaborative operation/predetermined interaction with the interaction machine/second robotic device).
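The localization chain the rejection maps onto Mason (a marker detected in the image, the marker's pose recovered from its apparent distortion, then the element's pose obtained by composing that pose with the stored reference position) is, at its core, a composition of rigid transforms. A minimal sketch with purely hypothetical poses and frame names, not taken from any cited reference:

```python
import numpy as np

def rigid_transform(yaw_rad: float, translation) -> np.ndarray:
    """4x4 homogeneous transform: rotation about z followed by a translation."""
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
    T[:3, 3] = translation
    return T

# Pose of the marker in the robot frame, as would be estimated from the
# marker's apparent distortion in the camera image (hypothetical values).
T_robot_marker = rigid_transform(np.deg2rad(30), [2.0, 0.5, 0.0])

# Stored "reference position": pose of the machine element in the marker frame.
T_marker_element = rigid_transform(0.0, [0.0, -0.3, 1.1])

# Element pose in the robot frame = composition of the two transforms; this is
# the "interaction machine position" the controller would act on.
T_robot_element = T_robot_marker @ T_marker_element
position = T_robot_element[:3, 3]
```

The same composition extends to any number of intermediate frames, which is why storing the marker-to-element offset once suffices to localize every element the marker references.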
Even though Mason teaches detecting a surrounding area image of an area, detecting an image detail that shows the reference marking, and determining a distortion of the predetermined reference marking, Mason fails to disclose detecting a surrounding area image of an area…with machine learning methods, detecting an image detail that shows the reference marking…using machine learning methods, and determining a distortion of the predetermined reference marking…using machine learning methods.

However, Himane teaches detecting a surrounding area image of an area…with machine learning methods (See at least Himane Column 12 line 66-Column 13 line 20 and Figure 5; the machine learning method is used to detect a surrounding area of images of the physical environment), detecting an image detail that shows the reference marking…using machine learning methods (See at least Himane Column 11 lines 3-25, Column 13 lines 4-38, and Figure 5; the machine learning method is used to detect details that show the reference marking/visual marker), and determining a distortion of the predetermined reference marking…using machine learning methods (See at least Himane Column 13 lines 4-63; the machine learning method is used to rectify the visual marker and perform disparity-based estimation, which is interpreted as determining a distortion of the reference marking).

It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the teachings disclosed in Mason with Himane to use machine learning methods to detect a surrounding area image of an area, detect an image detail that shows the reference marking, and determine a distortion of the predetermined reference marking. Himane teaches that machine learning methods are used for image analysis to detect and rectify visual markers in order to determine the position and orientation relative to the visual marker (See at least Himane Column 13 lines 4-63).
Therefore, one of ordinary skill in the art would be motivated to use machine learning methods to identify the image detail/marking since machine learning methods are commonly used for detecting and identifying image details/markers.

Modified Mason fails to disclose "adapt one or more algorithms utilized in the machine learning methods based on performance of the robot device in the closed-loop control and/or open-loop control of the predetermined interaction with the at least one element of the interaction machine in the interaction machine position." However, Taylor teaches adapting one or more algorithms utilized in the machine learning methods based on performance of the robot device in the closed-loop control and/or open-loop control of the predetermined interaction with the at least one element of the interaction machine in the interaction machine position (See at least Taylor Paragraphs 0091-0093; the robot assembly's software, which is interpreted as algorithms utilized in machine learning methods, is modified using artificial intelligence based on performance of the robot device during closed-loop control using sensor feedback of the predetermined interaction with the element of the interaction machine/vehicle in the interaction machine/vehicle position).

It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the teachings disclosed in modified Mason with Taylor to adapt one or more algorithms utilized in the machine learning methods based on performance of the robot device in the closed-loop control of the predetermined interaction with the at least one element of the interaction machine. This modification, as taught by Taylor, would allow the robot to modify its future performance of tasks based on previous performance of the same or similar tasks by the same robot (See at least Taylor Paragraph 0093), which would improve the performance of the robot.
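Taylor is cited only at the level of "software modified based on performance." As a purely illustrative sketch of that general idea (not Taylor's actual system), a marker-detection confidence threshold could be tightened after failed interactions and relaxed after successes:

```python
class AdaptiveDetector:
    """Toy performance-driven adaptation: a detection-confidence threshold is
    nudged by closed-loop interaction outcomes. Illustrative only; not taken
    from Taylor or any other cited reference."""

    def __init__(self, threshold: float = 0.5, step: float = 0.05):
        self.threshold = threshold
        self.step = step

    def record_outcome(self, success: bool) -> None:
        if success:
            # Successful interactions let the threshold relax toward recall.
            self.threshold = max(0.0, self.threshold - self.step / 2)
        else:
            # Failures suggest false positives: demand more confidence.
            self.threshold = min(1.0, self.threshold + self.step)

detector = AdaptiveDetector()
for outcome in (False, False, True):
    detector.record_outcome(outcome)
# After two failures and one success, the threshold has moved from 0.5 to 0.575.
```

Any real system would adapt richer parameters (network weights, gains), but the feedback loop from interaction performance back into the perception pipeline is the point being claimed.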
Regarding Claim 13, modified Mason fails to disclose the machine learning methods comprise a neural network. However, Himane teaches the machine learning methods comprise a neural network (See at least Himane Column 13 lines 4-38; the machine learning method is a neural network). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the teachings disclosed in modified Mason with Himane to have the machine learning method comprise a neural network. Neural networks, as taught by Himane, are used for image analysis to detect and rectify visual markers in order to determine the position and orientation relative to the visual marker (See at least Himane Column 13 lines 4-63). Therefore, one of ordinary skill in the art would be motivated to use a neural network as the machine learning method since neural networks are commonly used for detecting and identifying image details/markers.

Regarding Claim 14, modified Mason teaches the reference marking is a barcode and/or an area code (See at least Mason Paragraph 0109; the marking is a barcode).

Regarding Claim 15, modified Mason teaches the predetermined interaction comprises: a transfer of a target object by the robot device to the interaction machine, and/or a transfer of the target object by the interaction machine to the robot device (See at least Mason Paragraphs 0103-0104; the robot device transfers the target object to the AGV/interaction machine).

Regarding Claim 16, modified Mason teaches the predetermined interaction comprises driving the robot device onto and/or into the interaction machine (See at least Mason Paragraphs 0035, 0072-0073 and Figures 1B, 3E, and 4; the robot device "318" is driven into truck "320", which is interpreted as the interaction machine).

Regarding Claim 17, modified Mason teaches the robot device is configured as a forklift truck (See at least Mason Paragraph 0064 and Figure 2D).
Regarding Claim 18, modified Mason teaches the robot device is configured as a gripper robot or crane (See at least Mason Paragraph 0056 and Figure 2A; the robot device is a gripper robot).

Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Mason in view of Himane and Taylor, and in further view of Satat et al. (US 20220168909 A1) (hereinafter referred to as Satat).

Regarding Claim 19, modified Mason fails to disclose the optical detection device comprises two cameras configured to: generate the surrounding area image of the area surrounding the robot device from at least two partial images from the respective cameras, and record the partial images from different perspectives. However, Satat teaches the optical detection device comprises two cameras configured to: generate the surrounding area image of the area surrounding the robot device from at least two partial images from the respective cameras (See at least Satat Paragraphs 0003 and 0105-0106; the robot has two cameras that generate the surrounding area image by combining the two partial images), and record the partial images from different perspectives (See at least Satat Paragraphs 0079-0080 and Figure 5; the two cameras have different perspectives/fields of view).

It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the teachings disclosed in modified Mason with Satat to record partial images of the surrounding area of the robot from different perspectives from two cameras. This modification, as taught by Satat, would allow the system to generate a 360-degree panoramic image by combining the partial images taken from different perspectives (See at least Satat Paragraphs 0003 and 0105-0106), which would increase the field of view of the robot.
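Satat is relied on for combining two partial images from differently oriented cameras into one surrounding-area image. A minimal stand-in for that stitching step, assuming a known horizontal overlap between the two fields of view (a real system would register the images rather than assume the overlap; all values here are hypothetical):

```python
import numpy as np

def combine_partial_images(left: np.ndarray, right: np.ndarray, overlap: int) -> np.ndarray:
    """Combine two grayscale partial images whose fields of view share
    `overlap` columns, averaging the shared band (a toy stitch)."""
    h, w_left = left.shape
    out = np.zeros((h, w_left + right.shape[1] - overlap))
    out[:, :w_left] = left
    out[:, w_left:] = right[:, overlap:]
    # Blend the overlapping band instead of letting one camera win outright.
    out[:, w_left - overlap:w_left] = (left[:, -overlap:] + right[:, :overlap]) / 2
    return out

# Hypothetical flat partial images from two cameras with a 2-column overlap.
left_view = np.full((4, 6), 10.0)
right_view = np.full((4, 6), 20.0)
panorama = combine_partial_images(left_view, right_view, overlap=2)  # shape (4, 10)
```

The combined image is wider than either partial image, which is the field-of-view benefit the motivation-to-combine paragraph relies on.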
Response to Arguments

Applicant's arguments with respect to claims 11 and 20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Applicant has amended the independent claims to include the limitation "adapt one or more algorithms utilized in the machine learning methods based on performance of the robot device in the closed-loop control and/or open-loop control of the predetermined interaction with the at least one element of the interaction machine in the interaction machine position." This limitation is taught by the newly added reference, Taylor, which teaches a robot that performs an interaction with an element of an interaction machine. The robot uses its camera to detect a reference marking and determines the relative position of the interaction machine based on the detected reference marking. The robot performs closed-loop control by using sensor feedback for performing a predetermined interaction with the element of the interaction machine at the interaction machine position. The software/algorithm is then modified by artificial intelligence using a machine learning method based on the performance of the interaction. Therefore, the claims still stand rejected under 35 U.S.C. 103.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ESVINDER SINGH, whose telephone number is (571) 272-7875. The examiner can normally be reached Monday-Friday, 9 am-5 pm ET.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Abby Lin, can be reached at 571-270-3976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ESVINDER SINGH/
Examiner, Art Unit 3657

Prosecution Timeline

Nov 03, 2023: Application Filed
Jun 23, 2025: Non-Final Rejection (§103)
Sep 11, 2025: Response Filed
Oct 02, 2025: Final Rejection (§103)
Feb 06, 2026: Request for Continued Examination
Feb 23, 2026: Response after Non-Final Action
Mar 09, 2026: Non-Final Rejection (§103) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596372: METHOD FOR CONTROLLING MOVEMENT OF MOVING BODY AND RELATED DEVICE (granted Apr 07, 2026; 2y 5m to grant)
Patent 12583120: MANAGEMENT SERVER, REMOTE OPERATION SYSTEM, REMOTE OPERATION METHOD, AND STORAGE MEDIUM (granted Mar 24, 2026; 2y 5m to grant)
Patent 12583121: CALIBRATION APPARATUS FOR CALIBRATING MECHANISM ERROR PARAMETER FOR CONTROLLING ROBOT (granted Mar 24, 2026; 2y 5m to grant)
Patent 12585278: ROBOT NAVIGATION (granted Mar 24, 2026; 2y 5m to grant)
Patent 12583118: ROBOTIC DEVICE WORKSPACE MAPPING (granted Mar 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 75%
With Interview: 99% (+23.7%)
Median Time to Grant: 2y 9m
PTA Risk: High
Based on 195 resolved cases by this examiner. Grant probability derived from career allow rate.
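The headline projections can be reproduced from the raw counts the page cites, assuming the grant probability is simply the career allow rate and the interview figure adds the lift in percentage points (both assumptions inferred from the dashboard, not stated by it):

```python
# Raw counts cited for this examiner: 147 granted of 195 resolved cases.
granted, resolved = 147, 195
allow_rate = granted / resolved              # ~0.754, displayed as 75%
interview_lift = 0.237                       # the "+23.7%" lift, read as points
with_interview = allow_rate + interview_lift # ~0.991, displayed as 99%

print(f"grant probability: {allow_rate:.1%}")
print(f"with interview:    {with_interview:.1%}")
```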
