Prosecution Insights
Last updated: April 19, 2026
Application No. 18/872,077

A METHOD OF REAL-TIME CONTROLLING A REMOTE DEVICE, AND TRAINING A LEARNING ALGORITHM

Status: Non-Final OA (§102)
Filed: Dec 05, 2024
Examiner: PECHE, JORGE O
Art Unit: 3656
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Look-E B.V.
OA Round: 1 (Non-Final)

Grant Probability: 80% (Favorable)
Expected OA Rounds: 1-2
Median Time to Grant: 3y 0m
With Interview: 97%

Examiner Intelligence

Career Allow Rate: 80% (469 granted / 583 resolved), +28.4% vs TC avg (above average)
Interview Lift: +17.0% (strong), comparing resolved cases with vs. without an interview
Typical Timeline: 3y 0m average prosecution; 28 applications currently pending
Career History: 611 total applications across all art units
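
As a sanity check, the headline numbers reduce to simple arithmetic. A minimal Python sketch, assuming the interview lift is an absolute bump added to the career allow rate (the dashboard does not document its exact formula):

```python
# Minimal sketch of how the headline figures appear to relate.
# The formulas are assumptions read off the displayed numbers,
# not the dashboard's actual computation.

granted = 469    # career grants (from the dashboard above)
resolved = 583   # career resolved cases

allow_rate = granted / resolved                 # ~0.804, displayed as 80%
print(f"Career allow rate: {allow_rate:.1%}")   # -> 80.4%

# The interview lift is reported as an absolute bump of +17.0 points:
interview_lift = 0.17
with_interview = min(allow_rate + interview_lift, 1.0)
print(f"With interview: {with_interview:.0%}")  # -> 97%
```

Under that reading, 80.4% plus 17.0 points lands on the 97% "with interview" figure shown above and in the projections below.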

Statute-Specific Performance

§101: 7.6% (-32.4% vs TC avg)
§102: 22.1% (-17.9% vs TC avg)
§103: 42.5% (+2.5% vs TC avg)
§112: 21.9% (-18.1% vs TC avg)

Tech Center averages are estimates • Based on career data from 583 resolved cases
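
The "vs TC avg" deltas can be inverted to recover the comparison baseline. A short sketch, assuming delta = examiner rate minus Tech Center average (an assumption; the dashboard shows only the deltas):

```python
# Sketch: recovering the implied Tech Center averages from the examiner's
# statute-specific rates and the reported deltas. Assumes
# delta = examiner_rate - tc_average, consistent with the displayed figures.

examiner_rate = {"§101": 7.6, "§102": 22.1, "§103": 42.5, "§112": 21.9}
delta_vs_tc   = {"§101": -32.4, "§102": -17.9, "§103": 2.5, "§112": -18.1}

for statute, rate in examiner_rate.items():
    tc_average = rate - delta_vs_tc[statute]
    print(f"{statute}: examiner {rate:.1f}% vs TC avg {tc_average:.1f}%")

# Every statute implies the same 40.0% baseline, suggesting the dashboard
# compares against a single estimated Tech Center average.
```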

Office Action (§102)

DETAILED ACTION

Drawings

Figures 1A and 1B are objected to under 37 CFR 1.83(a) because they fail to show corresponding names for drawing structures/devices and/or names for drawing steps/methods as described in the specification. The cited drawings are a series of indistinct blank boxes devoid of labels; such labels would facilitate an understanding of the invention without undue searching of the specification. The present drawings do not immediately convey any information and should be amended so that one looking at the drawings may quickly determine what elements they are looking at. Any structural detail that is essential for a proper understanding of the disclosed invention should be shown in the drawing. MPEP § 608.02(d).

Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as "amended." If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either "Replacement Sheet" or "New Sheet" pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

Claim Objections

Claim 9 is objected to because of a typographical error within the limitation "… the r first emote device …". Appropriate correction is required.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Yuan et al. (Pub. No.: US 20210053229 A1).

Regarding claim 1, Yuan et al. disclose a method for coordinating agricultural robots to perform agricultural task(s), the method comprising: obtaining graphical data of surroundings of the first remote device (e.g., obtaining 2D/3D vision data from a robot camera/vision sensor 209 (par. 5, 35, 39, 42 and Figures 1-2)); sending the graphical data to a remote operation device (e.g., the agriculture task system/vision data analysis engine 112 receives 2D/3D vision data from the robot camera/vision sensor 209 (par. 35), which covers sending 2D/3D vision data by the robot); obtaining user input data from an operator, which user input data is indicative of a location of interest in the graphical data (e.g., a human worker manually enters input observations about plant(s) into a database (par. 35), which covers the plant locations); generating a control signal for controlling the first remote device to perform the task based on the user input data (e.g., generating an agriculture task (par. 39) to be performed by the robot 108 (par. 37) based on human/agricultural personnel input (par. 35 and 45)); and using the control signal for controlling the first remote device to perform the task at or near the location of interest (e.g., the robot is configured to perform the generated agriculture task based on designated plant(s) and coordination (par. 35 and 46) and other operator input (par. 71)); wherein the user input data is further used as training data for training a machine learning algorithm (e.g., training a machine learning model/convolutional neural network to predict a measure of robot performability associated with an agricultural task (par. 47 and 69)), which algorithm is arranged for one or more of: generating at least part of a second control signal for controlling the first remote device (e.g., if it is determined that the robot is capable of performing the task automatically, the robot is permitted to perform the task itself (par. 47)).

Regarding claim 2, Yuan et al. disclose a method for coordinating agricultural robots, wherein the first remote device is positioned on a volume of sand (e.g., Figure 2 shows a robot traveling on ground with multiple plants (par. 51 and Figure 2), which covers the ground having a volume of sand).

Regarding claim 3, Yuan et al. disclose a method for coordinating agricultural robots, wherein the first remote device is a weeding robot (e.g., the robot) and wherein the task comprises a task of damaging, destroying or removing a weed (e.g., the robot is configured to remove weeds (par. 47)).

Regarding claim 4, Yuan et al. disclose a method for coordinating agricultural robots, wherein the first remote device is a garbage robot or a litter removal robot and wherein the task comprises a task of removing garbage (e.g., the robot is configured to remove weeds (par. 20 and 47), wherein the weed is considered garbage on a farmland).

Regarding claim 5, Yuan et al. disclose a method for coordinating agricultural robots, wherein the machine learning algorithm is trained in real time using the user input data provided by the operator for real-time controlling of the first remote device (e.g., training a machine learning model to predict a measure of robot performability associated with an agricultural task (par. 69-70) and allowing a robot to perform the task itself (par. 47)).

Regarding claim 6, the claim limitations recite features in an alternative form of rejected claim 1; therefore, Yuan et al.'s invention still reads on the claimed alternative form.

Regarding claims 7-8, the claim limitations recite features in an alternative form of rejected claim 1; therefore, Yuan et al.'s invention still reads on the claimed alternative form.

Regarding claim 9, Yuan et al. disclose a method for coordinating agricultural robots, wherein the remote operation device is positioned at a distance from the first remote device, wherein the [[r]] first [r]emote device is out of sight from the remote operation device (e.g., Figure 1 shows the agriculture task system/vision data analysis engine 112 at a remote location away from the robot (Figure 1 and related disclosure)).

Regarding claim 10, Yuan et al. disclose a method for coordinating agricultural robots, wherein the user input data is transmitted to the first remote device, and the control signal is generated by the first remote device (e.g., a human operator controls the robot to perform an agricultural task via a client device (par. 47 and Figure 1)).

Regarding claim 11, Yuan et al. disclose a method for coordinating agricultural robots, wherein the second control signal is generated by the remote operation device, and the control signal is transmitted to the first remote device (e.g., if it is determined by the agriculture task system/vision data analysis engine 112 that the robot is capable of performing the task automatically, the robot is permitted to perform the task itself (par. 39 and 47)).

Regarding claim 12, Yuan et al. disclose a method for coordinating agricultural robots, further comprising: obtaining additional graphical data on the location of interest after controlling the first remote device to perform the task at or near the location of interest (e.g., as the robot automatically performs the agricultural task, an image of the task performed on the plant is obtained for the robot performability score (par. 70)); and using the additional graphical data as training data for training the machine learning algorithm (e.g., the training example is used to label the robot performability score (par. 70)).

Regarding claim 13, Yuan et al. disclose a method for coordinating agricultural robots, further comprising: providing the additional graphical data to the operator (e.g., an image of the task performed on the plant is provided to the human operator (par. 70)); obtaining additional user input data from the operator indicative of an evaluation of the task performed at the location of interest (e.g., providing a low or high robot performability score based on the plant depicted in a training example (par. 70)); and using the additional user input data as training data for training the machine learning algorithm (e.g., using the high performability score as a positive training example for the robot (par. 70)).

Regarding claim 14, the claim limitations recite features in an alternative form of rejected claim 1; therefore, Yuan et al.'s invention still reads on the claimed alternative form.

Regarding claim 15, Yuan et al. disclose a method for coordinating agricultural robots, wherein second graphical data of surroundings of a second remote device is provided to the operator (e.g., obtaining 2D/3D vision data from a second deployed robot camera/vision sensor 209 (par. 5, 35, 39, 42 and Figures 1-2) from a plurality of robots (par. 5 and 21)), second user input data is obtained from the operator indicative of locations of interest in the second graphical data of the second remote device (e.g., a human worker manually enters input observations about plant(s) into a database (par. 35) for the second deployed robot (par. 5 and 21), which covers the plant locations), a plurality of additional control signals are generated for controlling the second remote device (e.g., generating agriculture tasks (par. 39) to be performed by the second robot 108 (par. 37)), and the second user input data is further used as second training data for training the machine learning algorithm (e.g., training a machine learning model/convolutional neural network to predict a measure of robot performability associated with an agricultural task (par. 47 and 69)).

Regarding claim 16, Yuan et al. disclose a method for coordinating agricultural robots, wherein the algorithm is arranged for one or more of: generating at least a part of a third control signal for controlling a second remote device (e.g., deploying a second robot from a plurality of robots to perform a particular agricultural task (par. 5 and 21) using a machine learning model/convolutional neural network (par. 47 and 69)).

Regarding claim 17, Yuan et al. disclose a method for coordinating agricultural robots, wherein the location of interest represents a single location (e.g., the location of a plant for a robot to perform an agriculture task (par. 5 and 21-22) or distances between candidate agricultural tasks' targeted plants (par. 49)).

Regarding claim 18, Yuan et al. disclose a method for coordinating agricultural robots, wherein the location of interest represents one or more of an area (e.g., Figure 3C shows an area of visual data related to a plant for performing an agricultural task (par. 59-60 and Figure 3C)).

Regarding claim 19, Yuan et al. disclose a method for coordinating agricultural robots, further comprising: obtaining, based on the user input data indicative of the location of interest (e.g., a human worker manually enters input observations about plant(s) into a database (par. 35), which covers the plant locations), further graphical data of the location of interest (e.g., the vision data analysis engine 112 applies annotations to vision data to aid in manually controlling the robot for performing an agricultural task); and storing the further graphical data (e.g., the annotation data is stored on one or more databases 118 (par. 33 and 38)).

Regarding claim 20, the claim limitations recite features in an alternative form of rejected claims 1 and 6; therefore, Yuan et al.'s invention still reads on the claimed alternative form.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Jorge O. Peche, whose telephone number is (571) 270-1339. The examiner can normally be reached Monday-Friday, 8:30 AM - 5:30 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Khoi H. Tran, can be reached at (571) 272-6919. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Jorge O Peche/
Examiner, Art Unit 3656
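
The claim 1 mapping above walks through a control-and-training loop: capture imagery from the remote device, collect an operator-marked location of interest, drive the device to perform the task there, and reuse the operator input as a training example. A toy Python sketch of that data flow, with every class and method name invented here for illustration (this is one reading of the claim language, not Yuan's system or the applicant's implementation):

```python
# Toy sketch of the claim 1 data flow. All names are hypothetical stubs;
# the point is the loop: image -> operator input -> control -> training data.

from dataclasses import dataclass

@dataclass
class TrainingExample:
    image: str                 # stand-in for graphical data of the surroundings
    location: tuple[int, int]  # operator-marked location of interest

class ToyDevice:
    def capture(self) -> str:
        return "frame-001"                      # pretend camera frame
    def perform_task_at(self, loc: tuple[int, int]) -> None:
        print(f"performing task at {loc}")      # stand-in for the control signal

class ToyOperator:
    def mark(self, image: str) -> tuple[int, int]:
        return (12, 34)                         # pretend the operator clicked here

training_data: list[TrainingExample] = []

def control_step(device: ToyDevice, operator: ToyOperator) -> None:
    image = device.capture()                    # obtain graphical data
    loc = operator.mark(image)                  # user input: location of interest
    device.perform_task_at(loc)                 # control the device at that location
    training_data.append(TrainingExample(image, loc))  # reuse input as training data

control_step(ToyDevice(), ToyOperator())
print(f"{len(training_data)} training example(s) collected")
```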

Prosecution Timeline

Dec 05, 2024 · Application Filed
Mar 21, 2026 · Non-Final Rejection — §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592158 · System for Using a Space Integration Sequencer System · granted Mar 31, 2026 (2y 5m to grant)
Patent 12583444 · PARKING SUPPORT METHOD, PARKING SUPPORT APPARATUS, AND COMPUTER-READABLE RECORDING MEDIUM · granted Mar 24, 2026 (2y 5m to grant)
Patent 12570255 · VEHICLE AND A METHOD FOR CONTROLLING THEREOF · granted Mar 10, 2026 (2y 5m to grant)
Patent 12565231 · AUTONOMOUS DRIVING CONTROL APPARATUS AND METHOD · granted Mar 03, 2026 (2y 5m to grant)
Patent 12555484 · SYSTEMS AND METHODS FOR DIMINISHING VEHICLE CONTRAILS · granted Feb 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 80%
With Interview: 97% (+17.0%)
Median Time to Grant: 3y 0m
PTA Risk: Low

Based on 583 resolved cases by this examiner. Grant probability is derived from the career allow rate.
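
To turn the median pendency into a calendar estimate, a small sketch using the filing date from the timeline above (add_months is a hypothetical helper written here for illustration, not part of any real scheduling tool):

```python
# Sketch: projecting a grant date from the filing date and the examiner's
# median pendency ("3y 0m"). Day-of-month is clamped to 28 for simplicity.

from datetime import date

def add_months(d: date, months: int) -> date:
    """Add whole months to a date, clamping the day to 28 to stay valid."""
    y, m = divmod(d.month - 1 + months, 12)
    return d.replace(year=d.year + y, month=m + 1, day=min(d.day, 28))

filed = date(2024, 12, 5)   # filing date from the timeline above
median_months = 36          # "3y 0m" median time to grant

print(f"Projected grant: {add_months(filed, median_months):%b %d, %Y}")
# -> Dec 05, 2027, if this case tracks the examiner's median pendency
```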
