Prosecution Insights
Last updated: April 19, 2026
Application No. 18/920,347

DATA PROCESSING METHOD AND APPARATUS, DEVICE, AND COMPUTER MEDIUM

Non-Final OA — §102, §103, §112
Filed: Oct 18, 2024
Examiner: EVANS, KARSTON G
Art Unit: 3657
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: BEIJING YOUZHUJU NETWORK TECHNOLOGY CO., LTD.
OA Round: 1 (Non-Final)
Grant Probability: 70% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 10m
With Interview: 91%

Examiner Intelligence

Career Allow Rate: 70% — above average (100 granted / 143 resolved; +17.9% vs TC avg)
Interview Lift: +21.3% — strong (allowance among resolved cases with an interview vs. without)
Typical Timeline: 2y 10m average prosecution; 31 applications currently pending
Career History: 174 total applications across all art units

Statute-Specific Performance

§101: 9.8% (-30.2% vs TC avg)
§103: 48.4% (+8.4% vs TC avg)
§102: 13.8% (-26.2% vs TC avg)
§112: 21.2% (-18.8% vs TC avg)
Comparisons are against the Tech Center average estimate • Based on career data from 143 resolved cases

Office Action

§102 §103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Specification

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed. The examiner suggests amending to a new title indicative of the invention for controlling a robot to grab an object.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 5-8, 14-15, and 20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claims 5-7, 14-15, and 20 recite a repeated instance of “a first preset task.” It is unclear whether this instance of “a first preset task” is the same task as the “first preset task” described in the respective parent claims (Claims 3, 12, and 18), “wherein the first preset task is a task of determining, based on the first interactive content, whether the object to be grabbed can be specified in the environment image.” For examination purposes, the examiner interprets claims 5-7, 14-15, and 20 as reciting “[[a]] the first preset task.”

Claim 8 recites “the mask and depth image information.” There is insufficient antecedent basis for ‘the’ depth image information in the claim. It is unclear what the depth image information is referencing because there is no previous mention in the claim. For examination purposes, it is interpreted that the method additionally comprises obtaining a depth image or depth image information corresponding to the environment image.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-2, 10-11, and 16-17 are rejected under 35 U.S.C. 102(a)(1) and (a)(2) as being anticipated by Francis, Jr. (US 8452451 B1).

Regarding Claims 1, 10, and 16, Francis, Jr. teaches:

(Claim 1) A data processing method, the method comprising: (“a method and system for using a robot command language is provided. The command language may form a basis for human (user) and robot (agent) communication through natural language. In some examples, a robot may be configured to convert a short-form command given by the user in terms of an action-and-object (e.g., verb-and-noun) into a series of higher-level and lower-level functions for the robot to evaluate and potentially perform.” See at least col. 3, line 66 through col. 4, line 6)

(Claim 10) An electronic device, comprising: a processor; and a memory, configured to store executable instructions of the processor, wherein the processor is configured to execute the following operations by executing the executable instructions: (“a robotic device is provided that comprises one or more processors, at least one sensor coupled to the one or more processors and configured to capture data about an environment in a vicinity of the at least one sensor, and memory including instructions stored thereon executable by the one or more processors to perform functions.” See at least col. 2, lines 19-24)

(Claim 16) A non-transitory computer-readable storage medium, having a computer program stored thereon, wherein when the computer program is executed by a processor, the following operations are implemented: (“Any of the systems and methods described herein may be provided in a form of instructions stored on a non-transitory, computer readable medium, that when executed by a computing device, cause the computing device to perform functions of the method.” See at least col. 2, lines 34-38)

obtaining an instruction from a user and an environment image; (“the method 500 includes capture an image of an object. In an example, a robot may capture many images of objects using any number of sensors, such as a camera (still pictures or video feeds).” See at least col. 11, lines 21-24; “At block 600, a command may be received that is in a short-form of an action descriptor and a target or object (of the action), such as "GET BOOK.”” See at least col. 12, lines 24-26)

determining, based on the instruction, the environment image and a preset target interaction model, an object to be grabbed in the environment image that corresponds to the instruction; (“the lower level classification reasoning may include obtaining relational information to the object classification of "BOOK." At block 610, this may include sensory input from an environment of the robot as to what "book" is involved (i.e., using the object recognition method 500 of FIG. 5). This lower level classification may thus further include a determination of which of several "books" may be actually specified or intended. Here again, environmental and user data may be accessed for the determination of which "book"; for instance, the user may be pointing to a book. The robot may thus use this sensory input to conclude that the "book" sensed is intended to be the subject of "GET."” See at least col. 13, lines 17-28)

and controlling a grabbing apparatus to grab a target item corresponding to the object to be grabbed. (“A task sequencer program shown at block 614 may then be engaged to execute an output function for performing the subsumed task steps sequentially of getting the selected book. In some examples, lower level of task sequences associated with executing each of a number of possible functions for the command may be determined, and the robot can select a function to perform based on which lower level tasks may be performed in the environment of the robotic device.” See at least col. 14, lines 33-40; “there may be a series of instructions for the robot to (1) travel from its current location to a bookshelf 702, (2) identify a requested book, (3) grasp and remove a particular book 706 from the shelf, (4) travel from the bookshelf 702 to a location of the user 708 who asked the robot 700 to "GET BOOK" and (5) present the book 706 to the user 708.” See at least col. 15, lines 40-46)

Regarding Claims 2, 11, and 17, Francis, Jr. further teaches wherein controlling the grabbing apparatus to grab the target item corresponding to the object to be grabbed comprises: controlling the grabbing apparatus to move to a target position where the target item corresponding to the object to be grabbed is located; and controlling the grabbing apparatus to grab the target item at the target position. (“there may be a series of instructions for the robot to (1) travel from its current location to a bookshelf 702, (2) identify a requested book, (3) grasp and remove a particular book 706 from the shelf, (4) travel from the bookshelf 702 to a location of the user 708 who asked the robot 700 to "GET BOOK" and (5) present the book 706 to the user 708.” See at least col. 15, lines 40-46)

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 3, 5-6, 12, 14-15, 18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Francis, Jr. (US 8452451 B1) in view of Quinlan (US 10427306 B1).

Regarding Claims 3, 12, and 18, Francis, Jr. does not explicitly teach, but Quinlan teaches wherein the instruction from the user is first interactive content of the user, and the determining, based on the instruction, the environment image and a preset target interaction model, an object to be grabbed in the environment image that corresponds to the instruction comprises: analyzing a first preset task, the first interactive content and the environment image by using the target interaction model, to obtain a task execution result corresponding to the first preset task, wherein the first preset task is a task of determining, based on the first interactive content, whether the object to be grabbed can be specified in the environment image; (“Based at least on the searching the map data for the object referenced in the command, the system can determine that the object referenced in the command is present in the spatial region indicated by the gesture (312). For example, the system can determine that the object is included among the objects having locations within the spatial region, based on searching the objects having locations within the spatial region for the object referenced by the command.” See at least col. 19, line 62 through col. 20, line 2)

and determining, based on the task execution result, the object to be grabbed in the environment image that corresponds to the instruction. (“In response to determining that the object referenced in the command is present in the spatial region indicated by the gesture, the system controls the robot to perform an action with respect to the object referenced in the command (314). For example, in addition to identifying an object referenced by the command, the system can also determine an action referenced by the command. In addition to determining that the object referenced in the command is present in the spatial region, the system can determine a precise location of the object within the spatial region.” See at least col. 20, lines 13-22)

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the teachings of Francis, Jr. to further include the teachings of Quinlan with a reasonable expectation of success “to use multimodal object identification to improve its object-locating capabilities.” (See at least col. 4, lines 59-60)

Regarding Claims 5, 14, and 20, Francis, Jr. does not explicitly teach, but Quinlan teaches wherein the method further comprises: in response to the task execution result being no, obtaining a third preset task, wherein a task type of the third preset task is a task of asking a corresponding question based on the first interactive content; inputting the third preset task, the first interactive content and the environment image into the target interaction model, to obtain corresponding question content; asking the user a question based on the question content; (“In response to the mapping engine 250 failing to identify an object referenced by the command, the mapping engine 250 may cause the robotics system to output data indicating that the robotics system has failed to identify an object referenced by the command. For example, the mapping engine 250 may cause the robotics system to output an error alert, e.g., a textual or audible message, indicating that the robotics system has failed to identify an object referenced by the command. The error alert may prompt the user 202 to provide their command again, or to provide a different command.” See at least col. 16, lines 7-18)

obtaining second interactive content replied by the user (“receiving a second command for controlling the robot” See at least col. 3, lines 27-58) and using the second interactive content (“receiving second sensor data for a portion of the environment of the robot, the second sensor data being captured by the sensor of the robot, identifying, from the second sensor data, a second gesture of a human that indicates a second spatial region located outside of the portion of the environment described by the second sensor data, searching the map data for the second object referenced in the second command, … determining, based at least on searching the map data for the second object referenced in the second command, that the second object referenced in the second command is present in the third spatial region, and in response to determining that the second object referenced in the second command is present in the third spatial region, controlling the robot to perform a second action with respect to the second object referenced in the second command; the third spatial region is larger than the second spatial region indicated by the second gesture.” See at least col. 3, lines 27-58)

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the teachings of modified Francis, Jr. to further include the teachings of Quinlan with a reasonable expectation of success “to use multimodal object identification to improve its object-locating capabilities.” (See at least col. 4, lines 59-60)

Though Quinlan does not specifically recite obtaining second interactive content replied by the user based on the question content; and using the second interactive content and the question content as new first interactive content, Quinlan teaches indicating that the robot system failed to identify an object referenced by the command and prompting the user to provide a different command (col. 16, lines 7-18). Though Quinlan’s description does not follow through with obtaining the user’s response to the prompt and performing the troubleshooting, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the teachings of modified Francis, Jr. and Quinlan to actually obtain the different command in response to the different command prompt (question) and use that as the new interactive content to improve the robot’s capability to solve challenging tasks by following through and facilitating retrying to successfully execute the first failed task (e.g., finding and grabbing the object that was originally failed to be identified) using more information provided by the user.

Regarding Claims 6 and 15, Francis, Jr. further teaches wherein the method further comprises: detecting, based on a preset detection model, whether the environment image contains an alternative object that is of a same category as an object corresponding to the first interactive content, and in response to yes, determining to perform the operation of analyzing a first preset task, the first interactive content and the environment image by using the target interaction model, to obtain a task execution result corresponding to the first preset task. (“the lower level classification reasoning may include obtaining relational information to the object classification of "BOOK." At block 610, this may include sensory input from an environment of the robot as to what "book" is involved (i.e., using the object recognition method 500 of FIG. 5). This lower level classification may thus further include a determination of which of several "books" may be actually specified or intended. Here again, environmental and user data may be accessed for the determination of which "book"; for instance, the user may be pointing to a book. The robot may thus use this sensory input to conclude that the "book" sensed is intended to be the subject of "GET." Once the lower level classification is determined (what "BOOK"), and relational information on "GET" has been determined, the command is then subject to analysis using an action interpreter, as at block 612. Action interpretation may be resident on the robot or on a cloud server, and may be executed to formulate an appropriate, or most likely, action to execute.” See at least col. 13, lines 17-35)

Claims 4, 13, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Francis, Jr. (US 8452451 B1) in view of Quinlan (US 10427306 B1) and Tsukui (US 20200368923 A1).

Regarding Claims 4, 13, and 19, modified Francis, Jr. does not explicitly teach, but Tsukui teaches wherein the determining, based on the task execution result, the object to be grabbed in the environment image that corresponds to the instruction comprises: in response to the task execution result being yes, obtaining a second preset task, wherein a task type of the second preset task is a task of specifying, based on the first interactive content, first region information of an image region occupied by the object to be grabbed in the environment image; inputting the second preset task, the first interactive content and the environment image into the target interaction model, to obtain the first region information; (“when the image analyzing portion 40 determines that the workpieces 80 are present in the workpiece storage area WS (inside the workpiece storage box 90), the process proceeds to step S3.” See at least [0070] and fig. 3, wherein the task execution result is yes when the workpiece(s) are present.; “the process proceeds to step S10 (a third image capturing step). Here, the image analyzing portion 40 causes the 2D vision sensor 30 to capture again an image of the workpiece storage area WS from an angle different from an angle (an angle to capture the image of the workpiece storage area WS) at which the two-dimensional image VD as grounds for the determination is captured.” See at least [0075], wherein capturing the image of the workpiece storage area again is equivalent to obtaining the first region information.)

and using an object in the first region information as the object to be grabbed in the environment image that corresponds to the instruction. (“When the image analyzing portion 40 determines, in step S6, that the cable 85 and the connectors 81, 82 are recognizable in terms of the workpiece 80 including the uppermost cable 85T (YES), the image analyzing portion 40 determines the workpiece 80 including the uppermost cable 85T as the uppermost workpiece 80T in step S7 (an uppermost workpiece determination step).” See at least [0076]; “the cycle process from steps S1 to S15 was performed until the uppermost workpiece 80T was identified and gripped (that is, until the uppermost workpiece 80T was determined in step S7, and the uppermost workpiece 80T was gripped in step S8).” See at least [0095])

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the teachings of modified Francis, Jr. to further include the teachings of Tsukui with a reasonable expectation of success to facilitate grasping a workpiece in a cluttered area. (See at least [0006])

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Francis, Jr. (US 8452451 B1) in view of Quinlan (US 10427306 B1) and Handa (US 20210339392 A1).

Regarding Claim 7, modified Francis, Jr. does not explicitly teach, but Handa teaches wherein the method further comprises: performing legality detection on the first interactive content, and in response to it being determined that the first interactive content is legal, determining to perform the operation of analyzing a first preset task, the first interactive content and the environment image by using the target interaction model, to obtain a task execution result corresponding to the first preset task. (“The pre-evaluation unit 221 is configured to evaluate the pre-suitability of the operation instruction before controlling the robot based on the operation instruction. … The pre-evaluation unit 221 may further be configured to evaluate the pre-suitability based on a suitability of the work object for a type of the automated work. … the support availability determination unit 222 determines that the support information can be adopted when the pre-suitability of the support information exceeds a predetermined adoption threshold, and determines that the support information cannot be adopted when the pre-suitability of the support information is below the adoption threshold.” See at least [0045-0046], wherein the content is determined to be legal when the pre-suitability of the support information exceeds a predetermined adoption threshold.)

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the teachings of modified Francis, Jr. to further include the teachings of Handa with a reasonable expectation of success to improve the reliability of the robot work by only accepting suitable instructions. (See at least [0078])

Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Francis, Jr. (US 8452451 B1) in view of Quinlan (US 10427306 B1), Tsukui (US 20200368923 A1), and Stoppi (US 20220410381 A1).

Regarding Claim 8, modified Francis, Jr. does not explicitly teach, but Stoppi teaches wherein the method further comprises: inputting the environment image and the first region information into a preset segmentation model, to obtain a mask corresponding to the environment image; and inputting the mask and depth image information corresponding to the environment image into a preset grabbing model, to obtain the target position. (“using the one or more images to compute an instance segmentation mask (or instance segmentation map) of the objects in the scene. The object pick estimation system may also estimate a depth map (e.g., based on capturing stereo images of a scene and computing depth from stereo). The instance segmentation mask identifies one or more instances of objects that appear in the images. The object pick estimation system computes pickability scores for the detected objects (using the instance segmentation mask and the depth map, if available) and an object is selected based on the pickability score. The system then computes a picking plan (e.g., coordinates of surfaces of the object that can be grasp and a direction along which to approach the object) for the robotic arm to pick the selected object. This picking plan may then be supplied to a robotic controller, which computes a motion plan to guide the robotic arm to this position and to pick up the object.” See at least [0059])

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the teachings of modified Francis, Jr. to further include the teachings of Stoppi with a reasonable expectation of success to improve target object recognition for robot picking when the objects are in a disorganized environment. (See at least [0002-0004] and [0058-0059])

Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Francis, Jr. (US 8452451 B1) in view of Ito (US 20240367314 A1).

Regarding Claim 9, Francis, Jr. does not explicitly teach, but Ito teaches wherein the method further comprises: obtaining a sample task; obtaining, based on a type of the sample task, corresponding sample interactive information and a sample image corresponding to the sample interactive information, wherein the sample interactive information comprises: sample interactive content and a sample task execution result; (“The instruction data accumulation unit 22 accumulates, as instruction data, the instruction information 30 input from outside of the robot system 1000. The instruction information 30 is information including a language instruction corresponding to an operation taught to the robot in a one-to-one manner. … The training data accumulation unit 23 accumulates, as the training data, information in which the robot sensor data (the sensor information) and the captured image acquired from the robot sensor data accumulation unit 21 and the instruction data (the instruction information 30) acquired from the instruction data accumulation unit 22 are associated with each other in a one-to-one manner. In the present specification, a summary of the robot sensor data, the captured image, and the instruction data is referred to as the “training data”.” See at least [0049-0050])

performing word processing on the sample task and the sample interactive content by using an initial word processing unit in an initial interaction model, to obtain a corresponding word vector sequence; (“The instruction information 30 input to the instruction learning unit 412 may be formed in various data formats. For example, in the case of text information, the instruction data accumulation unit 22 may convert the whole text into a certain vector or may divide the whole text into word units and then convert the word units into a vector to be input.” See at least [0154])

encoding the sample image by using an initial visual encoding unit in the initial interaction model, to obtain an image feature sequence corresponding to the sample image; analyzing the word vector sequence and the image feature sequence by using an initial transformation model in the initial interaction model, to obtain a corresponding task prediction result; (“the inference unit 44 uses the weighted model to identify the object to be operated by the robot 10 from the captured image of the newly imaged object, and predicts a drive command based on the newly measured measurement information. Further, the inference unit 44 predicts the intention of the instruction information 30 by applying, to the model, the newly measured measurement information (the robot sensor data), the newly imaged captured image, and the newly input instruction information 30 to the robot 10, and infers a target operation of the robot 10 and an object that is the target of the target operation. Further, the inference unit 44 infers prediction instruction information 4121 (see FIG. 3 to be described later) based on the read instruction information 30. In addition, the inference unit 44 predicts a drive command for causing the robot 10 to perform the target operation according to the intention of the predicted instruction information 30” See at least [0059-0060]; “The learning device 40 according to the first embodiment described above learns the relationship between the abstract instruction and the operation of the robot 10 for the instructed object. Therefore, even in a situation in which the instruction information 30 to the input robot is not clear and the object to be operated by the robot is unclear, the learning device 40 can predict the intention of the instruction and cause the robot 10 to perform any operation. In this way, even if the instruction is an ambiguous instruction that is difficult to recognize only by image recognition, the learning device 40 can implement the manipulation operation of the robot 10 by predicting one probable object among a plurality of objects and enabling trajectory planning for the object.” See at least [0138]; Also see at least [0068-0069])

and training the initial interaction model based on the sample task execution result and the task prediction result, to obtain the target interaction model. (“The learning unit 42 learns, using the model read from the model definition unit 41 according to the target operation of the robot 10, the measurement information (the robot sensor data) in which the operation of the robot 10 is measured, the captured image in which an object to be operated by the robot 10 is imaged, and the instruction information 30 including an instruction word in which the operation instruction to the robot 10 is verbalized. The learning unit 42 can read the model from the model definition unit 41 and switch the model to be used for learning depending on the task executed by the robot 10. The weight storage unit 43 stores a weight and a bias for the model learned by the learning unit 42. The weight is, for example, a value added to a parameter (an optimal parameter) when the learning unit 42 learns using the training data and optimizes the model.” See at least [0055-0057])

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the teachings of Francis, Jr. to further include the teachings of Ito with a reasonable expectation of success to train a machine learning model such that “even in a situation in which an instruction input to a robot is not clear and an object to be operated by the robot is unclear, it is possible to cause the robot to perform any target operation which is inferred by predicting an intention of the instruction.” (See at least [0012])

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Yamamoto (US 20210178598 A1) is pertinent because it discusses obtaining an image of the environment and a drawing interaction from a user to command the robot to grasp an object. Rajkumar (US 20200324407 A1) is pertinent because it discusses determining whether the robot can recognize the identified object in the command.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Karston G Evans whose telephone number is (571) 272-8480. The examiner can normally be reached Mon-Fri 9:00-5:00.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Abby Lin, can be reached at (571) 270-3976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KARSTON G. EVANS/
Examiner, Art Unit 3657
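Editor's note on the technology at issue: the data flow recited in independent claims 1, 10, and 16 (the flow the §102 rejection maps onto Francis, Jr.) can be pictured as the minimal sketch below. The class names and the model interface are hypothetical illustrations, not taken from the application or the cited art; the "model" is a stub so the sketch runs end to end.

# Illustrative sketch only: claimed flow of claims 1/10/16 — obtain an
# instruction and an environment image, resolve the target object with a
# preset "target interaction model", then drive the grabbing apparatus.
from dataclasses import dataclass

@dataclass
class GrabTarget:
    label: str
    position: tuple  # (x, y, z) in the robot's frame — hypothetical

def target_interaction_model(instruction: str, environment_image) -> GrabTarget:
    """Placeholder for the preset target interaction model.

    A real model would ground the instruction in the image; this stub
    returns a fixed answer so the sketch is runnable.
    """
    return GrabTarget(label=instruction.split()[-1].lower(), position=(0.4, 0.1, 0.2))

def control_grabbing_apparatus(target: GrabTarget) -> None:
    # Stand-in for the motion/grasp controller.
    print(f"moving to {target.position} and grabbing '{target.label}'")

def handle_request(instruction: str, environment_image) -> None:
    target = target_interaction_model(instruction, environment_image)  # determine the object
    control_grabbing_apparatus(target)                                 # grab the target item

handle_request("GET BOOK", environment_image=None)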
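Claim 8, rejected over Stoppi, adds a segmentation-then-grasp step: a segmentation model produces a mask from the environment image and the first region information, and the mask plus depth information feeds a grabbing model that outputs the target position. The toy version below mirrors only that data flow; both "models" are trivial stand-ins, not the claimed or cited implementations.

# Illustrative sketch only: mask + depth -> target position (claim 8 data flow).
import numpy as np

def segmentation_model(environment_image: np.ndarray, region: tuple) -> np.ndarray:
    """Stand-in segmentation: mark the rectangular first-region as the mask."""
    mask = np.zeros(environment_image.shape[:2], dtype=bool)
    r0, r1, c0, c1 = region
    mask[r0:r1, c0:c1] = True
    return mask

def grabbing_model(mask: np.ndarray, depth: np.ndarray) -> tuple:
    """Stand-in grabbing model: grasp at the masked pixel closest to the camera."""
    rows, cols = np.where(mask)
    idx = np.argmin(depth[rows, cols])
    r, c = rows[idx], cols[idx]
    return (int(r), int(c), float(depth[r, c]))  # pixel row, col, depth

image = np.zeros((120, 160, 3))
depth = np.random.default_rng(0).uniform(0.3, 1.5, size=(120, 160))
mask = segmentation_model(image, region=(40, 80, 60, 100))   # first region information
print("target position:", grabbing_model(mask, depth))       # row, col, depth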
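Claim 9, rejected over Ito, recites the training path for the target interaction model: word-process the sample task and interactive content into a word-vector sequence, encode the sample image into an image-feature sequence, fuse both with a transformation model, and train on the sample task execution result. The sketch below mirrors that structure with a toy vocabulary and shapes chosen for illustration (assuming PyTorch is available; nothing here comes from the record).

# Illustrative sketch only: training flow of claim 9 with toy dimensions.
import torch
import torch.nn as nn

VOCAB = {"<pad>": 0, "find": 1, "the": 2, "cup": 3, "can_specify": 4}
D = 32

class InteractionModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.word_embed = nn.Embedding(len(VOCAB), D)              # word processing unit
        self.visual_encoder = nn.Linear(3 * 8 * 8, D)              # toy visual encoding unit
        layer = nn.TransformerEncoderLayer(d_model=D, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=1)
        self.head = nn.Linear(D, 2)                                # task execution result: yes/no

    def forward(self, token_ids, image):
        words = self.word_embed(token_ids)                              # (B, T, D) word vector sequence
        img_feat = self.visual_encoder(image.flatten(1)).unsqueeze(1)   # (B, 1, D) image feature sequence
        fused = self.transformer(torch.cat([words, img_feat], dim=1))   # transformation model
        return self.head(fused.mean(dim=1))                             # task prediction result

model = InteractionModel()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)

tokens = torch.tensor([[4, 1, 2, 3]])            # sample task + sample interactive content
image = torch.rand(1, 3, 8, 8)                   # sample image
label = torch.tensor([1])                        # sample task execution result ("yes")

logits = model(tokens, image)
loss = nn.functional.cross_entropy(logits, label)
optim.zero_grad(); loss.backward(); optim.step() # one training step toward the target interaction model
print(float(loss))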

Prosecution Timeline

Oct 18, 2024 — Application Filed
Mar 02, 2026 — Non-Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602054 — CONTROL DEVICE FOR MOBILE OBJECT, CONTROL METHOD FOR MOBILE OBJECT, AND STORAGE MEDIUM — granted Apr 14, 2026 (2y 5m to grant)
Patent 12600037 — REMOTE CONTROL ROBOT, REMOTE CONTROL ROBOT CONTROL SYSTEM, AND REMOTE CONTROL ROBOT CONTROL METHOD — granted Apr 14, 2026 (2y 5m to grant)
Patent 12589493 — INFORMATION PROCESSING APPARATUS AND STORAGE MEDIUM — granted Mar 31, 2026 (2y 5m to grant)
Patent 12566457 — BULK STORE SLOPE ADJUSTMENT VIA TRAVERSAL INCITED SEDIMENT GRAVITY FLOW — granted Mar 03, 2026 (2y 5m to grant)
Patent 12552023 — METHOD FOR CONTROLLING A ROBOT, AND SYSTEM — granted Feb 17, 2026 (2y 5m to grant)
Study what changed in these cases to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 70%
With Interview: 91% (+21.3%)
Median Time to Grant: 2y 10m
PTA Risk: Low
Based on 143 resolved cases by this examiner. Grant probability derived from career allow rate.
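As a sanity check, the headline projections appear to be simple derivations from the career statistics above; this is an assumption, since the page does not publish its formula.

# Minimal check of how the headline figures could relate to the career stats
# (assumed derivation, not the tool's documented method).
granted, resolved = 100, 143
allow_rate = granted / resolved                 # ~0.699 -> the 70% grant probability
interview_lift = 0.213                          # +21.3 percentage points
with_interview = allow_rate + interview_lift    # ~0.912 -> the 91% "with interview" figure
print(f"{allow_rate:.1%}  {with_interview:.1%}")

Under that assumption, 100 granted of 143 resolved rounds to the 70% grant probability, and adding the +21.3% interview lift gives roughly the 91% with-interview figure.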

Free tier: 3 strategy analyses per month