Prosecution Insights
Last updated: April 19, 2026
Application No. 18/072,863

SYSTEMS AND METHODS FOR OBJECT DETECTION AND PICK ORDER DETERMINATION

Non-Final OA — §102, §103, §112
Filed
Dec 01, 2022
Examiner
KARWAN, SIHAR A
Art Unit
3658
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
Boston Dynamics Inc.
OA Round
3 (Non-Final)
Grant Probability: 56% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 3y 3m
With Interview: 82%

Examiner Intelligence

Career Allow Rate: 56% — grants 56% of resolved cases (215 granted / 385 resolved; +3.8% vs TC avg)
Interview Lift: +25.8% — strong lift among resolved cases with an interview
Avg Prosecution: 3y 3m typical timeline; 41 applications currently pending
Total Applications: 426 across all art units
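These headline figures are simple ratios over the examiner's resolved cases. Below is a minimal sketch of how they can be reproduced from raw case records; the `CaseRecord` schema and field names are illustrative assumptions, not this product's actual data model:

```python
from dataclasses import dataclass

@dataclass
class CaseRecord:
    granted: bool        # application issued as a patent
    resolved: bool       # granted or abandoned; pending cases are excluded
    had_interview: bool  # at least one examiner interview on record

def career_allow_rate(cases: list[CaseRecord]) -> float:
    resolved = [c for c in cases if c.resolved]
    return sum(c.granted for c in resolved) / len(resolved)  # 215/385 ≈ 55.8% -> "56%"

def interview_lift(cases: list[CaseRecord]) -> float:
    # One plausible definition: allowance rate among resolved cases with an
    # interview minus the rate among those without (reported here as +25.8 points).
    resolved = [c for c in cases if c.resolved]
    rate = lambda grp: sum(c.granted for c in grp) / len(grp)
    return (rate([c for c in resolved if c.had_interview])
            - rate([c for c in resolved if not c.had_interview]))
```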

Statute-Specific Performance

§101: 11.2% (-28.8% vs TC avg)
§103: 27.8% (-12.2% vs TC avg)
§102: 33.4% (-6.6% vs TC avg)
§112: 16.4% (-23.6% vs TC avg)
Tech Center averages are estimates • Based on career data from 385 resolved cases
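Notably, every delta in the panel above is consistent with a flat Tech Center average estimate of 40%. A minimal sketch of the comparison; the 40% figure is inferred from the displayed deltas rather than published by the tool:

```python
# Examiner allowance rates for cases receiving each rejection type (from the
# panel above), compared against the inferred Tech Center average estimate.
examiner_rate = {"101": 0.112, "103": 0.278, "102": 0.334, "112": 0.164}
TC_AVG_ESTIMATE = 0.400  # inferred: each displayed delta implies this value

for statute, rate in examiner_rate.items():
    delta = rate - TC_AVG_ESTIMATE
    print(f"§{statute}: {rate:.1%} ({delta:+.1%} vs TC avg)")
```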

Office Action

§102, §103, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Amendments to the claims have been recorded.

Response to Arguments

Applicant's arguments have been fully considered but they are not persuasive. Applicant's arguments are fully addressed by the new rejections made to the newly provided amendments. Additionally, arguments relating to 2D matching are fully addressed by the remarks made to the amendments. Choi para. 56 has been added for additional clarity and to address Applicant's arguments fully. Choi para. 56: the image processor 106 provides a multi-step process by which 2D images are interpreted to determine the location and orientation of objects, such as transparent vessels. See also paras. 78 and 80, which teach 2D bounding boxes; the 2D bounding boxes relate to 2D objects, as a single camera is used, as opposed to the 3D bounding boxes that Choi also teaches in para. 241.

DETAILED ACTION

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 12 recites the limitation "the 3D model of objects". There is insufficient antecedent basis for this limitation in the claim. One of ordinary skill in the art would not know whether "object" is a new object or an object that was previously introduced.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 102(a)(1):

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

Claims 14-26 and 28 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Choi (US 2020/0306980). Claims 1-13 and 27 are subject to §101 rejections (generating a model, i.e., in memory); claims 1-13 and 27 are withdrawn.

14. A robotic device, comprising: a robotic arm having disposed thereon a suction-based gripper configured to grasp a target object; — para. 59: robotic arm and gripper adapted to pick up objects by suction gripper.

a perception system configured to capture one or more images of a plurality of two-dimensional (2D) object faces of objects in an environment of the robotic device; — Fig. 4B, cameras 104a and 104b.
at least one computing device configured to: determine based, at least in part, on the captured one or more images, whether each of the plurality of 2D object faces matches a prototype object of a set of prototype objects stored in a memory of the robotic device, wherein each of the prototype objects in the set includes information for the prototype object; — para. 90: the machine learning [memory] model may be trained to identify [match] objects in an image [2D rotating image to 3D, Fig. 4A] by capturing images of many different arrangements of objects [images are captured in 2D, stored in 2D, and matched in 2D]. Also para. 56: the image processor 106 provides a multi-step process by which 2D images are interpreted to determine the location and orientation of objects, such as transparent vessels.

generate a 3D model of the 3D information included in one or more of the prototype objects in the set of prototype objects that was determined to match one or more of the 2D object faces; — paras. 107-108: the object configuration classification model determines whether the objects in the cluster match a predefined set of configurations. The same model is used to perform the object and cluster detection (steps 604-608) and the object configuration classification (steps 610-612). The CNN may output confidence scores for a plurality of categories that the CNN is trained to recognize. The object configuration classification model may be an encoder-decoder architecture based on a CNN that generates instance or class segmentation masks [3D shape].

select based, at least in part, on the generated 3D model, one of the objects in the environment as a target object; — para. 108: the object configuration classification model may be programmed to perform the functions ascribed to it for objects [target] that are transparent or translucent.

and control the robotic arm to grasp the target object. — para. 154: referring again to FIG. 10A, the method 1000 may further include selecting 1006 grasping parameters for the robotic arm 110 and gripper 112 (or other actuator and end effector) according to the position, width, and height of the object or oriented 2D bounding box from step 1004, as determined from one or more images alone or using size data based on classification of an object represented in the one or more images.

15. The robotic device of claim 14, wherein the at least one computing device is further configured to: determine that a first 2D object face of the plurality of 2D object faces does not match any prototype object in the set of prototype objects; — para. 269: ...does not result in the vessel determined 3912 to be empty according to subsequently captured images, an alert may be generated.

create a new prototype object for the first 2D object face that does not match any prototype object in the set; — para. 90: the machine learning model may be trained to identify objects in an image by capturing images of many different arrangements of objects (e.g., cups, plates, bowls, etc., with or without food and other debris) on a surface, such as a tray. Each arrangement of objects may be captured from many angles, using one or multiple cameras at different views. Fig. 4B.

and add the new prototype object to the set of prototype objects. — para. 90: for example, the machine learning model may be trained [saved to memory] to identify objects in an image by capturing images of many different arrangements of objects (e.g., cups, plates, bowls, etc., with or without food and other debris) on a surface, such as a tray.
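The anticipation dispute above turns on this match-then-model flow. Purely as a reading aid, here is a minimal Python sketch of the claim 14/15 logic as recited; every name, type, and the byte-signature matcher are hypothetical illustrations, not anything disclosed in the application or in Choi:

```python
from dataclasses import dataclass, field

@dataclass
class PrototypeObject:
    face_signature: bytes  # hypothetical 2D-face descriptor used for matching
    info_3d: dict          # "information for the prototype object", incl. 3D info

@dataclass
class PrototypeSet:
    prototypes: list = field(default_factory=list)  # "stored in a memory"

def descriptor(face_image) -> bytes:
    # Stand-in for whatever 2D matching the claim covers (template matching,
    # a learned embedding, feature hashing, ...). Purely illustrative.
    return bytes(face_image)

def process_scene(face_images, memory: PrototypeSet):
    matched = []
    for img in face_images:
        proto = next((p for p in memory.prototypes
                      if p.face_signature == descriptor(img)), None)
        if proto is None:
            # Claim 15: a non-matching face yields a new prototype object
            # that is added to the set of prototype objects.
            memory.prototypes.append(PrototypeObject(descriptor(img), info_3d={}))
        else:
            matched.append(proto)
    # Claim 14: generate a 3D model from the 3D information of the matched
    # prototypes; target selection and grasping are left abstract here.
    return [p.info_3d for p in matched]
```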
16. The robotic device of claim 15, wherein the at least one computing device is further configured to: control the robotic arm to pick up the object associated with the first 2D object face; — para. 59: adapted to pick up objects; also Fig. 4A (2D) vs. Fig. 4B (3D).

control the perception system to capture one or more images of the picked-up object, wherein the one or more images include at least one face of the object other than the first 2D object face; — para. 59: adapted to pick up objects; also Fig. 4A (2D) vs. Fig. 4B (3D).

and wherein the new prototype object is created based, at least in part, on the captured one or more images of the picked-up object. — Fig. 13; paras. 90 and 158: some or all of steps 1008, 1010, and 1012 may be performed with feedback from the cameras 104 to verify that the fingers 114a, 114b of the gripper 112 are in fact positioned around the vessel and that the vessel does become gripped within the fingers 114a, 114b and remains gripped at least for some portion of the movement off of the surface 400.

17. The robotic device of claim 16, wherein the at least one computing device is further configured to control the robotic arm to rotate the picked-up object prior to capturing the one or more images of the picked-up object by the perception system. — para. 158: step 1014 may further include invoking rotation of the gripper in order to invert the vessel.

18. The robotic device of claim 14, further comprising: a user interface configured to enable a user to provide user input describing prototype objects to include in the set of prototype objects, wherein the at least one computing device is further configured to: — para. 90: a human operator then evaluates the images and draws polygons around each object present in the image, including partially occluded objects.

populate the set of prototype objects with prototype objects based on the user input. — para. 122: the object configuration classifier may be trained using either a manual or automated approach. For a manual approach, objects can be manually placed in poses (by a human), which are then iteratively captured by one or multiple cameras at different views.

19. The robotic device of claim 14, wherein the at least one computing device is further configured to: receive, from a computing system, input describing prototype objects to include in the set of prototype objects; — para. 90: a human operator then evaluates the images and draws polygons around each object present in the image, including partially occluded objects.

and populate the set of prototype objects with prototype objects based on the user input. — para. 122: the object configuration classifier may be trained using either a manual or automated approach. For a manual approach, objects can be manually placed in poses (by a human), which are then iteratively captured by one or multiple cameras at different views.

20. The robotic device of claim 14, wherein the at least one computing device is further configured to: determine a set of pickable objects based, at least in part, on the generated 3D model; — Fig. 19, group 400.

select a target object from the set of pickable objects; — Fig. 19, P from group 400.

and control the robotic arm to grasp the target object. — Fig. 19, gripper 112.
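Claim 20 is the pick-order step named in the application's title: filter the modeled objects to a pickable set, then choose a target. A minimal sketch under stated assumptions; the pickability predicates, the `top_z` field, and the topmost-first ordering are all illustrative, since the claim leaves the selection criteria open:

```python
def determine_pick_order(objects_3d, is_reachable, is_unobstructed):
    """Claim 20 sketch: derive the pickable set from the generated 3D model,
    then order it for target selection. `objects_3d` is a list of dicts with
    a hypothetical 'top_z' height field; both predicates are caller-supplied."""
    pickable = [o for o in objects_3d if is_reachable(o) and is_unobstructed(o)]
    # Illustrative policy: pick the topmost object first to avoid disturbing
    # the pile; the claim itself does not recite any particular ordering.
    return sorted(pickable, key=lambda o: o["top_z"], reverse=True)

# Usage: order = determine_pick_order(model, reach_ok, clear_ok); target = order[0]
```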
21. The robotic device of claim 20, wherein the at least one computing device is further configured to: determine a desired orientation of the target object; — para. 154: step 1014 may further include invoking rotation of the gripper in order to invert the vessel, e.g., for a dish rack.

and place the target object in the desired orientation at a target location. — para. 158: placing the vessel on a dish rack for cleaning.

22. The robotic device of claim 21, wherein determining the desired orientation of the target object is based, at least in part, on the target location. — para. 158: step 1014 may further include invoking rotation of the gripper in order to invert the vessel, such as when placing the vessel on a dish rack for cleaning.

24. The robotic device of claim 21, wherein the at least one computing device is further configured to: determine a stability estimate associated with placing a side of the target object on a surface, wherein determining the desired orientation of the target object is based, at least in part, on the stability estimate. — para. 136: oriented 2D bounding boxes estimate an angle of rotation [stability based on center of mass and balance point] for the box relative to the x and y axes of the image.

25. The robotic device of claim 24, wherein determining the stability estimate comprises: calculating a ratio of dimensions of the side of the target object; — paras. 136-139; para. 136: typical 2D bounding boxes are rectangles of different sizes and aspect ratios; their edges are parallel to the x-axis and y-axis of the images M1, M2.

and determining the stability estimate based, at least in part, on the ratio. — para. 136: oriented 2D bounding boxes estimate an angle of rotation [stability based on center of mass and balance point] for the box relative to the x and y axes of the image.

26. The robotic device of claim 21, wherein the at least one computing device is further configured to: control the robotic arm to orient the target object based on the desired orientation. — para. 158: step 1014 may further include invoking rotation of the gripper in order to invert the vessel, such as when placing the vessel on a dish rack for cleaning.
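Claims 24-25 reduce the stability estimate to a dimension ratio without reciting how the ratio maps to stability. A minimal sketch of that arithmetic under stated assumptions; the clamped linear mapping and the `max_ratio` cutoff are invented for illustration:

```python
def side_ratio(side_dims: tuple[float, float]) -> float:
    # Claim 25, step 1: "calculating a ratio of dimensions of the side".
    a, b = sorted(side_dims)
    return b / a  # >= 1.0; larger means a more elongated face

def stability_estimate(side_dims: tuple[float, float], max_ratio: float = 3.0) -> float:
    # Claim 25, step 2: an estimate "based, at least in part, on the ratio".
    # Illustrative assumption: a square face scores 1.0 and the score decays
    # linearly to 0.0 as the face elongates toward max_ratio.
    r = side_ratio(side_dims)
    return max(0.0, 1.0 - (r - 1.0) / (max_ratio - 1.0))

# Claim 24: choose the orientation whose down-facing side scores highest.
sides = {"10x4": (10.0, 4.0), "10x2": (10.0, 2.0), "4x2": (4.0, 2.0)}
best = max(sides, key=lambda name: stability_estimate(sides[name]))
print(best, stability_estimate(sides[best]))  # "4x2" with ratio 2.0 -> 0.5
```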
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim 23 is rejected under 35 U.S.C. 103 as being unpatentable over Choi as applied to the claims above, and further in view of Tang (US 2022/0383538).

23. Choi teaches all of the limitations of claim 22 but does not teach wherein the target location includes a conveyor, and wherein determining the desired orientation of the target object comprises determining to align a longest axis of the target object with a length dimension of the conveyor. However, Tang (US 2022/0383538), para. 21, teaches that it is desirable to orient the objects 16 in a certain manner, such as to align the objects 16 in the same direction [longest axis], for example, on the conveyor 24, which will require the robot 12 to turn or rotate the object 16 after it is picked up. For these types of robotic systems it is not only necessary to determine the center of the object 16 to be picked up, but it is also necessary to determine the orientation of the object 16 being picked up so that the robot 12 can rotate the object 16 and align it with the desired orientation when the robot 12 places the object 16 on the conveyor 24. Thus, all of the objects 16 can be aligned in the same direction on the conveyor 24, or even placed standing up on the conveyor 24. It is noted that determining the orientation of the object 16 requires more complexity than determining just the center of the object 16, and as such requires significantly more neural network training. Therefore, it would have been obvious to one of ordinary skill in the art at the time the invention was filed to combine the teachings, with a reasonable expectation of success, in order to place objects with rotation compensation, such that the claimed invention as a whole would have been obvious.
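Claim 23's added limitation is a concrete geometric rule: align the object's longest axis with the conveyor's length. A minimal sketch of that rotation in the horizontal plane; the yaw-angle parameterization, function name, and 2D simplification are illustrative assumptions, not from either reference:

```python
import math

def alignment_rotation(object_dims, object_yaw, conveyor_yaw):
    """Claim 23 sketch: rotation (radians, about vertical) that aligns the
    target object's longest axis with the conveyor's length dimension."""
    # Identify the longest horizontal axis of the object's bounding box.
    length, width = object_dims
    long_axis_yaw = object_yaw if length >= width else object_yaw + math.pi / 2
    # Rotation the gripper must apply so that axis runs along the conveyor;
    # an axis has pi symmetry, so reduce modulo pi and take the shorter turn.
    delta = (conveyor_yaw - long_axis_yaw) % math.pi
    return min(delta, delta - math.pi, key=abs)

# e.g. object long axis at 30 deg, conveyor running at 90 deg: rotate +60 deg
print(math.degrees(alignment_rotation((0.3, 0.1), math.radians(30), math.radians(90))))
```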
28. The robotic device of claim 14, wherein the one or more 2D object faces includes a first 2D object face and a second 2D object face; — Figs. 11A and 11B: different angles, different faces [it is noted that all 3D objects have multiple 2D faces W1, W2, ...];

the set of prototype objects stored in the memory of the robotic device comprises: — para. 150: an entry 1104 corresponding to the classification may be identified 1022 in a database, and dimensions corresponding to the classification may be retrieved 1024 from the database 1102, such as a database storing dimensions of different classes of cups as shown in FIG. 11C. Also para. 137: a machine learning model may be trained [stored in memory];

a first prototype object including first 3D information for the first prototype object; — Fig. 11C, #1102, first cup [also, per Fig. 11A, cups can be the same shape but different objects, i.e., multiple cups];

a second prototype object including second 3D information for the second prototype object, — Fig. 11C, #1102, second cup [could be different objects of the same shape];

the second 3D information being different from the first 3D information; — Fig. 11C, #1102: the first and second cups are different;

and the 3D model of objects is generated using the first 3D information and the second 3D information based on the first prototype object being determined to match the first 2D object face and the second prototype object being determined to match the second 2D object face. — Fig. 19: P is the 3D model of objects generated based on the cups matching the 2D faces of the cups in Figs. 7 and 11.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SIHAR A KARWAN, whose telephone number is (571) 272-2747. The examiner can normally be reached M-F, 11am-7pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ramon Mercado, can be reached at 571-270-5744. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SIHAR A KARWAN/
Examiner, Art Unit 3664

Prosecution Timeline

Dec 01, 2022
Application Filed
May 29, 2025
Examiner Interview (Telephonic)
May 31, 2025
Examiner Interview Summary
Jun 03, 2025
Non-Final Rejection — §102, §103, §112
Aug 08, 2025
Interview Requested
Aug 21, 2025
Applicant Interview (Telephonic)
Aug 21, 2025
Examiner Interview Summary
Sep 05, 2025
Response Filed
Sep 18, 2025
Final Rejection — §102, §103, §112
Jan 14, 2026
Request for Continued Examination
Feb 17, 2026
Response after Non-Final Action
Mar 13, 2026
Non-Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12589502
CARGO-HANDLING APPARATUS, CONTROL DEVICE, CONTROL METHOD, AND STORAGE MEDIUM
2y 5m to grant Granted Mar 31, 2026
Patent 12589750
VEHICULAR CONTROL SYSTEM
2y 5m to grant Granted Mar 31, 2026
Patent 12589504
SYSTEM AND METHOD FOR COGNITIVE SURVEILLANCE ROBOT FOR SECURING INDOOR SPACES
2y 5m to grant Granted Mar 31, 2026
Patent 12583100
ROBOT TO WHICH DIRECT TEACHING IS APPLIED
2y 5m to grant Granted Mar 24, 2026
Patent 12576516
HUMAN SKILL BASED PATH GENERATION
2y 5m to grant Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 56%
With Interview: 82% (+25.8%)
Median Time to Grant: 3y 3m
PTA Risk: High
Based on 385 resolved cases by this examiner. Grant probability is derived from the career allow rate.
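The 82% with-interview figure equals the 56% baseline plus the +25.8-point interview lift, after rounding. A minimal sketch of that additive projection; the additive model and the 100% cap are assumptions, since the product's actual formula is not shown:

```python
def projected_grant_probability(base_rate: float, lift: float,
                                plan_interview: bool) -> float:
    # Assumed additive model: baseline career allow rate plus interview lift,
    # capped at 100%. Matches the displayed 56% -> 82% jump after rounding.
    return min(base_rate + (lift if plan_interview else 0.0), 1.0)

print(f"{projected_grant_probability(0.558, 0.258, False):.0%}")  # 56%
print(f"{projected_grant_probability(0.558, 0.258, True):.0%}")   # 82%
```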
