Prosecution Insights
Last updated: April 19, 2026
Application No. 18/273,817

ROBOT SYSTEM AND CONTROL APPARATUS

Status: Non-Final OA (§103)
Filed: Jul 24, 2023
Examiner: TRAN, ALYSE TRAMANH
Art Unit: 3656
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Fanuc Corporation
OA Round: 3 (Non-Final)
Grant Probability: 77% (Favorable)
Expected OA Rounds: 3-4
Estimated Time to Grant: 2y 10m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 77% (20 granted / 26 resolved; +24.9% vs TC avg, above average)
Interview Lift: +50.0% higher allowance rate for resolved cases with an interview
Typical Timeline: 2y 10m average prosecution
Career History: 51 total applications across all art units; 25 currently pending

Statute-Specific Performance

§101: 14.4% (-25.6% vs TC avg)
§103: 52.8% (+12.8% vs TC avg)
§102: 22.4% (-17.6% vs TC avg)
§112: 10.4% (-29.6% vs TC avg)

Tech Center averages are estimates. Based on career data from 26 resolved cases.

Office Action (§103)
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This communication is in response to the amendment filed on 24-NOV-2025 in Application No. 18/273,817. Claims 1, 3, 4, and 6 are currently pending and have been examined; claims 2 and 5 are cancelled. Claims 1, 3, 4, and 6 have been rejected as follows.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 24-NOV-2025 has been entered.

Response to Amendment

The amendment filed on 24-NOV-2025 has been entered. Claims 1, 3, 4, and 6 remain pending in the application.

Response to Arguments

Applicant's arguments, filed 24-NOV-2025, with respect to the rejections of claims under § 103 have been fully considered. The claim amendments change the scope of the rejection, and a new rejection in light of Shibata et al. (US 2023/0162371 A1) and Kuzhinjedathu et al. (US 11845191 B1) is set forth below.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: “feature point extraction unit...” of claims 2, 3, 5, 6; “container opening identification unit...” of claims 2, 4, 5, 6; “robot control unit...” of claims 2, 5, 6; “feature point position identification unit...” of claims 2, 5, 6; “center position calculation unit...” of claims 4, 5; “corner position identification unit...” of claims 4, 5; “storage unit...” of claim 5; and “reception unit...” of claim 6. Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
The applicant’s specification states, at Page [6], “When executing the correction program, the processor 41 functions as a feature point extraction unit for extracting a feature point (marker 81, 82, 83) from the container image, a feature point position identification unit for identifying the three-dimensional position of the feature point (marker 81, 82, 83) from the point cloud data, a center position calculation unit for calculating the three-dimensional position of a center point 65 of the container 60 based on the three-dimensional position of the feature point (marker), a corner position identification unit for scanning the point cloud data from the three-dimensional position of the feature point (marker 81, 82, 83) toward the three-dimensional position of the center point 65 of the container 60 and identifying the three-dimensional position of a corner of the opening 63 of the container 60 (the three-dimensional position of a corner of the inner wall of the container 60), and a container opening identification unit for identifying the position, orientation, and size of the opening 63 of the container 60 based on the three-dimensional position of the corner of the opening 63 of the container 60”; at Page [6], “The processor 41 functions as a robot control unit for controlling the robot arm mechanism 20 when executing the work program”; at Page [7], “The container opening identification apparatus 43 identifies the position, orientation, and size of the opening 63 of the container 60 based on image data and point cloud data relating to the container 60 received from the three-dimensional sensor 30”; and at Page [13], “The storage device 45 of the control apparatus 40 stores container information”. The specification depicts the algorithms for these functions in Figures 2 and 4-7.

If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
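For orientation, the correction-program flow quoted above (extract markers from the 2D image, look up their 3D positions in the point cloud, compute the container's center, scan from each marker toward the center to find the opening's corners) can be summarized in a minimal sketch. The organized-cloud layout, the depth-jump heuristic, and every name below are illustrative assumptions, not code from the application or the cited references.

```python
# Hypothetical sketch of the flow described in the specification:
# marker pixels -> 3D lookup -> container center -> corner scan -> opening.
import numpy as np

def identify_opening(cloud: np.ndarray, marker_px: list, depth_jump: float = 0.05):
    """cloud: organized HxWx3 point cloud, cloud[v, u] = (X, Y, Z) at pixel (u, v).
    marker_px: three (u, v) marker pixels already extracted from the 2D image."""
    # Feature point position identification: pixel -> 3D by direct lookup,
    # since the organized cloud shares the image's pixel grid (assumption).
    markers = np.array([cloud[v, u] for (u, v) in marker_px], dtype=float)

    # Center position calculation: the two farthest markers span the rim's
    # diagonal, so the opening's center is their midpoint.
    i, j = max(((a, b) for a in range(3) for b in range(a + 1, 3)),
               key=lambda p: np.linalg.norm(markers[p[0]] - markers[p[1]]))
    center = (markers[i] + markers[j]) / 2.0
    center_px = ((marker_px[i][0] + marker_px[j][0]) // 2,
                 (marker_px[i][1] + marker_px[j][1]) // 2)

    # Corner position identification: walk from each marker pixel toward the
    # center pixel; the rim ends where the surface drops into the interior.
    corners = []
    for (u, v) in marker_px:
        du, dv = center_px[0] - u, center_px[1] - v
        n = max(abs(du), abs(dv), 1)
        rim = cloud[v, u]
        for t in range(1, n + 1):
            p = cloud[v + t * dv // n, u + t * du // n]
            if rim[2] - p[2] > depth_jump:   # assumed convention: +Z is up
                break
            rim = p
        corners.append(rim)

    # Container opening identification: pose and size follow from the corners.
    corners = np.array(corners)
    return {"center": center, "corners": corners, "position": corners.mean(axis=0)}
```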
Claim Rejections - 35 U.S.C. § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 3, 4, and 6 are rejected under 35 U.S.C. 103 as being unpatentable over Simon (US 2021/0114826 A1) in view of Shibata et al. (US 2023/0162371 A1), and further in view of Kuzhinjedathu et al. (US 11845191 B1).

Regarding claim 1, Simon teaches: A robot system for taking out a workpiece stored in a container with an opening at a top of the container (element 148, SPAL; Paragraph [32], "In accordance with the embodiments, shipping cases for case units (e.g. cartons, barrels, boxes, crates, jugs, or any other suitable device for holding case units) may have variable sizes and may be used to hold case units in shipping and may be configured so they are capable of being palletized for shipping"; it should be noted that this limitation is written merely as an intended use), wherein a plurality of markers are disposed at a plurality of locations of the container (Figures 12, 14; element 1415), and the robot system comprising: a robot arm mechanism (element 12) to which a hand for gripping the workpiece is attached (elements 99, 800); a sensor configured to generate image data (Paragraph [56], “any suitable three-dimensional image sensor configured to generate one or more of a two-dimensional image”) and point cloud data of the container (Paragraph [56], “and a three-dimensional point cloud”)… the point cloud data is a set of three-dimensional coordinate information related to a surface of the container (Figure 12; element 1270; Paragraph [76], “single point cloud 1270”; Paragraph [60], “As described herein, the vision system 310 provides (or otherwise effects determination of) the position (X, Y, Z in the robot coordinate system or reference frame) and/or orientation (RX, RY, RZ in the robot coordinate system or reference frame) of the top pallet layer (such as layer 816)”)…, and the point cloud data is expressed in three-dimensional coordinates (X, Y, Z) (Paragraph [60], “As described herein, the vision system 310 provides (or otherwise effects determination of) the position (X, Y, Z in the robot coordinate system or reference frame) and/or orientation (RX, RY, RZ in the robot coordinate system or reference frame) of the top pallet layer (such as layer 816)”), … and a control apparatus (element 10C) configured to extract the plurality of markers from the image data by image processing (Paragraph [67], “and generate from the images (based on the common base reference frame) the planar orientation and location (in the six degrees of freedom—pose and orientation) of the identified corners C1-C36”), … identify a position (element 10C; Paragraph [68]), orientation (element 1435; Paragraph [72]), and size of the opening of the container (element 1435; Paragraph [76], "With the cell controller 10C, the pose PSV3 and size (length L and width W) of the pallet layer 816 is determined (FIG. 14, Block 1435)") based on the three-dimensional coordinates (X, Y, Z) of the plurality of markers (Figure 14), and control the robot arm mechanism so as not to interfere with the container based on the identified position, orientation, and size of the opening of the container (Paragraph [62]).

While Simon teaches the limitations stated above, it does not expressly teach:

- wherein the image data contain color information related to the container;
- the image data is expressed in two-dimensional coordinates (X, Y);
- two-dimensional coordinate components (X, Y) of the three-dimensional coordinates (X, Y, Z) in the point cloud data corresponding to the two-dimensional coordinates (X, Y) of the image data; and
- identify, from the point cloud data, three-dimensional coordinates (X, Y, Z) corresponding to two-dimensional coordinates (X, Y) of the plurality of markers extracted from the image data to convert the two-dimensional coordinates (X, Y) of the plurality of markers into three-dimensional coordinates (X, Y, Z) of the plurality of markers.

However, Shibata et al. teaches: the image data is expressed in two-dimensional coordinates (X, Y) (Figure 1, element 12; Figure 4, X and Z axes; a person having ordinary skill in the art would understand that the axis labels are arbitrary, as it is still a two-dimensional coordinate system) … two-dimensional coordinate components (X, Y) of the three-dimensional coordinates (X, Y, Z) in the point cloud data corresponding to the two-dimensional coordinates (X, Y) of the image data (Figure 5, element S102) … identify, from the point cloud data, three-dimensional coordinates (X, Y, Z) corresponding to two-dimensional coordinates (X, Y) of the plurality of markers extracted from the image data (Figures 4, 5, element S105) to convert the two-dimensional coordinates (X, Y) of the plurality of markers into three-dimensional coordinates (X, Y, Z) of the plurality of markers (Figure 4; Paragraph [104], “and the label determination unit 205 configured to determine the label for each point in the mapped point cloud based on voting results from the matching result voting unit 204”).

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the depalletizer that utilizes imaging and 3D point data of Simon to include the image processing for mapping 3D point data to images for object detection, as taught by Shibata et al. Such modification would have been obvious because such application would have been well within the level of skill of the person having ordinary skill in the art and would have yielded predictable results, including a depalletizer that utilizes mapping of 3D point data to images.

While Simon and Shibata et al. teach the limitations stated above, they do not expressly teach: wherein the image data contain color information related to the container.

However, Kuzhinjedathu et al. teaches: wherein the image data contain color information related to the container (Col. 9, lines 34-38, “In some examples, the RGB image 516 is accessed directly from an image sensor such as the image sensor 104 without any pre-processing. The RGB image 516, while illustrated in black and white lines, may also include colors and additional graphical details.”).

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the depalletizer that utilizes mapping of 3D point data to images of Simon and Shibata et al. to include the images being RGB images that include color details, as taught by Kuzhinjedathu et al. Such modification would have been obvious because such application would have been well within the level of skill of the person having ordinary skill in the art and would have yielded predictable results, including a depalletizer that maps 3D point data to color-detailed RGB images.
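The pivot of this combination is the claimed 2D-to-3D correspondence: finding, for a marker detected at pixel (u, v), the point-cloud entry whose image projection matches. A toy sketch of one common way such a mapping can be realized, using a pinhole projection with made-up intrinsics; nothing here is taken from Simon, Shibata, or the application.

```python
# Illustrative 2D<->3D correspondence: project each cloud point (X, Y, Z)
# to image coordinates (u, v), then match a marker pixel to the nearest
# projected point. Intrinsics fx, fy, cx, cy are example values only.
import numpy as np

def project(points, fx=600.0, fy=600.0, cx=320.0, cy=240.0):
    """points: Nx3 array of (X, Y, Z) in the camera frame, Z > 0."""
    u = fx * points[:, 0] / points[:, 2] + cx
    v = fy * points[:, 1] / points[:, 2] + cy
    return np.stack([u, v], axis=1)

def marker_px_to_3d(marker_uv, points):
    """Return the cloud point whose projection is nearest the marker pixel."""
    uv = project(points)
    idx = np.argmin(np.linalg.norm(uv - np.asarray(marker_uv, float), axis=1))
    return points[idx]

# Example: a marker detected at pixel (350, 260) is matched to the cloud
# point whose projection lands closest to it, yielding its (X, Y, Z).
cloud = np.array([[0.05, 0.02, 1.00], [0.30, 0.10, 1.20], [-0.10, 0.00, 0.90]])
print(marker_px_to_3d((350, 260), cloud))   # -> [0.05 0.02 1.  ]
```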
Regarding claim 3, Simon teaches: The robot system according to claim 1, wherein the opening has a rectangular shape (Figure 12), and the control apparatus is configured to extract, based on the image data, three markers of the plurality of markers respectively provided at three of four corners of the container (Figure 12; Paragraphs [57]-[58], [75]).

Regarding claim 4, Simon teaches: The robot system according to claim 1, wherein the control apparatus is configured to: calculate a three-dimensional position of a center point of the container based on three-dimensional positions of three markers of the plurality of markers (element 816C; Paragraph [79]), scan the point cloud data from the three-dimensional positions of the three markers toward the three-dimensional position of the center point (Figure 3; Paragraphs [56]-[57], “For example, in one aspect, the cameras 310C1-310C3 on level 387L1 are pointed at three corners while the fourth corner does not have a corresponding camera”) to identify three-dimensional positions of three corners of the opening of the container (Paragraph [76]), and identify the position (Paragraph [68]), orientation (element 1435; Paragraph [72]), and size of the opening of the container (element 1435; Paragraph [76], "With the cell controller 10C, the pose PSV3 and size (length L and width W) of the pallet layer 816 is determined (FIG. 14, Block 1435)") based on the three-dimensional positions of the three corners of the opening of the container (Figure 14).
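A worked toy example of the claim 4 geometry may help: with markers at three of the four corners of a rectangular opening, the two farthest markers span a diagonal, so the center is their midpoint, and the fourth corner follows from the parallelogram rule. The numbers below are illustrative only.

```python
# Center of a rectangular opening from three corner markers (toy values).
import numpy as np

a = np.array([0.0, 0.0, 0.5])   # corner marker 1
b = np.array([0.6, 0.0, 0.5])   # corner marker 2 (the shared corner)
c = np.array([0.6, 0.4, 0.5])   # corner marker 3

# a and c are the farthest pair, so they span the diagonal of the rim.
center = (a + c) / 2.0          # -> [0.3, 0.2, 0.5]
fourth = a + c - b              # parallelogram rule -> [0.0, 0.4, 0.5]
print(center, fourth)
```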
Regarding claim 6, Simon teaches: A control apparatus for controlling a robot arm mechanism for gripping a workpiece stored in a container with an opening at a top of the container (Figure 3; element 148, SPAL; Paragraph [32], "In accordance with the embodiments, shipping cases for case units (e.g. cartons, barrels, boxes, crates, jugs, or any other suitable device for holding case units) may have variable sizes and may be used to hold case units in shipping and may be configured so they are capable of being palletized for shipping"), based on an output of a sensor configured to photograph the container (element 310; Paragraph [56]), the control apparatus comprising: a reception unit configured to receive (element 10C; Paragraph [63]), from the sensor (Figure 12; element 1270; Paragraph [76], “single point cloud 1270”; Paragraph [60], “As described herein, the vision system 310 provides (or otherwise effects determination of) the position (X, Y, Z in the robot coordinate system or reference frame) and/or orientation (RX, RY, RZ in the robot coordinate system or reference frame) of the top pallet layer (such as layer 816)”), image data (Paragraph [56], “any suitable three-dimensional image sensor configured to generate one or more of a two-dimensional image”) and point cloud data of the container (Paragraph [56], “and a three-dimensional point cloud”)… the point cloud data is a set of three-dimensional coordinate information representing a surface of the container (Figure 12; element 1270; Paragraph [76], “single point cloud 1270”; Paragraph [60], “As described herein, the vision system 310 provides (or otherwise effects determination of) the position (X, Y, Z in the robot coordinate system or reference frame) and/or orientation (RX, RY, RZ in the robot coordinate system or reference frame) of the top pallet layer (such as layer 816)”)…, and the point cloud data is expressed in three-dimensional coordinates (X, Y, Z) (Paragraph [60], “As described herein, the vision system 310 provides (or otherwise effects determination of) the position (X, Y, Z in the robot coordinate system or reference frame) and/or orientation (RX, RY, RZ in the robot coordinate system or reference frame) of the top pallet layer (such as layer 816)”),…; a feature point extraction unit configured to extract a plurality of markers from the image data by image processing (Figure 14; element 1415; Paragraph [56], “The at least one camera 310C is (or includes) any suitable three-dimensional image sensor configured to generate one or more of a two-dimensional image”; Paragraph [75], “Reference datum(s) of the pallet layer are determined based on the image data (FIG. 14, Block 1415) in the camera 310C reference frame and/or the robot reference 14 reference frame. The reference datum(s) of the pallet are any suitable geometric features of the pallet (e.g., corners of the pallet layer, corners of case units in the pallet layer, outermost sides of the pallet layer, vertices of the outermost sides, orthogonality of the outermost sides, position of the sides, etc.)… In one aspect, the corners PC1-PC4 of the pallet layer are determined from the image data of each of the at least one camera 310C separately”); a feature point position identification unit (Figure 14; elements 1420, 1425, 1430; Paragraph [75], “while in other aspects the image data from the cameras may optionally be combined (FIG. 14, Block 1420) for determining the corners of the pallet layer. For example, where the image data from the at least one camera 310C are combined, a single point cloud 1270 of at least part of the pallet load PAL including the pallet layer 816 is generated by combining, with the cell controller 10C, the image data from each of the at least one camera 310C”; Paragraph [76], “the location of the corners PC1-PC4 of the pallet layer 816 in the robot reference frame may optionally be verified by determining the location of the corners PC1-PC4 (FIG. 14, Block 1430) with the cell controller 10C by projecting the single point cloud 1270 onto the plane 1200”)…; a container opening identification unit configured to identify a position (element 10C; Paragraph [68]), orientation (element 1435; Paragraph [72]), and size of the opening of the container (element 1435; Paragraph [76], "With the cell controller 10C, the pose PSV3 and size (length L and width W) of the pallet layer 816 is determined (FIG. 14, Block 1435)") based on the three-dimensional coordinates (X, Y, Z) of the plurality of markers (Figure 14); and a robot control unit configured to control the robot arm mechanism so as not to interfere with the container, based on the identified position, orientation, and size of the opening of the container (Paragraph [62]).

While Simon teaches the limitations stated above, it does not expressly teach:

- wherein the image data contain color information related to the container;
- the image data is expressed in two-dimensional coordinates (X, Y);
- two-dimensional coordinate components (X, Y) of the three-dimensional coordinates (X, Y, Z) in the point cloud data corresponding to the two-dimensional coordinates (X, Y) of the image data; and
- identify, from the point cloud data, three-dimensional coordinates (X, Y, Z) corresponding to two-dimensional coordinates (X, Y) of the plurality of markers extracted from the image data to convert the two-dimensional coordinates (X, Y) of the plurality of markers into three-dimensional coordinates (X, Y, Z) of the plurality of markers.

However, Shibata et al. teaches: the image data is expressed in two-dimensional coordinates (X, Y) (Figure 1, element 12; Figure 4, X and Z axes; a person having ordinary skill in the art would understand that the axis labels are arbitrary, as it is still a two-dimensional coordinate system) … two-dimensional coordinate components (X, Y) of the three-dimensional coordinates (X, Y, Z) in the point cloud data corresponding to the two-dimensional coordinates (X, Y) of the image data (Figure 5, element S102) … identify, from the point cloud data, three-dimensional coordinates (X, Y, Z) corresponding to two-dimensional coordinates (X, Y) of the plurality of markers extracted from the image data (Figures 4, 5, element S105) to convert the two-dimensional coordinates (X, Y) of the plurality of markers into three-dimensional coordinates (X, Y, Z) of the plurality of markers (Figure 4; Paragraph [104], “and the label determination unit 205 configured to determine the label for each point in the mapped point cloud based on voting results from the matching result voting unit 204”).

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the depalletizer that utilizes imaging and 3D point data of Simon to include the image processing for mapping 3D point data to images for object detection, as taught by Shibata et al.
Such modification would have been obvious because such application would have been well within the level of skill of the person having ordinary skill in the art and would have yielded predictable results, including a depalletizer that utilizes mapping of 3D point data to images.

While Simon and Shibata et al. teach the limitations stated above, they do not expressly teach: wherein the image data contain color information related to the container.

However, Kuzhinjedathu et al. teaches: wherein the image data contain color information related to the container (Col. 9, lines 34-38, “In some examples, the RGB image 516 is accessed directly from an image sensor such as the image sensor 104 without any pre-processing. The RGB image 516, while illustrated in black and white lines, may also include colors and additional graphical details.”).

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the depalletizer that utilizes mapping of 3D point data to images of Simon and Shibata et al. to include the images being RGB images that include color details, as taught by Kuzhinjedathu et al. Such modification would have been obvious because such application would have been well within the level of skill of the person having ordinary skill in the art and would have yielded predictable results, including a depalletizer that maps 3D point data to color-detailed RGB images.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALYSE TRAMANH TRAN, whose telephone number is (703) 756-5879. The examiner can normally be reached M-F, 8:30am-5pm ET.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Khoi Tran, can be reached at 571-272-6919. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/A.T.T./
Examiner, Art Unit 3656

/KHOI H TRAN/
Supervisory Patent Examiner, Art Unit 3656

Prosecution Timeline

Jul 24, 2023
Application Filed
Apr 18, 2025
Non-Final Rejection — §103
Jul 09, 2025
Response Filed
Oct 02, 2025
Final Rejection — §103
Nov 24, 2025
Request for Continued Examination
Dec 04, 2025
Response after Non-Final Action
Jan 01, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12569994
ROBOT APPARATUS
2y 5m to grant; granted Mar 10, 2026
Patent 12566071
METHOD OF ROUTE PLANNING AND ELECTRONIC DEVICE USING THE SAME
2y 5m to grant; granted Mar 03, 2026
Patent 12544826
BINDING DEVICE, BINDING SYSTEM, METHOD FOR CONTROLLING BINDING DEVICE, AND COMPUTER READABLE STORAGE MEDIUM STORING PROGRAM
2y 5m to grant; granted Feb 10, 2026
Patent 12539613
DETECTION AND MITIGATION OF PREDICTED COLLISIONS OF OBJECTS WITH USER CONTROL SYSTEM
2y 5m to grant; granted Feb 03, 2026
Patent 12515321
Method for Generating a Training Dataset for Training an Industrial Robot
2y 5m to grant; granted Jan 06, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 77%
Grant Probability with Interview: 99% (+50.0%)
Median Time to Grant: 2y 10m
PTA Risk: High

Based on 26 resolved cases by this examiner. Grant probability derived from career allow rate.
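The headline figures can be reproduced from the page's own counts. Below is a small sketch of the stated derivation; treating the with-interview figure as the career rate scaled by the +50.0% lift and capped at 99% is an assumption about how the tool combines the two numbers, not a documented formula.

```python
# Reproducing the dashboard's headline numbers from its own counts.
granted, resolved = 20, 26
allow_rate = granted / resolved                     # 0.769... -> shown as 77%
interview_lift = 0.50                               # the "+50.0%" relative lift
with_interview = min(allow_rate * (1 + interview_lift), 0.99)  # assumed cap
print(f"{allow_rate:.0%}, {with_interview:.0%}")    # 77%, 99%
```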
