Prosecution Insights
Last updated: April 19, 2026
Application No. 18/314,714

INFORMATION PROCESSING METHOD, IMAGE PROCESSING METHOD, ROBOT CONTROL METHOD, PRODUCT MANUFACTURING METHOD, INFORMATION PROCESSING APPARATUS, IMAGE PROCESSING APPARATUS, ROBOT SYSTEM, AND RECORDING MEDIUM

Non-Final OA (§102, §112)
Filed
May 09, 2023
Examiner
MANCHO, RONNIE M
Art Unit
3657
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
Canon Kabushiki Kaisha
OA Round
3 (Non-Final)
76%
Grant Probability
Favorable
3-4
OA Rounds
3y 3m
To Grant
79%
With Interview

Examiner Intelligence

Grants 76% — above average
76%
Career Allow Rate
729 granted / 963 resolved
+23.7% vs TC avg
Interview Lift: +3.0% (minimal lift)
Based on resolved cases with interview
Typical timeline
3y 3m
Avg Prosecution
42 currently pending
Career history
1005
Total Applications
across all art units
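The headline figures above are simple ratios of the career counts. As a quick sketch (variable names are illustrative assumptions, not fields of any real data source), they can be reproduced directly:

```python
# Reproduce the examiner's headline stats from the raw counts shown
# above. Variable names are illustrative, not from any real API.
granted = 729      # applications allowed over the examiner's career
resolved = 963     # total resolved (allowed + abandoned) applications
tc_delta = 23.7    # stated lift vs. the Tech Center average, in points

allow_rate = 100 * granted / resolved
tc_average = allow_rate - tc_delta  # implied Tech Center baseline

print(f"Career allow rate: {allow_rate:.1f}%")   # ~75.7%, displayed as 76%
print(f"Implied TC average: {tc_average:.1f}%")  # ~52.0%
```

The displayed 76% is the rounded ratio; the implied Tech Center baseline follows only if the delta is computed as examiner rate minus TC average, which is an assumption here.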

Statute-Specific Performance

§101: 4.7% (-35.3% vs TC avg)
§103: 26.3% (-13.7% vs TC avg)
§102: 31.1% (-8.9% vs TC avg)
§112: 32.1% (-7.9% vs TC avg)
Tech Center averages are estimates. Based on career data from 963 resolved cases.
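Each "vs TC avg" delta above appears to be the examiner's rate minus the Tech Center average (an assumption about how the dashboard derives the figure), which lets the baseline be reconstructed:

```python
# Reconstruct the implied Tech Center baseline for each statute,
# assuming delta = examiner_rate - tc_average. That assumption is
# ours; the dashboard does not state its formula.
stats = {            # statute: (examiner rate %, delta vs TC avg %)
    "101": (4.7, -35.3),
    "103": (26.3, -13.7),
    "102": (31.1, -8.9),
    "112": (32.1, -7.9),
}
for statute, (rate, delta) in stats.items():
    tc_average = rate - delta
    print(f"§{statute}: examiner {rate}% vs implied TC avg {tc_average:.1f}%")
# Every row implies the same ~40.0% baseline, consistent with a single
# Tech Center-wide average for this metric.
```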

Office Action

§102 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-17, 19-25 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

Claim 1 recites, “training a machine learning model using the first image data and the second image data as input data”. Applicant’s specification nowhere recites “training”.
Examiner checked for other words or phrases indicating the process, “training a machine learning model using the first image data and the second image data as input data”. Applicant’s disclosure is silent regarding this process. This is new matter. The rest of the claims are rejected for depending on a rejected base claim or for having similar deficiencies as the rejected base claim.

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

3. Claims 1-17, 19-25 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim 1 in the first paragraph recites, “a trained machine learning model…” and then in the last paragraph recites, “training a machine learning model”. It is not clear whether the recited machine learning models are the same unit or not. Applicant is therefore requested to provide proper antecedent basis for the recited limitation. The rest of the claims are rejected for depending on a rejected base claim or for having deficiencies similar to the rejected base claim.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

6. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

7. Claims 1-17, 19-25 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Li (US Pub. 2023/0297068).

Regarding claim 1, Li discloses an information processing method for obtaining a trained machine learning model (i.e., trained model, training model, training data, teacher data; abstract; sec 0007, 0008, 0009, 0010, 0028, 0029, 0048, etc.) configured to output information of a workpiece (citing, “generation of a trained model for specifying the pick-up positions of the workpieces loaded in bulk….,”; sec 0007, 0009, 0010, 0028, 0029, 0048, etc.), the information processing method comprising: obtaining first image data (obtaining image data of different types of workpieces; sec 0039, 0040, 0046) including an image corresponding to a first number of workpieces disposed in a container (obtaining first image data of different types of workpieces disposed in a container; figs. 1, 4, 5, 6, 8, 10; sec 0039, 0040, 0046, 0050), each of the first number of workpieces being disposed on an inner bottom surface of the container without overlapping with each other workpiece of the first number of workpieces (fig.
6 a-d each shows images of a group of workpieces without overlapping; figs. 6a-d also shows images of a group of work pieces with overlapping; see sec 0087, citing, “……..there is a probability that these position and posture candidates are not exposed in the overlapping state of the plurality of workpieces 50…”, and also citing, “….postures of the plurality of workpieces 50 without the interference with a surrounding obstacle in the overlapping state of the plurality of workpieces 50…”); obtaining second image data including an image corresponding to a second number of workpieces disposed in the container, each of the second number of the workpieces being disposed in the container and overlapping with at least one other workpiece of the second number of workpieces (sec 0029, 0089-0091); and training (fig. 7, step S18; sec 0108) a machine learning model using the first image data and the second image data as input data (sec 0096-0098, 0108-0113). Regarding claim 2, Li discloses the information processing method according to claim 1, wherein a plurality of pieces of the first image data and a plurality of pieces of the second image data are obtained (figs; 1, 4, 5, 6, 8. 10; sec 0009, 0010, 0028, 0029, 0048, 0072, 0073, 0074-0154), and the trained machine learning model is obtained by machine learning using the plurality of pieces of the first image data and the plurality of pieces of the second image data as the input data (figs; 1, 4, 5, 6, 8. 10; sec 0009, 0010, 0028, 0029, 0048, 0072, 0073, 0074-0154). Regarding claim 3, Li discloses the information processing method according to claim 2, further comprising determining, on a basis of a predetermined algorithm, the number of pieces of the first image data and the number of pieces of the second image data that are to be obtained (figs; 1, 4, 5, 6, 8. 
10; sec 0009, 0010, 0028, 0029, 0048, 0072, 0073, 0074-0154), Regarding claim 4, Li discloses the information processing method according to claim 1, wherein the first image data and the second image data each include image data obtained on a basis of an image pickup operation by an image pickup apparatus (figs; 1, 4, 5, 6, 8. 10; sec 0009, 0010, 0028, 0029, 0048, 0072, 0073, 0074-0154). Regarding claim 5, Li discloses the information processing method according to claim 4, wherein a plurality of pieces of the first image data are obtained while changing a distance between the image pickup apparatus and an inner bottom surface of the container (figs; 1, 4, 5, 6, 8. 10; sec 0009, 0010, 0028, 0029, 0048, 0072, 0073, 0074-0154). Regarding claim 6, Li discloses the information processing method according to claim 1 further comprising: obtaining third image data (obtaining image data of different types of work pieces; sec 0039, 0040, 0046, 0055-0058) including an image corresponding to a third number of virtual workpieces disposed in a virtual container (obtaining first image data of different types of work pieces disposed in a container; figs; 1, 4, 5, 6, 8. 10; sec 0039; 0040, 0046; 0050), each of the third number of virtual workpieces being disposed on an inner bottom surface of the virtual container without overlapping with each other workpiece of the third number of virtual workpieces (fig. 6 a-d each shows images of a group of workpieces without overlapping; figs. 
6a-d also shows images of a group of work pieces with overlapping; see sec 0087, citing, “……..there is a probability that these position and posture candidates are not exposed in the overlapping state of the plurality of workpieces 50…”, and also citing, “….postures of the plurality of workpieces 50 without the interference with a surrounding obstacle in the overlapping state of the plurality of workpieces 50…”); obtaining fourth image data including an image corresponding to a fourth number of virtual workpieces disposed in the virtual container, each of the fourth number of the virtual workpieces being disposed in the virtual container and overlapping with at least one other workpiece of the fourth number of virtual workpieces (sec 0029, 0089-0091); and training a second machine learning model using the third image data and the fourth image data as input data (sec 0096-0098), 0108-0113), wherein the third image data and the fourth image data each include image data obtained on a basis of a virtual image pickup operation by a virtual image pickup apparatus (sec 0029, 0089-0091, sec 0096-0098), 0108-0113). Regarding claim 7, Li discloses the information processing method according to claim 6, wherein a plurality of pieces of the first image data are obtained while changing a distance between the virtual image pickup apparatus and an inner bottom surface of the virtual container (figs; 1, 4, 5, 6, 8. 10; sec 0009, 0010, 0028, 0029, 0048, 0072, 0073, 0074-0154). Regarding claim 8, Li discloses the information processing method according to claim 6, wherein the first image data is obtained by performing physical simulation in which the first number of the virtual workpieces are caused to free fall into the virtual container and causing the virtual image pickup apparatus to virtually image the first number of the virtual workpieces randomly piled up in the virtual container (figs; 1, 4, 5, 6, 8. 
10; sec 0009, 0010, 0028, 0029, 0048, 0072, 0073, 0074-0154), and the second image data is obtained by performing physical simulation in which the second number of the virtual workpieces are caused to free fall into the virtual container and causing the virtual image pickup apparatus to virtually image the second number of the virtual workpieces randomly piled up in the virtual container (figs; 1, 4, 5, 6, 8. 10; sec 0009, 0010, 0028, 0029, 0048, 0072, 0073, 0074-0154). Regarding claim 9, Li discloses the information processing method according to claim 6, further comprising displaying, on a display portion, a first input portion capable of receiving input of setting conditions of the virtual image pickup apparatus (figs; 1, 4, 5, 6, 8. 10; sec 0009, 0010, 0028, 0029, 0048, 0072, 0073, 0074-0154). Regarding claim 10, Li discloses the information processing method according to claim 6, further comprising displaying, on a display portion, a second input portion capable of receiving input of setting conditions of the virtual workpieces (figs; 1, 4, 5, 6, 8. 10; sec 0009, 0010, 0028, 0029, 0048, 0072, 0073, 0074-0154). Regarding claim 11, Li discloses the information processing method according to claim 6, further comprising displaying, on a display portion, a third input portion capable of receiving input of setting conditions of the virtual container (figs; 1, 4, 5, 6, 8. 10; sec 0009, 0010, 0028, 0029, 0048, 0072, 0073, 0074-0154). Regarding claim 12, Li discloses the information processing method according to claim 6, wherein the first image data is obtained by virtually lighting up a virtual light source in the virtual image pickup operation by the virtual image pickup apparatus (figs; 1, 4, 5, 6, 8. 10; sec 0009, 0010, 0028, 0029, 0048, 0072, 0073, 0074-0154). 
Regarding claim 13, Li discloses the information processing method according to claim 12, further comprising displaying, on a display portion, a fourth input portion capable of receiving input of setting conditions of the virtual light source (figs; 1, 4, 5, 6, 8. 10; sec 0009, 0010, 0028, 0029, 0048, 0072, 0073, 0074-0154). Regarding claim 14, Li discloses the information processing method according to claim 1, wherein the information of the workpiece includes information of an orientation of the workpiece (figs; 1, 4, 5, 6, 8. 10; sec 0009, 0010, 0028, 0029, 0048, 0072, 0073, 0074-0154). Regarding claim 15, Li discloses the information processing method according to claim 14, wherein the information of the orientation of the workpiece includes information about which of a front surface and a back surface of the workpiece faces upward (figs; 1, 4, 5, 6, 8. 10; sec 0009, 0010, 0028, 0029, 0048, 0072, 0073, 0074-0154). Regarding claim 16, Li discloses the information processing method according to claim 1, wherein the information of the workpiece includes information of a position of the workpiece (figs; 1, 4, 5, 6, 8. 10; sec 0009, 0010, 0028, 0029, 0048, 0072, 0073, 0074-0154). Regarding claim 17, Li discloses the information processing method according to claim 1, wherein the first number is such a number that a packing ratio of the workpieces in the container or a packing ratio of the virtual workpieces in the virtual container is determined as low, and wherein the second number is such a number that the packing ratio of the workpieces in the container or the packing ratio of the virtual workpieces in the virtual container is determined as high (figs; 1, 4, 5, 6, 8. 10; sec 0009, 0010, 0028, 0029, 0048, 0072, 0073, 0074-0154). Regarding claim 19, Li discloses an image processing method for obtaining a trained machine learning model (i.e. 
trained model, training model, training data, teacher data; abstract; sec 0007,0008, 0009, 0010, 0028, 0029, 0048, etc) configured to output information of a workpiece (citing, “generation of a trained model for specifying the pick-up positions of the workpieces loaded in bulk….,”; sec 0007, 0009, 0010, 0028, 0029, 00484, etc ), the image information processing method comprising: obtaining first image data (obtaining image data of different types of work pieces; sec 0039, 0040, 0046) including an image corresponding to a first number of workpieces disposed in a container (obtaining first image data of different types of work pieces disposed in a container; figs; 1, 4, 5, 6, 8. 10; sec 0039; 0040, 0046; 0050), each of the first number of workpieces being disposed on an inner bottom surface of the container without overlapping with each other workpiece of the first number of workpieces (fig. 6 a-d each shows images of a group of workpieces without overlapping; figs. 6a-d also shows images of a group of work pieces with overlapping; see sec 0087, citing, “……..there is a probability that these position and posture candidates are not exposed in the overlapping state of the plurality of workpieces 50…”, and also citing, “….postures of the plurality of workpieces 50 without the interference with a surrounding obstacle in the overlapping state of the plurality of workpieces 50…”); obtaining second image data including an image corresponding to a second number of workpieces disposed in the container, each of the second number of the workpieces being disposed in the container and overlapping with at least one other workpiece of the second number of workpieces (sec 0029, 0089-0091); and training (fig. 7, step S18; sec 0108) a machine learning model using the first image data and the second image data as input data (sec 0096-0098, 0108-0113). 
Regarding claim 20, Li discloses a robot control method comprising: obtaining information of a workpiece from captured image data obtained by imaging the workpiece, the information of the workpiece being obtained by using the trained machine learning model obtained by the information processing method according to claim 1; and controlling a robot on a basis of the information of the workpiece (figs; 1, 4, 5, 6, 8. 10; sec 0009, 0010, 0028, 0029, 0048, 0072, 0073, 0074-0154). Regarding claim 21, Li discloses the product manufacturing method comprising: obtaining information of a workpiece from captured image data obtained by imaging the workpiece, the information of the workpiece being obtained by using the trained machine learning model obtained by the information processing method (figs; 1, 4, 5, 6, 8. 10; sec 0009, 0010, 0028, 0029, 0048, 0072, 0073, 0074-0154) according to claim 1; and controlling a robot on a basis of the information of the workpiece to manufacture a product (figs; 1, 4, 5, 6, 8. 10; sec 0009, 0010, 0028, 0029, 0048, 0072, 0073, 0074-0154). Regarding claim 22, Li discloses an information processing apparatus comprising: one or more processors configured to obtain a trained machine learning model configured to output information of a workpiece (figs; 1, 4, 5, 6, 8. 10; sec 0009, 0010, 0028, 0029, 0048, 0072, 0073, 0074-0154), wherein the one or more processors: obtain first image data (obtaining image data of different types of work pieces; sec 0039, 0040, 0046) including an image corresponding to a first number of workpieces disposed in a container (obtaining first image data of different types of work pieces disposed in a container; figs; 1, 4, 5, 6, 8. 10; sec 0039; 0040, 0046; 0050), each of the first number of workpieces being disposed on an inner bottom surface of the container without overlapping with each other workpiece of the first number of workpieces (fig. 6 a-d each shows images of a group of workpieces without overlapping; figs. 
6a-d also shows images of a group of work pieces with overlapping; see sec 0087, citing, “……..there is a probability that these position and posture candidates are not exposed in the overlapping state of the plurality of workpieces 50…”, and also citing, “….postures of the plurality of workpieces 50 without the interference with a surrounding obstacle in the overlapping state of the plurality of workpieces 50…”); obtain second image data including an image corresponding to a second number of workpieces disposed in the container, each of the second number of the workpieces being disposed in the container and overlapping with at least one other workpiece of the second number of workpieces (sec 0029, 0089-0091); and train (fig. 7, step S18; sec 0108) a machine learning model using the first image data and the second image data as input data (sec 0096-0098, 0108-0113). Regarding claim 23, Li discloses an image processing apparatus comprising: one or more processors configured to obtain a trained machine learning model configured to output information of a workpiece (figs; 1, 4, 5, 6, 8. 10; sec 0009, 0010, 0028, 0029, 0048, 0072, 0073, 0074-0154), wherein the one or more processors: obtain first image data (obtaining image data of different types of work pieces; sec 0039, 0040, 0046) including an image corresponding to a first number of workpieces disposed in a container (obtaining first image data of different types of work pieces disposed in a container; figs; 1, 4, 5, 6, 8. 10; sec 0039; 0040, 0046; 0050), each of the first number of workpieces being disposed on an inner bottom surface of the container without overlapping with each other workpiece of the first number of workpieces (fig. 6 a-d each shows images of a group of workpieces without overlapping; figs. 
6a-d also shows images of a group of work pieces with overlapping; see sec 0087, citing, “……..there is a probability that these position and posture candidates are not exposed in the overlapping state of the plurality of workpieces 50…”, and also citing, “….postures of the plurality of workpieces 50 without the interference with a surrounding obstacle in the overlapping state of the plurality of workpieces 50…”); obtain second image data including an image corresponding to a second number of workpieces disposed in the container, each of the second number of the workpieces being disposed in the container and overlapping with at least one other workpiece of the second number of workpieces (sec 0029, 0089-0091); and train (fig. 7, step S18; sec 0108) a machine learning model using the first image data and the second image data as input data (sec 0096-0098, 0108-0113). Regarding claim 24, Li discloses a robot system comprising: the information processing apparatus according to claim 22; a robot (figs; 1, 4, 5, 6, 8. 10; sec 0009, 0010, 0028, 0029, 0048, 0072, 0073, 0074-0154); and a controller configured to control the robot on a basis of the information of the workpiece (figs; 1, 4, 5, 6, 8. 10; sec 0009, 0010, 0028, 0029, 0048, 0072, 0073, 0074-0154). Regarding claim 25, Li discloses a non-transitory computer-readable recording medium storing one or more programs including instructions for causing a computer to execute the information processing method (figs; 1, 4, 5, 6, 8. 10; sec 0009, 0010, 0028, 0029, 0048, 0072, 0073, 0074-0154) according to claim 1. Response to Arguments Applicant’s arguments with respect to claim(s) have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Applicant’s argument regarding the 101 rejections are moot in view of the rejection being vacated. 
Regarding applicant’s arguments in view of the drawings, the objections thereto have been vacated in view of applicant’s amendments to the claims. Regarding applicant’s arguments in view of the 112 rejections, the 112 rejections of the office action dated 10/29/205 have been vacated in view of applicant’s amendments. However, a new-matter 112 rejection has been instated in the present office action.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. The prior art, EP 1589483 B1 and US 20060104788 A1, each at least teaches or indicates an overlapping state and, in addition, a non-overlapping state or a state without overlapping of workpieces.

Communication

Any inquiry concerning this communication or earlier communications from the examiner should be directed to RONNIE MANCHO whose telephone number is (571) 272-6984. The examiner can normally be reached Mon-Thurs. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Adam Mott, can be reached at 571-270-5376. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /RONNIE M MANCHO/Primary Examiner, Art Unit 3657

Prosecution Timeline

May 09, 2023
Application Filed
May 08, 2025
Non-Final Rejection — §102, §112
Aug 11, 2025
Response Filed
Oct 27, 2025
Final Rejection — §102, §112
Dec 22, 2025
Response after Non-Final Action
Jan 21, 2026
Request for Continued Examination
Feb 18, 2026
Response after Non-Final Action
Feb 21, 2026
Non-Final Rejection — §102, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12600242
COMPUTER-IMPLEMENTED METHOD OF CONTROLLING FUTURE BRAKING CAPACITY OF A VEHICLE TRAVELLING ALONG A ROAD
2y 5m to grant Granted Apr 14, 2026
Patent 12597350
COLLISION ALERT DEVICE AND COLLISION ALERT METHOD
2y 5m to grant Granted Apr 07, 2026
Patent 12594682
WIRE-BODY FIXING MEMBER, WIRE-BODY-EXTENSION FIXING MEMBER, AND WIRE-BODY FITTING METHOD
2y 5m to grant Granted Apr 07, 2026
Patent 12582490
REAL TIME IMAGE GUIDED PORTABLE ROBOTIC INTERVENTION SYSTEM
2y 5m to grant Granted Mar 24, 2026
Patent 12583334
SYSTEMS AND METHODS TO PREDICT AND APPLY REGENERATIVE BRAKING
2y 5m to grant Granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

3-4
Expected OA Rounds
76%
Grant Probability
79%
With Interview (+3.0%)
3y 3m
Median Time to Grant
High
PTA Risk
Based on 963 resolved cases by this examiner. Grant probability derived from career allow rate.
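As a worked example of how the interview-adjusted figure follows from the base numbers (the additive model and variable names are assumptions for illustration):

```python
# Apply the examiner's observed interview lift to the base grant
# probability, as the projection above appears to do. Names and the
# simple additive model are illustrative assumptions.
base_probability = 76.0  # percent, derived from the career allow rate
interview_lift = 3.0     # percentage points observed with interviews

with_interview = base_probability + interview_lift
print(f"With interview: {with_interview:.0f}%")  # 79%
```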
