Prosecution Insights
Last updated: April 19, 2026
Application No. 18/833,790

ESTIMATION DEVICE, ESTIMATION METHOD, AND CONTROL DEVICE

Status: Final Rejection (§103)
Filed: Jul 26, 2024
Examiner: KHAYER, SOHANA T
Art Unit: 3657
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Kyocera Corporation
OA Round: 2 (Final)
Grant Probability: 82% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 11m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 82% (241 granted / 292 resolved; +30.5% vs TC avg — above average)
Interview Lift: +21.9% (strong; resolved cases with interview)
Avg Prosecution: 2y 11m typical timeline; 35 currently pending
Total Applications: 327 across all art units

Statute-Specific Performance

§101: 4.5% (-35.5% vs TC avg)
§103: 47.7% (+7.7% vs TC avg)
§102: 12.3% (-27.7% vs TC avg)
§112: 28.8% (-11.2% vs TC avg)
Tech Center averages are estimates • Based on career data from 292 resolved cases
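The figures above are internally consistent, and a short script can sanity-check them. This is an illustrative sketch, not part of the source data: the variable names are mine, and the assumption that each "vs TC avg" delta is simply the statute rate minus a single Tech Center baseline is inferred from the numbers, not stated by the page.

```python
# Sanity-check the dashboard figures (illustrative; names are mine).

granted, resolved = 241, 292
allow_rate = granted / resolved               # career allow rate
print(f"allow rate: {allow_rate:.1%}")        # ~82.5%, shown as 82% on the card

# Each statute's reported rate and its "vs TC avg" delta;
# the implied Tech Center average is rate - delta.
statutes = {
    "101": (4.5, -35.5),
    "103": (47.7, +7.7),
    "102": (12.3, -27.7),
    "112": (28.8, -11.2),
}
for name, (rate, delta) in statutes.items():
    tc_avg = round(rate - delta, 1)
    print(f"§{name}: {rate}% vs implied TC avg {tc_avg}%")
```

All four statutes imply the same 40.0% Tech Center baseline, which matches the page's note that the TC average is a single estimate rather than a per-statute figure.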

Office Action

§103
DETAILED ACTION

Remarks

This final office action is in response to the amendments filed on 02/04/2026. Claims 1-14 and 16 are amended. Claim 15 is canceled. Claims 1-14 and 16 are pending and examined below.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

As of the date of this action, the IDS filed has been annotated and considered.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f): (f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph: An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 
112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. 
Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:

“Acquirer” and “holder” in claims 1 and 2
“Holder” in claims 13 and 14

The estimation device includes a controller, an acquirer, and a display (see at least fig 1 of the PGPub of the submitted specification). The estimation device or controller includes a computer (see at least [0067] of the PGPub of the submitted specification). The acquirer includes a user input device, e.g., a mouse or touch sensor (see [0036] of the PGPub of the submitted specification). Fig 1 of the PGPub of the submitted specification shows that 2B is a holder.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 
112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1, 9-13 and 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over US 2020/0061811 (“Iqbal”), and further in view of US 2023/0010518 (“Yamaguchi”).

Regarding claim 1 (and similarly claim 13), Iqbal discloses an estimation system comprising (see at least fig 3-fig 5): an acquirer configured to acquire information on a holding target object to be held by a holder (see at least [0078], where “the process 800 is used to control an articulated robot that includes a manipulator such as a claw, hand, or tool. In at least one embodiment, at block 802, the machine-learning computer system identifies an object to be grasped by the robot. In at least one embodiment, the object is identified using an image captured by a camera.”; the camera is interpreted as the acquirer and the claw/hand/tool is interpreted as the holder. 
See also [0053]); a controller configured (see at least [0061], where “the camera allows a controller computer system to adjust the position of the gripper 204”; see also fig 8, block 808), based on the information on the holding target object in consideration of an acquisition point at which the information on the holding target object is acquired (see at least [0072], where “the camera captures an image 512 that includes the object 514 from the point of view of the gripper, allowing the control system to make fine adjustments that improve the precision and accuracy of the resulting grasp.”; see also [0076]); and a control device configured to cause a holder to hold the holding target object at the holding position (see at least fig 8, block 818).

Iqbal does not disclose the following limitation: estimate a holding position at which the holder is caused to hold the holding target object. However, Yamaguchi discloses a system that estimates a holding position at which the holder is caused to hold the holding target object (see at least fig 6 and [0022], where the holding position is determined based on a camera image; fig 3, block S8). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have modified Iqbal to incorporate the teachings of Yamaguchi by including the above feature for increasing the success rate of a grasp by estimating the grasping position on the object.

Regarding claim 9, Iqbal further discloses a system wherein the controller is configured to acquire acquisition point information for specifying the acquisition point (see at least fig 7, where the point of view is interpreted as the acquisition point).

Regarding claim 10, Iqbal further discloses a system wherein the acquisition point information is information for specifying a predefined acquisition point (see at least fig 7, fig 8 and [0080], where the manipulator is moved to a pre-grasp position and then obtains an image. 
The pre-grasp position is the acquisition point. The acquisition point is determined based on a successful grasp).

Regarding claim 11, Iqbal further discloses a system wherein the acquisition point information is information representing a direction in which the information on the holding target object is acquired, by roll, pitch, and yaw or by quaternions with respect to a position of the holding target object serving as an origin (see at least [0058] and [0061]).

Regarding claim 12, Iqbal further discloses a system wherein the controller is configured to generate acquisition point information for specifying the acquisition point, based on the information on the holding target object (see at least fig 8, where an image of the holding target object is captured).

Regarding claim 16, Iqbal further discloses a control device configured to cause the holder to hold the holding target object at the holding position estimated by executing the estimation method according to claim 13 (see at least fig 8, block 818).

Claim(s) 3-5 is/are rejected under 35 U.S.C. 103 as being unpatentable over US 2020/0061811 (“Iqbal”), and in view of US 2023/0010518 (“Yamaguchi”), as applied to claim 1 above, and further in view of US 2022/0274255 (“Okawa”).

Regarding claim 3, Iqbal in view of Yamaguchi does not disclose claim 3. However, Okawa discloses a system wherein the controller is configured to input the information on the holding target object to an inference model, and estimate, as the holding position, an inferred result output from the inference model (see at least fig 13-15). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have modified Iqbal in view of Yamaguchi to incorporate the teachings of Okawa by including the above feature for reducing the cost of robot control by creating a moving operation using a model. 
Regarding claim 4, Okawa further discloses a system wherein the controller is configured to inspect or correct an estimated result of the holding position obtained by the inference model, based on a result obtained by processing the information on the holding target object with a rule-based algorithm (see at least [0215]).

Regarding claim 5, Okawa further discloses a system wherein the inference model includes a convolution layer configured to receive an input of the information on the holding target object, and a fully connected layer configured to process an output of the convolution layer and output an estimated result of the holding position, and the fully connected layer includes a layer that takes the acquisition point into consideration (see at least [0193] and [0366]).

Claim(s) 6-8 is/are rejected under 35 U.S.C. 103 as being unpatentable over US 2020/0061811 (“Iqbal”), and in view of US 2023/0010518 (“Yamaguchi”), as applied to claim 1 above, and further in view of US 2020/0164531 (“Wagner”).

Regarding claim 6, Iqbal in view of Yamaguchi does not disclose claim 6. However, Wagner discloses a system wherein the controller is configured to display a superimposition image in which an image representing an estimated result of the holding position is superimposed on an image representing the information on the holding target object (see at least [0040], where “FIG. 3 shows an image of a camera view from the perception unit 26, and the image may appear on the image display system 28 of FIG. 1 with superimposed images of an end effector seeking to grasp each object 40, 42, 44, 46, 48, 50, 52 and 54 in a bin 56, showing the location of each grasp. Candidate grasp locations 58 are indicated using a 3D model of the robot end effector placed in the location where the actual end effector would go to use as a grasp location as shown in FIG. 3.”). 
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have modified Iqbal in view of Yamaguchi to incorporate the teachings of Wagner by including the above feature for increasing grasp efficiency by visualizing the grasp location so that a successful grasp is achieved faster without iteration.

Regarding claim 7, Wagner further discloses a system wherein the controller is configured to transform the image representing the information on the holding target object into an image obtained if the information on the holding target object is acquired from a direction in which the holder holds the holding target object (see at least fig 3-6).

Regarding claim 8, Wagner further discloses a system wherein the controller is configured to receive a user input for correcting the holding position based on the superimposition image (see at least [0041] and [0048]).

Allowable Subject Matter

Claim 2 is allowed.

Response to Arguments

Applicant’s arguments with respect to claims 1, 3-14 and 16 have been considered but are moot because the arguments do not apply to the new combination used in the current rejection, which is due to the newly added claim amendments.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. 
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SOHANA TANJU KHAYER whose telephone number is (408)918-7597. The examiner can normally be reached on Monday - Thursday, 7 am-5.30 pm, PT.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Abby Lin, can be reached at 571-270-3976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). 
If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SOHANA TANJU KHAYER/
Primary Examiner, Art Unit 3657

Prosecution Timeline

Jul 26, 2024 — Application Filed
Oct 24, 2025 — Non-Final Rejection (§103)
Jan 22, 2026 — Response Filed
Feb 12, 2026 — Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12583107 — TEMPORAL LOGIC FORMULA GENERATION DEVICE, TEMPORAL LOGIC FORMULA GENERATION METHOD, AND STORAGE MEDIUM (2y 5m to grant • Granted Mar 24, 2026)
Patent 12576520 — TECHNIQUES FOR DEPLOYING TRAINED MACHINE LEARNING MODELS FOR ROBOT CONTROL (2y 5m to grant • Granted Mar 17, 2026)
Patent 12569999 — CONFIGURING AND MANAGING FLEETS OF DYNAMIC MECHANICAL SYSTEMS (2y 5m to grant • Granted Mar 10, 2026)
Patent 12569996 — METHOD AND APPARATUS FOR AUTOMATICALLY GENERATING DRUM PLAY MOTION OF ROBOT (2y 5m to grant • Granted Mar 10, 2026)
Patent 12564123 — METHOD AND SYSTEMS FOR USING SENSORS TO DETERMINE RELATIVE SEED OR PARTICLE SPEED (2y 5m to grant • Granted Mar 03, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 82%
With Interview: 99% (+21.9%)
Median Time to Grant: 2y 11m
PTA Risk: Moderate
Based on 292 resolved cases by this examiner. Grant probability derived from career allow rate.
