Prosecution Insights
Last updated: April 19, 2026
Application No. 18/645,674

INFORMATION PROCESSING APPARATUS, GENERATION METHOD, AND COMPUTER PROGRAM PRODUCT

Status: Non-Final OA (§112)
Filed: Apr 25, 2024
Examiner: SOFRONIOU, MICHAEL MARIO
Art Unit: 2661
Tech Center: 2600 (Communications)
Assignee: Kabushiki Kaisha Toshiba
OA Round: 1 (Non-Final)
Grant Probability: Favorable
OA Rounds: 1-2
Time to Grant: 2y 9m

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 0 resolved; -62.0% vs TC avg)
Interview Lift: +0.0% (minimal lift; resolved cases with interview)
Avg Prosecution (typical timeline): 2y 9m; 11 applications currently pending
Total Applications (career history): 11, across all art units

Statute-Specific Performance

§101: 10.8% (-29.2% vs TC avg)
§103: 37.8% (-2.2% vs TC avg)
§102: 13.5% (-26.5% vs TC avg)
§112: 35.1% (-4.9% vs TC avg)
Tech Center average estimates shown for comparison • Based on career data from 0 resolved cases

Office Action

§112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Typographic Conventions

Throughout this office action, shorthand notation for referencing locations of elements in documents is utilized. The following is a brief summary of the shorthand utilized:
Sec. – is used to denote an associated section with a header in non-patent literature
¶ – is used to denote the number and location of a paragraph
col. – is used to denote a column number
ln. – is used to denote a line; if a line number is not demarcated in a document, the line number will be assumed to start at 1 for each paragraph.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 04/25/2024 & 01/26/2026 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Specification

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed. The present invention relates to estimating egomotion by depth and motion estimation models, optimized using loss functions to minimize differences in depth, correspondence, and brightness from training data. Adding sufficient detail to the title relating to the field of endeavor in camera motion estimation and/or the innovative concept would properly reflect the purpose or context of the present invention.

The disclosure is objected to because of the following informalities: ¶0008 of the Specification recites the following: "an embodiment model not only the motion of stationary part such as background in an image but also the motion…" It appears a verb may be missing between "only" and "the" in the underlined portion of the passage. The examiner believes this passage was intended to recite something similar to "an embodiment model not only estimates the motion of stationary parts such as background in an image, but also the motion…." Appropriate correction is required.

Claim Objections

Claims 1, 7 & 8 are objected to because of the following informalities:

Claim 1 recites "generate the first estimation model and the second estimation model represented by the updated parameters." The use of the verb "generate" does not appear appropriate given the context of this claim language. The examiner believes a more appropriate term such as "update" or "revise" would be more accurate in this scenario, as the first and second estimation models do not appear to be generated again, but rather updated with the updated parameters to optimize the loss functions.

Claim 7 recites "generating the first estimation model and the second estimation model represented by the updated parameters." Similar to the analysis of claim 1, the use of the verb "generating" does not appear appropriate given the context of the claim language. A term such as "updating" or "revising" would be more accurate in this scenario.

Claim 8 recites "generating the first estimation model and the second estimation model represented by the updated parameters." Similar to the analysis of claims 1 & 7, the use of the verb "generating" does not appear appropriate given the context of the claim language. A term such as "updating" or "revising" would be more accurate in this scenario.

Appropriate correction is required. Claims 2-6 are objected to and cannot be indicated as allowable until the objection to claim 1 is overcome.
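As technical background for the Specification discussion above: the office action characterizes the invention as estimating egomotion with depth and motion estimation models whose parameters are optimized by loss functions that minimize differences in depth, correspondence, and brightness relative to training data. The Python sketch below is only a generic illustration of such a combined training loss; the function name, arguments, and weights are hypothetical assumptions and are not taken from the application's claims or specification.

import numpy as np

# Hypothetical combined loss for joint depth / egomotion training.
# All names and weights are illustrative assumptions, not the application's actual terms.
def combined_loss(pred_depth, train_depth, pred_corr, train_corr,
                  image, image_warped, w_depth=1.0, w_corr=1.0, w_bright=1.0):
    # Depth term: difference between estimated depth and depth training data.
    depth_term = np.mean(np.abs(pred_depth - train_depth))
    # Correspondence term: difference between estimated and training correspondence.
    corr_term = np.mean(np.abs(pred_corr - train_corr))
    # Brightness term: intensity difference between an image and its counterpart
    # warped using the estimated depth and camera motion.
    bright_term = np.mean(np.abs(image - image_warped))
    return w_depth * depth_term + w_corr * corr_term + w_bright * bright_term

Under this reading, the model parameters are adjusted to minimize the combined value, which is the sense in which the office action suggests "update" or "revise" rather than "generate" for the claim language.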
Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 3 & 4 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Regarding claim 3, the applicant recites the phrase "indicating a difference in depth between two pixels each having a larger difference in depth in depth training data that is training data concerning depth, than a designated value." It is unclear from the specification [¶0063-71; Eq. 3-7] and claim language whether the described difference in depth is a comparison of the depth difference between the two pixels, which is then further compared to a designated value obtained via the training data, or an individual comparison of the depth of each pixel to a designated value. Further clarification must be provided to clearly explain the relationship between depth comparisons.

As for claim 4, the applicant similarly recites: "indicating a difference in depth between two pixels each having a larger difference in depth in depth training data than a designated value." It is likewise unclear from the specification [¶0063-71; Eq. 3-7] and claim language whether the described difference in depth is a comparison of the depth difference between the two pixels, further compared to a designated value obtained via the training data, or an individual comparison of the depth of each pixel to a designated value. Further clarification must be provided to clearly explain the relationship between depth comparisons.
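To make the indefiniteness concern concrete, the following sketch (Python; purely illustrative, with hypothetical names) contrasts the two readings the rejection identifies: (a) the depth difference between the two pixels is compared against the designated value, versus (b) the depth of each pixel is individually compared against the designated value. Which reading the claim intends is exactly what the rejection asks the applicant to clarify.

# Two possible readings of the claim 3/4 limitation; names are hypothetical.
def reading_a(depth_pixel_1, depth_pixel_2, designated_value):
    # (a) The pairwise depth difference itself is compared with the designated value.
    return abs(depth_pixel_1 - depth_pixel_2) > designated_value

def reading_b(depth_pixel_1, depth_pixel_2, designated_value):
    # (b) Each pixel's depth is individually compared with the designated value.
    return depth_pixel_1 > designated_value and depth_pixel_2 > designated_value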
Allowable Subject Matter

Claims 1-8 would be allowable if rewritten to overcome the claim objections and 35 U.S.C. § 112(b) rejections set forth in this Office Action. The following is a statement of reasons for the indication of allowable subject matter:

Regarding claim 1, the primary reason for the indication of allowable subject matter is that the prior art fails to teach or reasonably suggest an information processing apparatus that calculates correspondence information using first and second depth information, motion information calculated via the first and second depth information, and camera parameters. The closest prior art analogous to the field of endeavor of the present application, Natroshvili et al (US 2019/0122373 A1), disclose an apparatus for estimating depth and camera motion from a scene captured by a plurality of cameras. More specifically, Natroshvili et al teach a depth estimation model that estimates depth from a first and second image, a motion estimation model, and calculation of a correspondence (optical flow) via estimation models that are optimized using a combined loss function for pixel depth, velocity, class, segmentation, intensity, and optical flow.

What Natroshvili fails to teach is particularly estimating motion utilizing depth information from the first and second image, or calculating correspondence information from first and second depth information, motion information, and camera parameters of the imaging device. Chen et al ("Self-supervised Learning with Geometric Constrains in Monocular Video", 2019, IEEE/CVF), on the other hand, further teach a displacement field that represents the real image motion using depth data. Chen et al, however, still fail to teach or reasonably suggest obtaining correspondence information from all the aforementioned inputs outlined in the claim language. The subject matter disclosed as part of the correspondence calculation, as a whole, is neither anticipated by nor made obvious by the prior art of record. Therefore, claim 1 and its associated dependent claims 2-6 are considered allowable subject matter (assuming they are rewritten to overcome the claim objections and 35 U.S.C. § 112(b) rejections set forth in this Office Action).

Regarding claims 7 & 8, an identical analysis to that of the apparatus of claim 1 can be applied to the associated method of claim 7 and non-transitory computer readable medium of claim 8. Therefore, claims 7 & 8 are considered allowable subject matter (assuming they are rewritten to overcome the claim objections set forth in this Office Action).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Heise; Phillip (DE 102017207789 A1) discloses a method for performing visual odometry by estimating depth information between a first and second image optimized via a cost function of pixel intensity.
Liu et al (US 2023/0410338 A1) disclose a system for optimizing a depth estimation model utilizing a loss function for minimizing the difference in depth.
Mai et al ("Feature-aided Bundle Adjustment Learning Framework for Self-supervised Monocular Visual Odometry", 2021, IEEE/RSJ) disclose a self-supervised method for visual odometry by estimating depth maps, camera poses, and dense feature maps used to optimize photometric, geometric, and feature-metric loss functions.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Michael M. Sofroniou whose telephone number is (571) 272-0287. The examiner can normally be reached M-F: 8:30 AM - 5:00 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, John M. Villecco, can be reached at (571) 272-7319. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MICHAEL SOFRONIOU/
Examiner, Art Unit 2661

/JOHN VILLECCO/
Supervisory Patent Examiner, Art Unit 2661
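For readers outside the art: the allowability reasoning above turns on computing correspondence information from two pieces of depth information, motion information, and camera parameters. A conventional way to relate pixels across two images given depth, relative camera motion, and intrinsics is to back-project a pixel using its depth, apply the motion, and re-project into the second image. The Python sketch below shows only that generic reprojection step, with hypothetical names and illustrative values; it is not drawn from the application's claimed method.

import numpy as np

def correspond_pixel(u, v, depth, K, R, t):
    # Back-project pixel (u, v) from image 1 using its depth and intrinsics K.
    p_cam1 = depth * (np.linalg.inv(K) @ np.array([u, v, 1.0]))
    # Apply the estimated camera motion (rotation R, translation t).
    p_cam2 = R @ p_cam1 + t
    # Re-project into image 2 to obtain the corresponding pixel location.
    uvw = K @ p_cam2
    return uvw[0] / uvw[2], uvw[1] / uvw[2]

# Illustrative intrinsics: 500 px focal length, principal point at (320, 240).
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
u2, v2 = correspond_pixel(100.0, 80.0, 4.0, K, np.eye(3), np.array([0.1, 0.0, 0.0]))

The resulting (u2, v2) is the correspondence for one pixel; repeating this over the image yields a dense correspondence (optical-flow-like) field of the kind discussed in the reasons for allowance.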

Prosecution Timeline

Apr 25, 2024
Application Filed
Mar 11, 2026
Non-Final Rejection — §112 (current)


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: Favorable
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 0 resolved cases by this examiner. Grant probability derived from career allow rate.
