Office Action Predictor
Last updated: April 15, 2026
Application No. 18/156,145

Method and an Apparatus for Improving Depth Calculation

Non-Final OA — §102, §112
Filed: Jan 18, 2023
Examiner: PATEL, JAYESH A
Art Unit: 2677
Tech Center: 2600 — Communications
Assignee: Inuitive LTD.
OA Round: 3 (Non-Final)

Grant Probability: 83% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 11m
With Interview: 86%

Examiner Intelligence

Career Allow Rate: 83%, above average (739 granted / 887 resolved; +21.3% vs TC avg)
Interview Lift: +2.7%, a minimal lift for resolved cases with interview
Typical Timeline: 2y 11m average prosecution; 33 applications currently pending
Career History: 920 total applications across all art units

Statute-Specific Performance

§101: 11.1% (-28.9% vs TC avg)
§102: 14.5% (-25.5% vs TC avg)
§103: 40.9% (+0.9% vs TC avg)
§112: 25.0% (-15.0% vs TC avg)

Tech Center averages are estimates. Based on career data from 887 resolved cases.
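The "vs TC avg" deltas above imply the Tech Center baseline the tool compares against. A quick derivation (assuming, as an editor's guess, that each delta is simply the examiner's rate minus the Tech Center average):

```python
# Examiner rate per statute and reported delta vs the Tech Center average.
rates = {"101": (11.1, -28.9), "102": (14.5, -25.5),
         "103": (40.9, +0.9), "112": (25.0, -15.0)}

# Implied TC average = examiner rate - delta.
tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in rates.items()}
print(tc_avg)  # {'101': 40.0, '102': 40.0, '103': 40.0, '112': 40.0}
```

All four statutes imply the same 40.0% baseline, which is consistent with the note that the Tech Center average is an estimate rather than a per-statute measurement.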

Office Action

Grounds of rejection: §102, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/05/2025 has been entered.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 1 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claim 1 recites at lines 1-22 “A computational platform for use in a depth calculation process based on information comprised in an image captured by two image capturing devices, wherein said computational platform enables distinguishing between areas included in said image captured by one of the two image capturing devices that comprise details that are implementable by a matching algorithm, and areas that do not comprise such details, wherein said computational platform comprises: at least one processor, configured to select a window comprised in said captured image for matching a corresponding part included in said image captured by the other image capturing device from among the two image capturing devices; calculate a metric based on the images captured by the two image capturing devices as opposed to areas that do not include such information by removing outliers' values from said selected matching window, thereby allowing to distinguish between areas included in the captured image that comprise details that are implementable by a matching algorithm and areas that do not have such details; and use the calculated metric to evaluate level of confidence for pixels being matched in the captured image and the image captured by the other image capturing device from among the two image capturing devices.”

The above recitals of “said image captured”, “the image captured”, “said captured image”, “the captured image”, and “the images captured” make the claim indefinite and also raise antecedent-basis issues. First, as recited at lines 1-3, “an image is captured by two image capturing devices” (i.e., a single image is captured by the two (both) image capturing devices). Second, elsewhere in the claim “the image captured” and “the captured image” are recited as captured by one of the two image capturing devices and by the other image capturing device of the two. These recitals render the claim indefinite: it is unclear whether the image captured is the same as the captured image captured by both devices (as recited at lines 1-3), by the first device (one of the two), or by the second device (the other of the two). Amendment/clarification is required.

Third, claim 1 recites at lines 1-2 “an image captured by two image capturing devices” and further recites at lines 21-22 “the captured image and the image captured by the other image capturing device from among the two image capturing devices”. These two recitals render the claim indefinite: it is unclear whether “the image” is captured by the two image capturing devices or by “the other image capturing device”, and whether “the captured image” and “the image captured” are the same image or different images. Amendment/clarification is required. Claims 2-5 and 11 depend directly or indirectly on claim 1 and are therefore rejected.

Claim 1 recites the limitation "the images captured by the two image capturing devices" in lines 15-16. There is insufficient antecedent basis for this limitation in the claim: claim 1 recites “an image” (singular) at line 2 as opposed to “the images” (plural) at lines 15-16, which raises antecedent issues. Claims 2-5 and 11 depend from claim 1 and are therefore rejected.

Claim 6 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claim 6 recites at lines 1-20 “A method for use in a depth calculation process based on information comprised in an image captured by two image capturing devices, that enables distinguishing between areas included in said image captured by one of the two image capturing devices, that comprise details that are implementable by a matching algorithm, and areas that do not comprise such details, wherein said method comprises the steps of: select a window comprised in the captured image for matching a corresponding part included in an image captured by another image capturing device from among the two image capturing devices; calculate a metric based on the images captured by the two image capturing devices as opposed to areas that do not include such information by removing outliers' values from each selected matching window, thereby allowing to distinguish between areas included in the captured image that comprise details that are implementable by a matching algorithm and areas that do not have such details and use the calculated metric to evaluate level of confidence for pixels being matched in the captured image and the image captured by the other image capturing device from among the two image capturing devices.”

The above recitals of “said image captured”, “said captured image”, and “the images captured” make the claim indefinite and also raise antecedent-basis issues. First, as recited at lines 1-3, “an image is captured by two image capturing devices” (i.e., a single image is captured by the two (both) image capturing devices). Second, elsewhere in the claim “said image captured”, “the image captured”, “said captured image”, “the captured image”, and “the images captured” are recited as captured by one of the two image capturing devices and by the other image capturing device of the two. These recitals render the claim indefinite: it is unclear whether the image captured is the same as the captured image captured by both devices (as recited at lines 1-3), by the first device (one of the two), or by the second device (the other of the two).

Further, claim 6 recites “an image” at line 2 and again at line 8, and also recites “the captured image” at line 16; it is unclear to which of the two earlier recitals “the captured image” at line 16 refers. Amendment/clarification is required.

Third, claim 6 recites at lines 1-2 “an image captured by two image capturing devices” and further recites at lines 19-20 “the captured image and the image captured by the other image capturing device from among the two image capturing devices”. These two recitals render the claim indefinite: it is unclear whether “the image” is captured by the two image capturing devices or by “the other image capturing device”, and whether “the captured image” and “the image captured” are the same image or different images. Amendment/clarification is required. Claims 7-9 depend directly or indirectly on claim 6 and are therefore rejected.

Claim 6 recites the limitation "the images captured by the two image capturing devices" in lines 13-14. There is insufficient antecedent basis for this limitation in the claim: claim 6 recites “an image” (singular) at line 2 as opposed to “the images” (plural) at lines 13-14, which raises antecedent issues. Claims 7-9 depend from claim 6 and are therefore rejected.

Claim 10 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claim 10 recites at lines 1-19 “A method for use in a depth calculation process based on information comprised in an image captured by two image capturing devices, that enables distinguishing between areas included in said image captured by one of the two image capturing devices, said captured image comprises details that are implementable by a matching algorithm and areas that do have such details, wherein said method comprises the steps of: providing information associated with said captured image; for one or more pixels comprised in said captured image, selecting a window for matching a corresponding part included in an image captured by the other image capturing device from among the two image capturing devices; based on the selected window, calculating a metric (i,j); generating an information map is generated by setting the value of InfoMap (i,j)=0 if the value of metric(i,j) is greater than a predefined threshold, else InfoMap (i,j)=1; calculating depth(i,j) based on corresponding images captured by the image capturing devices, and for all i and j values, where InfoMap (ij) is equal to zero, setting the value of depth(i,j) to an unknown value; evaluating level of confidence for pixels being matched in the captured image and the image captured by the other image capturing device from among the two image capturing devices.”

The above recitals of “said image captured” and “said captured image” make the claim indefinite and also raise antecedent-basis issues. First, as recited at lines 1-3, “an image is captured by two image capturing devices” (i.e., a single image is captured by the two (both) image capturing devices). Further in the claim, “said image captured”, “the image captured”, “said captured image”, and “the captured image” are recited as captured by one of the two image capturing devices and by the other image capturing device of the two. These recitals render the claim indefinite: it is unclear whether the image captured is the same as the captured image captured by both devices (as recited at lines 1-3), by the first device (one of the two), or by the second device (the other of the two).

Second, claim 10 recites “an image” at line 2 and again at line 9, and also recites “the captured image” and “the image captured” at lines 17-18; it is unclear to which of the two earlier recitals at lines 2 and 9 the recitals at lines 17-18 refer. Amendment/clarification is required.

Third, claim 10 recites at lines 1-2 “an image captured by two image capturing devices” and further recites at lines 17-18 “the captured image and the image captured by the other image capturing device from among the two image capturing devices”. These two recitals render the claim indefinite: it is unclear whether “the image” is captured by the two image capturing devices or by “the other image capturing device”, and whether “the captured image” and “the image captured” are the same image or different images. Amendment/clarification is required.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-4, 6-9 and 11 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Valentin et al. (US10554957), hereafter Valentin.

1. Regarding claim 1, as best understood by the examiner, Valentin discloses a computational platform (figs 1-4 show a computational platform) for use in a depth calculation process based on information comprised in an image captured by two image capturing devices, wherein said computational platform enables distinguishing between areas included in said image captured by one of the two image capturing devices that comprise details that are implementable by a matching algorithm, and areas that do not comprise such details, wherein said computational platform comprises: at least one processor (fig 1 shows a processor 104 and image preprocessor 118), configured to select a window comprised in said captured image for matching a corresponding part included in said image captured by the other image capturing device from among the two image capturing devices (col 5, line 57 through col 6, line 40 disclose at least one processor configured to select a window comprised in said captured image for matching a corresponding part included in said image captured by the other image capturing device from among the two image capturing devices 108 and 112); calculate a metric based on the selected window, wherein said metric is configured to distinguish between areas that comprise information sufficient to allow a robust matching procedure for depth calculation from the images captured by the two image capturing devices (fig 2 and col 6, lines 21-67 disclose calculating a metric (the Hamming distance) based on the selected window, wherein said metric is configured to distinguish between areas that comprise information sufficient to allow a robust matching procedure for depth calculation from the images captured by the two image capturing devices (i.e., stereo matching)) as opposed to areas that do not include such information by removing outliers' values from said selected matching window (on the window size W), thereby allowing to distinguish between areas included in the captured image that comprise details that are implementable by a matching algorithm and areas that do not have such details (col 6, lines 21-40 and col 7, lines 1-19 disclose areas that do not include such information by removing outliers' values from said selected matching window (on the window size W), thereby allowing to distinguish between areas included in the captured image that comprise details (a disparity map) that are implementable by a matching algorithm (i.e., disparity matches) and areas that do not have such details; the examiner notes that the specifics of “information”, “details” and “removing outliers” are not required by the current claim); and use the calculated metric to evaluate level of confidence for pixels being matched in the captured image and the image captured by the other image capturing device from among the two image capturing devices (col 6, line 21 through col 7, line 19 disclose the Hamming distances, and the matching module samples the disparities per pixel and their matching cost (i.e., to evaluate the confidence for the pixels being matched in the captured images (110, 112)), meeting the claim limitations).

2. Regarding claim 2, as best understood by the examiner, Valentin discloses the computational platform of claim 1, wherein said metric is based on a physical model of a signal representing the captured image (fig 2 shows said metric based on a physical model (i.e., the Hamming distance) of a signal representing the captured image).

3. Regarding claim 3, as best understood by the examiner, Valentin discloses the computational platform of claim 1, wherein said at least one processor is further configured to remove outliers' values from the selected window based on the evaluated level of confidence for pixels associated with said outliers' values (col 7, lines 8-19 disclose disparities associated with large Hamming distances (i.e., using the calculated metric, i.e., removing outliers' values, to evaluate the level of confidence) to evaluate the confidence for the pixels being matched in the captured images (110, 112), meeting the claim limitations).

4. Regarding claim 4, as best understood by the examiner, Valentin discloses the computational platform of claim 3, wherein said at least one processor is further configured to apply a normalization function to compensate for said metric's dependency on a number of pixels comprised in the respective selected window (col 6, lines 41-42 disclose the stereo matching optimization (i.e., the normalization function), meeting the above claim limitations).

5. Claim 6, as best understood by the examiner, is the method claim corresponding to claim 1. See the explanation of claim 1.

6. Claim 7, as best understood by the examiner, is the method claim corresponding to claim 2. See the explanation of claim 2.

7. Claim 8, as best understood by the examiner, is the method claim corresponding to claim 3. See the explanation of claim 3.

8. Claim 9, as best understood by the examiner, is the method claim corresponding to claim 4. See the explanation of claim 4.

9. Regarding claim 11, as best understood by the examiner, Valentin discloses an image capturing sensor (fig 1 shows a user equipment 102 (i.e., an image sensor)) comprising a computational platform as claimed in claim 1 (see the explanation of claim 1).
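The technique at the center of both rejections is a window-based matching metric whose large values flag unreliable pixels. The sketch below is the editor's illustration of the general census-transform/Hamming-distance cost of the kind the examiner cites from Valentin (col 6), combined with a claim-10-style InfoMap threshold; the function names, window radius, threshold, and synthetic images are all assumptions, not code from either the application or Valentin:

```python
import numpy as np

def census_bits(img, i, j, r=1):
    # Census transform of a (2r+1)x(2r+1) window: one bit per pixel,
    # set when that pixel is darker than the window's center.
    patch = img[i - r:i + r + 1, j - r:j + r + 1]
    return (patch < img[i, j]).ravel()

def hamming_cost(left, right, i, j, disparity, r=1):
    # Matching cost for a candidate disparity: Hamming distance between
    # the census descriptors of the two candidate windows. A large cost
    # flags a low-confidence (outlier) match.
    a = census_bits(left, i, j, r)
    b = census_bits(right, i, j - disparity, r)
    return int(np.count_nonzero(a != b))

def info_map_value(cost, threshold):
    # Claim-10-style InfoMap entry: 0 if the metric exceeds the threshold
    # (the area lacks matchable detail), else 1 (the match can be trusted).
    return 0 if cost > threshold else 1

# Synthetic stereo pair: the right view is the left view shifted one pixel left.
rng = np.random.default_rng(0)
left = rng.integers(0, 256, size=(6, 8)).astype(np.uint8)
right = np.empty_like(left)
right[:, :-1] = left[:, 1:]
right[:, -1] = left[:, -1]

cost = hamming_cost(left, right, 3, 4, disparity=1)  # aligned windows -> 0
reliable = info_map_value(cost, threshold=2)         # -> 1 (trusted match)
```

In a full pipeline, pixels whose InfoMap entry is zero would have their depth set to an unknown value rather than reported with false confidence, which is the distinction the claims draw between areas that do and do not contain matchable detail.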
Examiner's Note

The examiner has cited figures and paragraphs in the references as applied to the claims above for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well. The applicant is respectfully requested, in preparing responses, to fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passages as taught by the prior art or disclosed by the examiner. The examiner has also cited references in the PTO-892 that are not relied upon but are relevant and pertinent to the applicant's disclosure, and they may also read (as anticipatory or obviousness references) on the claims and claimed limitations. Applicant is advised to consider these references in preparing the response/amendments in order to expedite prosecution.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAYESH PATEL, whose telephone number is (571) 270-1227. The examiner can normally be reached Mon-Fri. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Andrew Bee, can be reached at 571-270-5183. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

JAYESH PATEL
Primary Examiner
Art Unit 2677

/JAYESH A PATEL/
Primary Examiner, Art Unit 2677

Prosecution Timeline

Jan 18, 2023: Application Filed
Mar 13, 2025: Non-Final Rejection — §102, §112
Jun 16, 2025: Response Filed
Oct 01, 2025: Final Rejection — §102, §112
Nov 24, 2025: Response after Non-Final Action
Dec 05, 2025: Request for Continued Examination
Dec 20, 2025: Response after Non-Final Action
Jan 23, 2026: Non-Final Rejection — §102, §112
Apr 02, 2026: Response Filed

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597170: METHOD AND APPARATUS FOR IMMERSIVE VIDEO ENCODING AND DECODING, AND METHOD FOR TRANSMITTING A BITSTREAM GENERATED BY THE IMMERSIVE VIDEO ENCODING METHOD (2y 5m to grant; granted Apr 07, 2026)
Patent 12579770: DETECTION SYSTEM, DETECTION METHOD, AND NON-TRANSITORY STORAGE MEDIUM (2y 5m to grant; granted Mar 17, 2026)
Patent 12561949: CONDITIONAL PROCEDURAL MODEL GENERATION (2y 5m to grant; granted Feb 24, 2026)
Patent 12555346: Automatic Working System, Automatic Walking Device and Control Method Therefor, and Computer-Readable Storage Medium (2y 5m to grant; granted Feb 17, 2026)
Patent 12536636: METHOD AND SYSTEM FOR EVALUATING QUALITY OF A DOCUMENT (2y 5m to grant; granted Jan 27, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 83%
With Interview: 86% (+2.7%)
Median Time to Grant: 2y 11m
PTA Risk: High

Based on 887 resolved cases by this examiner. Grant probability derived from career allow rate.
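The headline projections follow directly from the examiner's career counts. A quick sanity check (assuming, as the note above suggests, that grant probability is the raw allow rate and that the interview lift is applied additively; both assumptions are the editor's, not the tool's documented method):

```python
granted, resolved = 739, 887           # career counts reported above
allow_rate = granted / resolved        # -> reported as the 83% grant probability
interview_lift = 0.027                 # the reported +2.7% interview lift
with_interview = allow_rate + interview_lift

print(f"allow rate: {allow_rate:.1%}")         # allow rate: 83.3%
print(f"with interview: {with_interview:.1%}") # with interview: 86.0%
```

The rounded figures (83% and 86%) match the dashboard's Grant Probability and With Interview numbers.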
