Prosecution Insights
Last updated: April 19, 2026
Application No. 18/850,271

PARALLAX INFORMATION GENERATION DEVICE, PARALLAX INFORMATION GENERATION METHOD, AND PARALLAX INFORMATION GENERATION PROGRAM

Status: Final Rejection (§103)
Filed: Sep 24, 2024
Examiner: CHIO, TAT CHI
Art Unit: 2486
Tech Center: 2400 — Computer Networks
Assignee: Panasonic Intellectual Property Management Co., Ltd.
OA Round: 2 (Final)
Grant Probability: 73% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 2m
With Interview: 90%

Examiner Intelligence

Career Allow Rate: 73%, above average (610 granted / 836 resolved; +15.0% vs TC avg)
Interview Lift: +16.6% (allow rate for resolved cases with vs. without an interview)
Typical Timeline: 3y 2m average prosecution; 49 applications currently pending
Career History: 885 total applications across all art units
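As a sanity check, the headline rate is direct arithmetic: 610 granted / 836 resolved ≈ 73.0%, matching the displayed career allow rate.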

Statute-Specific Performance

§101: 8.7% (-31.3% vs TC avg)
§103: 52.4% (+12.4% vs TC avg)
§102: 19.9% (-20.1% vs TC avg)
§112: 7.2% (-32.8% vs TC avg)
TC average is an estimate; based on career data from 836 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed 9/18/2025 have been fully considered, but they are not persuasive. Applicant argues that the combination of Ishigami and Yokota does not explicitly teach "identifies a dynamic area in an image capturing scene, by comparing the plurality of images between frames and determines, as the process target area, an area including a part or the entirety of the dynamic area and a part of a static area that is an area other than the dynamic area." In response, the examiner respectfully disagrees.

Ishigami teaches the following. Subsequent to step 110, the processing of steps 115 to 150 is repeated a number of times equal to the number of pixels of the disparity map. At step 115 in each cycle of the repetition, a correction-target pixel and a plurality of surrounding pixels are newly set for the cycle. As shown in FIG. 4, a block area 40 is set, composed of 9 rows × 9 columns of dots centering on a correction-target pixel 41. In the block, the plurality of surrounding pixels correspond to the pixels that remain after removal of the correction-target pixel 41 from all the pixels in the block area 40. It should be noted that the size and shape of the block area 40 may be appropriately changed.

At the subsequent steps 120, 125, 130 and 135, various weights are set for the correction-target pixel 41 and each of the surrounding pixels. Each set weight is used as an amount indicating a degree of contribution in correcting the disparity of the correction-target pixel 41 using a weighted average at step 145, described later.

First, at step 120, a first weight W_X is set for the correction-target pixel 41 and each of the surrounding pixels. The weight W_X is set according to the distance, on the image coordinate, from the pixel targeted for weight calculation (the correction-target pixel 41 or a surrounding pixel; hereinafter referred to as the "weight-calculation-target pixel") to the correction-target pixel 41. As indicated by arrow 42 in FIG. 5, a position more distant from the correction-target pixel 41 in the block area 40 of the right-camera image 21 makes longer the distance, on the image coordinate, from a weight-calculation-target pixel at that position to the correction-target pixel 41. Specifically, when the image coordinate value of the correction-target pixel 41 is (x, y) and the image coordinate value of a weight-calculation-target pixel is (x', y') on the image coordinate (X, Y) shown in FIG. 5, the weight is set as:

W_X(x, y, x', y') = \exp\left(-\frac{(x - x')^2 + (y - y')^2}{2\sigma_X^2}\right)   (Math. 1)

At the subsequent step 125, a second weight W_I is set for the correction-target pixel 41 and each of the surrounding pixels. The weight W_I is set according to the difference in luminance value between a weight-calculation-target pixel and the correction-target pixel 41 (the difference between luminance values in the right-camera image 21). As shown in FIG. 7, the correction-target pixel 41 includes a part of a human body as a subject. As shown in the figure, in the block area 40, the luminance of an area 43 containing the human body is obviously different from the luminance of an area 44 containing the background.
In such a case, making use of the difference in luminance, the disparity of the background, which is not very relevant to the disparity of the correction-target pixel 41, is permitted to have a low degree of contribution in the disparity correction calculation for the correction-target pixel 41. Specifically, when the luminance value of the correction-target pixel 41 (in the right-camera image 21) is represented by I_{xy}, and the luminance value of a weight-calculation-target pixel (in the right-camera image 21) is represented by I_{x'y'}, the calculation is as follows:

W_I(I_{xy}, I_{x'y'}) = \exp\left(-\frac{(I_{xy} - I_{x'y'})^2}{2\sigma_I^2}\right)   (Math. 2)

That is, as the difference between the luminance value of the correction-target pixel 41 and the luminance value of a weight-calculation-target pixel becomes larger, the weight W_I of the weight-calculation-target pixel is permitted to be lighter. It should be noted that σ_I is a preset constant.

At the subsequent step 130, a third weight W_Z is set for the correction-target pixel 41 and each of the surrounding pixels. The weight W_Z is set according to the difference in position in the depth direction between a weight-calculation-target pixel and the correction-target pixel 41 when converted to a three-dimensional coordinate. As shown in FIG. 7, the correction-target pixel 41 includes a part of a human body as a subject. As shown in the figure, the block area 40 may include a subject 45 (e.g., a white line on a road surface) which has a luminance close to that of the human body, but whose distance from the cameras 1 and 2 is greatly different from that of the human body. In such a case, making use of the difference in distance from the cameras 1 and 2 (i.e., the positions in the depth direction), the disparity of the subject 45, which is not very relevant to the disparity of the correction-target pixel 41, is permitted to have a low degree of contribution in the disparity correction calculation for the correction-target pixel 41. Specifically, when the position in the depth direction of the correction-target pixel 41 is represented by Z_{xy}, and the position in the depth direction of a weight-calculation-target pixel is represented by Z_{x'y'}, the calculation is as follows:

W_Z(Z_{xy}, Z_{x'y'}) = \exp\left(-\frac{(Z_{xy} - Z_{x'y'})^2}{2\sigma_Z^2}\right)   (Math. 3)

That is, as the difference between the position in the depth direction of the correction-target pixel 41 (whose value becomes larger the more distant it is from a camera) and the position in the depth direction of a weight-calculation-target pixel becomes larger, the weight W_Z of the weight-calculation-target pixel is permitted to be lighter. It should be noted that σ_Z is a preset constant. [0040] – [0048].

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-6 and 9-13 are rejected under 35 U.S.C. 103 as being unpatentable over Ishigami et al. (US 2015/0228057 A1) in view of Yokota et al. (US 2018/0336701 A1).

Consider claim 1: Ishigami teaches a parallax information generation device, comprising: an imaging unit configured to capture a plurality of images with different viewpoints ([0026], [0031] – [0032], Fig. 1); and a process target area determination unit configured to set a base image and a reference image out of the plurality of images captured by the imaging unit ([0033] – [0039]) and determine a process target area to be subjected to a predetermined image processing in the base image and the reference image ([0040] – [0048]); wherein the process target area determination unit identifies a dynamic area in an image capturing scene by comparing the plurality of images between frames ([0040] – [0048]), and determines, as the process target area, an area including a part or the entirety of the dynamic area and a part of a static area that is an area other than the dynamic area ([0040] – [0048]).

However, Ishigami does not explicitly teach an image processing unit configured to perform the predetermined image processing on the process target area of each of the base image and the reference image to generate parallax information. Yokota teaches an image processing unit configured to perform the predetermined image processing on the process target area of each of the base image and the reference image to generate parallax information ([0079], [0093] – [0104]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the known technique of generating parallax information, because such incorporation would help perform object recognition based on a parallax image ([0067]).

Consider claim 2: Yokota teaches that the predetermined image processing is a stereo matching process ([0079], [0093] – [0104]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the known technique of generating parallax information, because such incorporation would help perform object recognition based on a parallax image ([0067]).

Consider claim 3: Ishigami teaches that the process target area determination unit determines the process target area so that the number of pixels in the process target area satisfies a predetermined condition ([0040] – [0048]).

Consider claim 4: Ishigami teaches that the predetermined condition is the number of pixels in the process target area being constant between frames ([0040] – [0048]).

Consider claim 5: Ishigami teaches that the process target area determination unit sets, in the static area, an area to be preferentially incorporated into the process target area ([0040] – [0048]).
Consider claim 6: Yokota teaches that the image processing unit comprises a corresponding point search unit configured to identify at least two corresponding pixels in the reference image, which are pixels resembling the pixels in the base image ([0079], [0085] – [0089], [0093] – [0104]), and to store the correspondence relationship of the identified pixels as correspondence information ([0079], [0085] – [0089], [0093] – [0104]); and that the process target area determination unit identifies a pixel position corresponding to a pixel position in the dynamic area by referring to the correspondence information, and incorporates the identified pixel position into the process target area ([0079], [0085] – [0089], [0093] – [0104]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the known technique of generating parallax information, because such incorporation would help perform object recognition based on a parallax image ([0067]).

Consider claim 9: Yokota teaches that the image processing unit comprises a reliability information generator configured to generate reliability information indicating the reliability of a correspondence relationship between the base image and the reference image ([0079], [0085] – [0089], [0093] – [0104], [0161]), and that it generates parallax information for an image area for which the reliability information indicates higher reliability than a predetermined value ([0079], [0085] – [0089], [0093] – [0104], [0161]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the known technique of generating parallax information, because such incorporation would help perform object recognition based on a parallax image ([0067]).

Consider claim 10: Yokota teaches that the image processing unit comprises a distance information generator configured to generate distance information of a target by using the parallax information ([0109] – [0113]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the known technique of generating parallax information, because such incorporation would help perform object recognition based on a parallax image ([0067]).

Consider claim 11: claim 11 recites the method implemented by the device recited in claim 1, and is therefore rejected for the same reasons.

Consider claim 12: claim 12 recites the method implemented by the device recited in claim 2, and is therefore rejected for the same reasons.

Consider claim 13: the combination of Ishigami and Yokota teaches a non-transitory storage medium storing a program ([0027] of Ishigami) configured to cause a computer to execute the parallax information generation method of claim 11 (see the rejection of claim 11).

Allowable Subject Matter

Claims 7-8 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TAT CHI CHIO, whose telephone number is (571) 272-9563. The examiner can normally be reached Monday-Thursday, 10am-5pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, JAMIE J ATALA, can be reached at 571-272-7384. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/TAT C CHIO/
Primary Examiner, Art Unit 2486
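For readers outside the stereo-vision art, the Ishigami passage quoted in the Response to Arguments describes what amounts to a joint (trilateral-style) weighted filter over a disparity map. The sketch below illustrates that scheme in Python. It is a minimal illustration, not the reference's implementation: the multiplicative combination of W_X, W_I and W_Z, the σ defaults, the skipped image borders, and all function and parameter names are assumptions, and the fourth weight Ishigami sets at step 135 (not quoted above) is omitted.

```python
import numpy as np

def correct_disparity(disparity, luminance, depth,
                      block=9, sigma_x=3.0, sigma_i=10.0, sigma_z=0.5):
    """Weighted-average disparity correction over a block x block
    neighborhood, following the scheme quoted from Ishigami
    [0040]-[0048]. Inputs are assumed to be float arrays of equal
    shape; the weight combination and sigma values are assumptions."""
    h, w = disparity.shape
    r = block // 2
    out = disparity.astype(np.float64)

    # W_X (Math. 1): spatial weight. It depends only on the offset from
    # the correction-target pixel, so it is precomputed once per call.
    dy, dx = np.mgrid[-r:r + 1, -r:r + 1]
    w_x = np.exp(-(dx ** 2 + dy ** 2) / (2 * sigma_x ** 2))

    for y in range(r, h - r):
        for x in range(r, w - r):
            d_blk = disparity[y - r:y + r + 1, x - r:x + r + 1]
            i_blk = luminance[y - r:y + r + 1, x - r:x + r + 1]
            z_blk = depth[y - r:y + r + 1, x - r:x + r + 1]

            # W_I (Math. 2): pixels whose luminance differs strongly
            # from the target contribute less (background vs. human
            # body in Ishigami's FIG. 7).
            w_i = np.exp(-(i_blk - luminance[y, x]) ** 2
                         / (2 * sigma_i ** 2))

            # W_Z (Math. 3): pixels at a clearly different depth
            # contribute less (the white road line, subject 45, FIG. 7).
            w_z = np.exp(-(z_blk - depth[y, x]) ** 2
                         / (2 * sigma_z ** 2))

            # Step 145: the weighted average over the block replaces
            # the disparity of the correction-target pixel.
            wgt = w_x * w_i * w_z
            out[y, x] = np.sum(wgt * d_blk) / np.sum(wgt)
    return out

def disparity_to_depth(disparity, focal_px, baseline_m):
    """Textbook stereo relation Z = f * B / d, relevant to claim 10's
    distance information generator. This formula is standard stereo
    geometry, not a quotation from either reference."""
    return focal_px * baseline_m / np.maximum(disparity, 1e-6)
```

Because every weight equals 1 at the correction-target pixel itself, the denominator of the weighted average is strictly positive, so no zero-division guard is needed in the averaging step.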

Prosecution Timeline

Sep 24, 2024 — Application Filed
Sep 24, 2024 — Response after Non-Final Action
Jul 08, 2025 — Non-Final Rejection (§103)
Sep 18, 2025 — Response Filed
Dec 12, 2025 — Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12587653
Spatial Layer Rate Allocation
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12549764
THREE-DIMENSIONAL DATA ENCODING METHOD, THREE-DIMENSIONAL DATA DECODING METHOD, THREE-DIMENSIONAL DATA ENCODING DEVICE, AND THREE-DIMENSIONAL DATA DECODING DEVICE
Granted Feb 10, 2026 (2y 5m to grant)
Patent 12549845
CAMERA SETTING ADJUSTMENT BASED ON EVENT MAPPING
Granted Feb 10, 2026 (2y 5m to grant)
Patent 12546657
METHODS AND SYSTEMS FOR REMOTE MONITORING OF ELECTRICAL EQUIPMENT
Granted Feb 10, 2026 (2y 5m to grant)
Patent 12549710
MULTIPLE HYPOTHESIS PREDICTION WITH TEMPLATE MATCHING IN VIDEO CODING
Granted Feb 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 73%
With Interview: 90% (+16.6% lift)
Median Time to Grant: 3y 2m
PTA Risk: Moderate
Based on 836 resolved cases by this examiner. Grant probability derived from career allow rate.
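As a rough check, the interview-adjusted probability appears to be simple addition of the two displayed figures: 73% career allow rate + 16.6% interview lift = 89.6%, shown rounded as 90%.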
