Prosecution Insights
Last updated: April 19, 2026
Application No. 18/852,607

TRAVEL ENVIRONMENT DECISION APPARATUS, VEHICLE, TRAVEL ENVIRONMENT DECISION METHOD, AND NON-TRANSITORY COMPUTER READABLE MEDIUM

Status: Final Rejection (§103)

Filed: Sep 30, 2024
Examiner: ZEWEDE, ASTEWAYE GETTU
Art Unit: 2481
Tech Center: 2400 (Computer Networks)
Assignee: NEC Corporation
OA Round: 2 (Final)
Grant Probability: 80% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 7m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 80%, above average (36 granted / 45 resolved; +22.0% vs TC avg)
Interview Lift: +37.5% among resolved cases with interview
Avg Prosecution: 2y 7m (typical timeline)
Currently Pending: 18 applications
Total Applications: 63 (across all art units)

Statute-Specific Performance

§101: 0.7% (-39.3% vs TC avg)
§103: 67.0% (+27.0% vs TC avg)
§102: 10.4% (-29.6% vs TC avg)
§112: 10.4% (-29.6% vs TC avg)

Based on career data from 45 resolved cases; Tech Center averages are estimates.

Office Action (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

This Office Action is in response to the amendment filed on 12/11/2025. Claims 2 and 7 have been cancelled, and claim 16 has been newly added. Thus, claims 1, 3-6, and 8-16 are pending for examination.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 09/30/2024 was filed in accordance with the provisions of 37 CFR 1.97. Accordingly, it is being considered by the examiner.

Response to Amendments

Applicant's Amendment filed on September 30, 2025 has been entered and made of record. Claims 1, 3-5, 8-9, and 11-15 have been amended, and claim 16 has been added. Thus, claims 1, 3-6, and 8-16 are pending for examination.

Response to Arguments

Applicant's arguments (see Remarks, pages 8-9, filed December 11, 2025) with respect to the rejection of claims 1, 14, and 15 have been fully considered and are persuasive with respect to the prior ground of rejection. Accordingly, that rejection is withdrawn. However, upon further consideration of the amended subject matter, a new ground of rejection is made in view of Zou et al. (US-2008/0159623-A1).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 3, 12-14, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Wedajo et al. (US-20140184798-A1), hereinafter "Wedajo", in view of Nishimura et al. (US-11458979-B2), hereinafter "Nishimura", further in view of Zou et al. (US-2008/0159623-A1), hereinafter "Zou".

Regarding Claim 1 (currently amended)

Wedajo discloses: "A travel environment decision apparatus" (Wedajo, [0001] "The vehicle comprises a tunnel detection device designed…") comprising: . . . "acquire a photographed image captured by a photographing apparatus installed in a vehicle;" (Wedajo, [0050] "In the upper part of a front screen of the vehicle 10 a camera 16 is arranged, which captures … image areas … of the surroundings of the vehicle 10….") "perform analysis processing with respect to the photographed image," (Wedajo, [0052] "The feature detection device 46 is capable of detecting features or edges in the image areas … captured by the camera 16.
These features are characterized by abrupt changes in brightness within the image areas …")

Wedajo does not explicitly disclose: a memory configured to store instructions; a processor configured to execute the instructions to: determine a first region being a region associated with a sky in the photographed image; determine a reference region in which a predetermined criterion is satisfied in the photographed image; and decide whether the vehicle is present within a structure, based on a ratio occupied by the first region in the reference region.

However, in the same field of endeavor, Nishimura discloses more explicitly the following: "a memory" (Fig. 2, "Memory (RAM) 202", "storage device 204") "configured to store instructions;" (Nishimura, col. 7, lines 43-45 "…a Random Access Memory (RAM) 202, a Read Only Memory (ROM) 203, a storage device 204…") "and a processor configured to execute the instructions to:" (Nishimura, col. 7, lines 42-51 "A computer 200 includes, for example, a Central Processing Unit (CPU) 201…. The CPU 201 is a computational device that reads out programs, data, or the like stored in the ROM 203, the storage device 204, or the like on the RAM 202 and executes processing to realize functions of the computer 200.")

Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the application to modify the system of Wedajo with the teachings of Nishimura in order to include a memory configured to store instructions and a processor configured to execute the instructions, as suggested by Nishimura. Such modification would have been an obvious design choice because incorporating a memory and a processor for executing stored instructions is a well-known and standard implementation, and would have predictably improved the system's reliability "to avoid an erroneous tunnel detection" (Wedajo, [0056]).

Wedajo and Nishimura do not disclose: determine a first region being a region associated with a sky in the photographed image; determine a reference region in which a predetermined criterion is satisfied in the photographed image; and decide whether the vehicle is present within a structure, based on a ratio occupied by the first region in the reference region.

However, in the same field of endeavor, Zou discloses more explicitly the following: "determine a first region being a region associated with a sky in the photographed image;" (Zou, [0013] "predefining an air region and a terrestrial region for an image captured by a camera apparatus;" see also ¶[0041], [0061]) "determine a reference region in which a predetermined criterion is satisfied in the photographed image;" (Zou, [0046] "In step S130, the following formulas (2) and (3) are used to respectively calculate the percentage Percentage_daytime of pixels for highlight zones (with a lightness value greater than or equal to T1) and the percentage Percentage_nighttime of pixels for lowlight zones (with a lightness value less than or equal to T2) in the air region.
Percentage_daytime = h_f(f'(x,y)) / h_f(f(x,y)),   T1 ≤ f'(x,y) ≤ 255   (2)
Percentage_nighttime = h_f(f'(x,y)) / h_f(f(x,y)),   0 ≤ f'(x,y) ≤ T2   (3)"

[0047] "In formulas 2 and 3, f'(x,y) and f(x,y) represent lightness values for pixels with coordinates x and y, T1 represents the lower limit lightness value of the highlight zones, T2 represents the upper limit lightness value of the lowlight zones, T1 is larger than T2, and T1, T2 can be set according to the actual requirements." [0048] "In step S140, it is judged whether Percentage_daytime of the air region is larger than 30%. In step S150, when it is judged that Percentage_daytime of the air region is larger than 30%, it is determined that the environment (i.e., the environment picked up by the camera apparatus) in which the object vehicle travels in the image f is a daytime lighting condition." See also [0062]-[0065].

The reference region is the region subjected to the ratio-based threshold evaluation (the air region or the terrestrial region, depending on the criterion). Thus, the reference region is the region in which the threshold ratio criterion is satisfied, and the criterion is explicitly predefined (30%, 70%, 80%).

"and decide whether the vehicle is present within a structure, based on a ratio occupied by the first region in the reference region." (Zou, [0062] "The single-frame determining unit 108 further comprises a judging module 1082 and a determining module 1084. The judging module 1082 judges whether a ratio of the number of pixels with their lightness values larger than a first lightness value (T1) in the air region of the input image to the number of all pixels of the air region in the input image is greater than a first ratio such as 30%. When the judgment result is Yes, the determining module 1084 determines the photo environment of the camera apparatus as the daytime lighting condition." See also ¶[0064]-[0065], which use the ratio in the terrestrial region when the air region is obstructed.)

That is, the system determines environmental conditions based on the proportion (ratio) of sky-related region pixels. When the air region is obstructed by buildings or mountains, Percentage_daytime becomes low, and the decision changes accordingly. Thus, the environmental determination is made based on how much sky region is present (the ratio of air pixels). As disclosed in Zou ¶[0049] and [0054], when the sky (air) region is obstructed by buildings or mountains, the proportional occupancy of highlight pixels changes, and the environmental determination changes correspondingly. Accordingly, Zou determines the environmental condition based on the proportional occupancy of sky-region pixels. When the sky region is substantially reduced due to obstruction, the ratio-based evaluation reflects the presence of an obstructing structure. Thus, Zou inherently determines whether the vehicle is within an obstructed structure based on the ratio occupied by the sky-related region in the evaluated reference region.

Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the application to modify the teachings of Wedajo-Nishimura in view of Zou to provide a system configured to determine a first region associated with the sky in the photographed image, determine a reference region in which a predetermined criterion is satisfied in the photographed image, and decide whether the vehicle is present within a structure, based on a ratio occupied by the first region in the reference region, as suggested by Zou.
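The ratio test Zou describes in [0046]-[0048] can be sketched in a few lines. This is an illustrative reading only: the function name, the region representation (a flat list of lightness values), and the default T1/T2 values are assumptions, since Zou leaves T1 and T2 configurable.

```python
def classify_lighting(air_region, t1=200, t2=60, day_ratio=0.30, night_ratio=0.70):
    """Classify lighting from lightness values (0-255) in a predefined air (sky) region.

    Mirrors Zou's formulas (2)-(3): the fraction of highlight pixels
    (lightness >= T1) vs. lowlight pixels (lightness <= T2) in the air region.
    t1/t2 defaults here are illustrative assumptions, not values from Zou.
    """
    total = len(air_region)
    pct_daytime = sum(1 for v in air_region if v >= t1) / total
    pct_nighttime = sum(1 for v in air_region if v <= t2) / total
    if pct_daytime > day_ratio:      # Zou [0048]: > 30% highlights -> daytime
        return "daytime"
    if pct_nighttime > night_ratio:  # 70% is one of the predefined criteria cited above
        return "nighttime"
    return "undetermined"

# A mostly bright sky region classifies as daytime
print(classify_lighting([230, 245, 210, 40, 250, 255]))  # daytime
```

Note how an obstructed sky drives `pct_daytime` down, which is the behavior the rejection leans on to reach the "within a structure" limitation.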
One of ordinary skill in the art would have been motivated to incorporate the ratio-based sky-region evaluation of Zou into Wedajo-Nishimura in order to improve environment recognition accuracy, particularly in situations where the sky (air) region is partially or fully obstructed by a background structure (e.g., buildings, bridges, or tunnels), thereby enabling reliable determination of environmental conditions based on the proportional occupancy of sky-region pixels (Zou, [0011]).

Note: The motivation utilized in the rejection of claim 1 applies equally to claims 3, 12-14, and 15.

Regarding Claim 3 (currently amended)

Wedajo-Nishimura-Zou disclose: "The travel environment decision apparatus according to claim 1, wherein the reference region is a region above a position associated with a road in the photographed image." (Wedajo, [0078] "The height of the image area 20 in FIG. 15 corresponds to the height of an object of 2 m, which is positioned at a distance of 30 m in front of the camera 16. With regard to the height above the road surface 42 the image area 20 is centered with reference to a point which is at a height D of 4 m above the road surface 42, wherein this height D of 4 m is viewed at a distance of 30 m from the camera 16." See also ¶[0079], which further describes that the center of the image area 20 is positioned at a height D above the road surface 42, reinforcing that the reference region is defined above a position associated with the road.)

Regarding Claim 12

Wedajo-Nishimura-Zou disclose: "A vehicle comprising: the travel environment decision apparatus according to claim 1, and the photographing apparatus that is installed in the vehicle to capture the photographed image." (Wedajo, [0050] "In the upper part of a front screen of the vehicle 10 a camera 16 is arranged, which captures … image areas … of the surroundings of the vehicle 10….")

Regarding Claim 13

Wedajo-Nishimura-Zou disclose: "The vehicle according to claim 12," wherein the processor is configured to further execute the instructions to: (Nishimura, col. 7, lines 48-51 "…a computational device that reads out programs, data, … like on the RAM 202 and executes processing…") "control the vehicle." (Nishimura, col. 5, lines 66-67 and col. 6, lines 1-2 "The on-vehicle device 110 is an information apparatus, … such as an on-vehicle Electronic Control Unit (ECU), which is mounted on the vehicle 10.")

Regarding Claim 14

Claim 14 recites limitations corresponding to those of claim 1, but in the form of a travel environment decision method rather than a travel environment decision apparatus. Accordingly, the rationale set forth with respect to claim 1 applies equally to claim 14. With respect to the limitation "by a computer," see Nishimura, col. 7, lines 42-51 ("A computer 200 includes, for example, a Central Processing Unit (CPU) 201…. The CPU 201 is a computational device that reads out programs, data, or the like stored in the ROM 203, the storage device 204, or the like on the RAM 202 and executes processing to realize functions of the computer 200.")

Regarding Claim 15

Claim 15 recites limitations corresponding to those of claim 1, but as a non-transitory computer readable medium rather than a travel environment decision apparatus. Accordingly, the rationale set forth with respect to claim 1 applies equally to claim 15.
With respect to the limitation "A non-transitory computer readable medium storing a program for causing a computer to execute:", see Nishimura, col. 4, lines 59-63 ("a non-transitory computer readable storage medium storing a program for causing a computer to execute the information processing …").

Claim Rejections - 35 USC § 103

Claims 4-8 are rejected under 35 U.S.C. 103 as being unpatentable over Wedajo-Nishimura-Zou further in view of SAKATA KATSUMI (JP-2007328630-A) (translation provided; citations are from the translated document), hereinafter "Sakata".

Regarding Claim 4 (currently amended)

Wedajo-Nishimura-Zou-Sakata disclose: "The travel environment decision apparatus according to claim 3, wherein the position associated with the road is a position of a vanishing point on the road in the photographed image." (Sakata, [0057] "The vanishing point recognizing unit 12a performs processing for recognizing the vanishing point in the screen using an optical flow or the like. Then, the background area dividing unit 12 divides the area above the vanishing point as empty as shown in FIG.")

Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the application to modify the teachings of Wedajo-Nishimura-Zou with Sakata to create the system of Wedajo-Nishimura-Zou as outlined above, in which "the position associated with the road is a position of a vanishing point on the road in the photographed image," as suggested by Sakata. The reasoning is that "detection accuracy is improved and the processing load is reduced by detecting candidate regions" (Sakata, [0042]).

Note: The motivation utilized in the rejection of claim 4 applies equally to claims 5, 6, and 8.
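Sakata's division at the vanishing point (claim 4's "position associated with the road") amounts to a one-line partition of image rows. The sketch below is a minimal illustration; the function name, row indexing convention, and the half-open row ranges are assumptions, not Sakata's implementation.

```python
def split_at_vanishing_point(image_height, vanishing_y):
    """Partition image rows per Sakata's scheme: everything above the
    vanishing point is treated as sky, everything at/below as ground.
    Rows are indexed top-down; ranges are half-open (illustrative choice)."""
    sky_rows = range(0, vanishing_y)                 # above the vanishing point -> sky
    ground_rows = range(vanishing_y, image_height)   # at/below -> road and roadside
    return sky_rows, ground_rows

# A 480-row image with the vanishing point at row 220
sky, ground = split_at_vanishing_point(image_height=480, vanishing_y=220)
print(len(sky), len(ground))  # 220 260
```

The sky rows produced here would serve as the "region above a position associated with the road" that claims 3-5 use as the reference region.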
Regarding Claim 5 (currently amended)

Wedajo-Nishimura-Zou-Sakata disclose: "The travel environment decision apparatus according to claim 1, wherein the analysis processing with respect to the photographed image further includes determining a second region being a region associated with a subject of a predetermined type other than the sky in the photographed image, and determining, as the reference region, the first region and the second region, in a region above a position associated with the road in the photographed image." (Sakata, [0054] "the background region dividing unit 12 divides the image processed by the preprocessing unit 11 into three background regions, for example, the sky, the road, and the road outside, for each background. Then, the reference pattern selection unit 14 selects a reference pattern to be used by the pedestrian candidate region detection unit 19 depending on from which background region the determination region is cut out." Sakata ¶[0057] states: "The vanishing point recognizing unit 12a … divides the area above the vanishing point as empty as shown in FIG." Sakata ¶[0058] continues: "The lane recognition unit 12 … divides the area inside the lane as the inside of the road as shown in FIG. 5, and sets the outside of the lane as the outside of the road.")

Regarding Claim 6

Wedajo-Nishimura-Zou-Sakata disclose: "The travel environment decision apparatus according to claim 5, wherein the second region includes a region associated with at least one of an obstacle and a structure." (Sakata, [0040] "The collision determination unit 20 predicts a collision between a pedestrian or another vehicle and the host vehicle using the recognition results obtained by the vehicle recognition unit 16, the white line recognition unit 17, the pedestrian recognition unit 18, and the position information output by the navigation device 30." Wedajo ¶[0015] and ¶[0017] further disclose determining whether the vehicle is inside a structure, explaining that "…If in both image areas a low average brightness is determined, there is a raised likelihood that the presence of a tunnel is correctly assumed" ([0015]), and ¶[0016] explains "Moreover, the second image area arranged below the first image area allows for distinguishing a bridge traversing the road on which the vehicle travels from a tunnel.")

Regarding Claim 8

Wedajo-Nishimura-Zou-Sakata disclose: "The travel environment decision apparatus according to claim 5, wherein whether the vehicle is present within the structure is decided, based on whether surroundings of the first region are surrounded by the second region in the reference region." (Wedajo, [0002] "if the brightness of the obliquely upper image area is below a threshold value of brightness, the number and size of particularly bright areas within this image area are determined. If the number and size of the particularly bright areas in this image area are larger than or equal to a threshold value, it is concluded that these bright areas are light sources mounted in the tunnel.")

Claim Rejections - 35 USC § 103

Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Wedajo-Nishimura-Zou-Sakata further in view of Shimoura et al. (US-5638116-A), hereinafter "Shimoura".

Regarding Claim 16 (new)

Wedajo-Nishimura-Zou-Sakata disclose "The travel environment decision apparatus according to claim 4," but do not explicitly disclose "wherein the processor is configured to execute the instructions to determine the vanishing point on the road by using a line extending in parallel to the road along which the vehicle travels."

However, in the same field of endeavor, Shimoura discloses more explicitly the following: (Shimoura, col. 10, lines 23-27 and col. 13, lines 54-57 "…the road vanishing point can be obtained by extracting, from an image acquired by the on-vehicle camera, two road parallel lines which lengthen in parallel with the road in the image plane, and by obtaining the intersection of the two road parallel lines." "The road vanishing point calculation processing is a processing for obtaining the vanishing point in the image, which is determined as the intersection of right and left edges of the road, on which the vehicle is moving.
The road vanishing point is calculated based on the line candidate points obtained by the line candidate point extraction processing.")

Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the application to modify the teachings of Wedajo-Nishimura-Zou-Sakata in view of Shimoura to create the system of Wedajo-Nishimura-Zou-Sakata as outlined above, in order to provide "wherein the processor is configured to execute the instructions to determine the vanishing point on the road by using a line extending in parallel to the road along which the vehicle travels," as suggested by Shimoura. The reasoning is that this modification "makes it possible to improve the calculation accuracy of the road vanishing point" (col. 23, lines 36-37).

Claim Rejections - 35 USC § 103

Claims 9 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Wedajo-Nishimura-Zou further in view of OTSUKI TOMOE (JP-2019211822-A) (translation provided; citations are from the translated document), hereinafter "Otsuki".

Regarding Claim 9 (currently amended)

Wedajo-Nishimura-Zou disclose "The travel environment decision apparatus according to claim 1," but do not explicitly disclose "wherein the first region is determined by using a learning model in which the photographed image is input, and region information for dividing into each region included in the photographed image is output."

However, in the same field of endeavor, Otsuki discloses more explicitly the following: (Otsuki, [0043] "…The dividing unit 23 divides the image 100 input from the acquiring unit 22 into a plurality of divided regions, and outputs the image of each divided region to the road surface determining unit 24. The road surface determination unit 24 reads the road surface image machine learning model 31 … inputs the image of each divided region input from the division unit 23 to the road surface image machine learning model 31." [0044] "The road surface machine learning model 31 conceptually includes a processing layer having a three-layer structure of an input layer 31a, …, and an output layer 31c, … provided.")

Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the application to modify the teachings of Wedajo-Nishimura-Zou with Otsuki to create the system of Wedajo-Nishimura-Zou as outlined above, in order to implement that "the first region is determined by using a learning model in which the photographed image is input, and region information for dividing into each region included in the photographed image is output," as suggested by Otsuki. This modification enhances the ability to accurately determine an unsuitable traveling area or a suitable traveling area on a road on which a vehicle travels (Otsuki, [0005]).

Regarding Claim 10

Wedajo-Nishimura-Zou-Otsuki disclose: "The travel environment decision apparatus according to claim 9, wherein the learning model is one of a plurality of learning models according to an attribute of a road, and the first region is determined by using the learning model, among the plurality of learning models, according to an attribute of a road along which the vehicle travels." (Otsuki, [0036] "The learning unit 21 acquires a plurality … of road surface images … and generates and stores a road surface image machine learning model 31 using the road surface images as teaching materials …" [0040] "The learning unit 21 supervises and machine-learns the features such as the color of the road surface, the color, the shape, and the arrangement of the road surface for each of the road surface images…").
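Claim 10's per-road-attribute model selection reduces to a simple dispatch: pick the learning model matching the road attribute, run it, and keep the sky region it outputs. The sketch below is a hypothetical illustration; the stand-in model functions, the `MODELS` table, and the dict-of-regions interface are assumptions, not Otsuki's actual API.

```python
# Stand-in segmentation "models" keyed by road attribute (claim 10's
# "plurality of learning models according to an attribute of a road").
# Real models would return per-pixel region masks; these return empty stubs.
def segment_paved(image):
    return {"sky": [], "road": []}

def segment_unpaved(image):
    return {"sky": [], "road": []}

MODELS = {"paved": segment_paved, "unpaved": segment_unpaved}

def determine_first_region(image, road_attribute):
    """Select the learning model matching the road attribute, run it on the
    photographed image, and return the sky ("first") region it outputs."""
    model = MODELS[road_attribute]       # model chosen per attribute of the road
    regions = model(image)               # region information dividing the image
    return regions["sky"]

print(determine_first_region(image=None, road_attribute="paved"))  # []
```

The point of the sketch is the dispatch structure, not the segmentation itself, which Otsuki delegates to the trained road surface image machine learning model.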
Claim Rejections - 35 USC § 103

Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Wedajo-Nishimura-Zou further in view of HOGI KENTA (JP-2019211822-A) (translation provided; citations are from the translated document), hereinafter "Hogi".

Regarding Claim 11

Wedajo-Nishimura-Zou disclose: "The travel environment decision apparatus according to claim 1, wherein the processor is configured to execute the instructions to:" (Nishimura, col. 7, lines 42-51 "A computer 200 includes, for example, a Central Processing Unit (CPU) 201…. The CPU 201 is a computational device that reads out programs, data, or the like stored in the ROM 203, the storage device 204, or the like on the RAM 202 and executes processing to realize functions of the computer 200.")

Wedajo-Nishimura-Zou do not explicitly disclose: acquire time-series photographed images including the photographed image; determine the first region in each of the time-series photographed images by performing the analysis processing with respect to each of the time-series photographed images; decide whether each of the time-series photographed images is a structure-inside image photographed within a structure, based on the first region determined in each of the time-series photographed images; and decide whether the vehicle is present within the structure, based on a decision result regarding each of the time-series photographed images.

However, in the same field of endeavor, Hogi discloses more explicitly the following:

"acquire time-series photographed images including the photographed image," (Hogi, [0086] "…in a state where no dark part is detected (the dark part flag DF is off), the far brightness FB is lower than 50, … the near brightness NB is 56. … when a state where the sky brightness SB is lower than 80 is detected continuously for 5 cycles, it is determined that the dark part is detected,…")

"determine the first region in each of the time-series photographed images by performing the analysis processing with respect to each of the time-series photographed images," (Hogi, [0096] "In the surrounding darkness detection process, … the predetermined condition is satisfied for the table setting number TSN continues for a predetermined time (5 cycles). Also in the detection process, the setting of the dark portion flag DF is changed only when a predetermined condition is satisfied for the near brightness NB, the far brightness FB, and the sky brightness SB for a predetermined time (5 cycles).")

"decide whether each of the time-series photographed images is a structure-inside image photographed within a structure, based on the first region determined in each of the time-series photographed images," (Hogi, [0086] "In this way, in a state where no dark part is detected (the dark part flag DF is off), the far brightness FB is lower than 50, the far brightness FB is lower than the far brightness PFB one cycle before, and the near brightness NB is 56. As described above, when a state where the sky brightness SB is lower than 80 is detected continuously for 5 cycles, it is determined that the dark part is detected, and the dark part flag DF is set to on.")

"and decide whether the vehicle is present within the structure, based on a decision result regarding each of the time-series photographed images." (Hogi, [0088] "On the other hand, if YES is determined in step 300, it is determined in step 335 whether the far brightness FB is 50 or more, the near brightness NB is lower than 56, or the sky brightness SB is 80 or more.")

Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the application to modify the teachings of Wedajo-Nishimura-Zou with Hogi to create the system of Wedajo-Nishimura-Zou as outlined above, in order to acquire time-series photographed images, determine the first region through analysis of those images, and decide whether the vehicle is present within a structure, as suggested by Hogi. The reasoning is that detection accuracy is improved by detecting dark portions based on the far luminance and the sky luminance (Hogi, [0012]).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ASTEWAYE GETTU ZEWEDE, whose telephone number is (703) 756-1441. The examiner can normally be reached Mo-Fr 8:30 am to 5:30 pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, William Vaughn, can be reached at (571) 272-3922. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ASTEWAYE GETTU ZEWEDE/
Examiner, Art Unit 2481

/WILLIAM C VAUGHN JR/
Supervisory Patent Examiner, Art Unit 2481
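Stepping back from the rejection, the disputed limitation of claim 1 reduces to a single ratio test: how much of the reference region is sky. The sketch below is an illustrative reading of the claim language only; the mask representation, function name, and the 5% threshold are all assumptions, not values from the application or the cited art.

```python
def inside_structure(sky_mask, reference_mask, max_sky_ratio=0.05):
    """Decide whether the vehicle is inside a structure (e.g., a tunnel)
    from the ratio the sky ("first") region occupies within the reference
    region. Masks are flat lists of booleans over the same pixel grid;
    the 5% threshold is an assumed example, not a claimed value."""
    ref_pixels = sum(reference_mask)
    if ref_pixels == 0:
        return False  # no reference region found; make no positive decision
    sky_in_ref = sum(s and r for s, r in zip(sky_mask, reference_mask))
    return (sky_in_ref / ref_pixels) <= max_sky_ratio

# Reference region of 8 pixels, one of which is sky -> ratio 0.125, above threshold
print(inside_structure([True] + [False] * 7, [True] * 8))  # False
```

Framing the claim this way makes the examiner's Zou mapping concrete: Zou thresholds a pixel ratio in a predefined air region, while the claim thresholds the sky region's share of a dynamically determined reference region.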

Prosecution Timeline

Sep 30, 2024: Application Filed
Sep 04, 2025: Non-Final Rejection (§103)
Dec 11, 2025: Response Filed
Feb 11, 2026: Final Rejection (§103) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

- Patent 12598390: CONTROL APPARATUS, IMAGING APPARATUS, AND LENS APPARATUS (granted Apr 07, 2026; 2y 5m to grant)
- Patent 12587663: SLIDING-WINDOW RATE-DISTORTION OPTIMIZATION IN NEURAL NETWORK-BASED VIDEO CODING (granted Mar 24, 2026; 2y 5m to grant)
- Patent 12537980: Attention Based Context Modelling for Image and Video Compression (granted Jan 27, 2026; 2y 5m to grant)
- Patent 12470842: MULTIFOCAL CAMERA BY REFRACTIVE INSERTION AND REMOVAL MECHANISM (granted Nov 11, 2025; 2y 5m to grant)
- Patent 12470679: INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, PROGRAM, AND DISPLAY SYSTEM (granted Nov 11, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 80%
Grant Probability With Interview: 99% (+37.5% lift)
Median Time to Grant: 2y 7m
PTA Risk: Moderate

Based on 45 resolved cases by this examiner. Grant probability derived from career allow rate.
