Prosecution Insights
Last updated: April 19, 2026
Application No. 18/731,355

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM

Status: Final Rejection (§103)
Filed: Jun 03, 2024
Examiner: ZEWEDE, ASTEWAYE GETTU
Art Unit: 2481
Tech Center: 2400 (Computer Networks)
Assignee: Toyota Jidosha Kabushiki Kaisha
OA Round: 2 (Final)

Grant Probability: 80% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 7m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 80% (36 granted / 45 resolved), +22.0% vs TC avg (above average)
Interview Lift: +37.5% higher grant rate among resolved cases with an interview (summarized as "Strong +38% interview lift")
Typical Timeline: 2y 7m average prosecution
Career History: 63 total applications across all art units; 18 currently pending
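The headline numbers above are simple arithmetic on the examiner's career record. A quick reconstruction follows; the 72% without-interview baseline does not appear on this page and is an inferred assumption, chosen because it is the value consistent with a 99% with-interview rate and a +37.5% lift.

```python
# Reconstructing the dashboard arithmetic. Only 36/45, +37.5%, and 99% appear
# above; the 72% without-interview baseline is an inferred assumption.

granted, resolved = 36, 45
career_allow_rate = granted / resolved          # 0.80 -> the "80%" figure

with_interview = 0.99                           # grant rate with an interview
without_interview = 0.72                        # assumed baseline
interview_lift = with_interview / without_interview - 1  # 0.375 -> "+37.5%"

print(f"allow rate: {career_allow_rate:.0%}, interview lift: {interview_lift:+.1%}")
```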

Statute-Specific Performance

Statute   Examiner Rate   vs TC Avg
§101      0.7%            -39.3%
§103      67.0%           +27.0%
§102      10.4%           -29.6%
§112      10.4%           -29.6%

Tech Center average used as the baseline estimate • Based on career data from 45 resolved cases
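A quick consistency check on the table, assuming the second column is the examiner's rate minus the Tech Center average: every row implies the same 40.0% TC baseline.

```python
# Consistency check: assuming "vs TC Avg" = examiner rate - TC average,
# recover the implied Tech Center baseline for each statute.
rows = {"§101": (0.7, -39.3), "§103": (67.0, +27.0),
        "§102": (10.4, -29.6), "§112": (10.4, -29.6)}
for statute, (rate, delta) in rows.items():
    tc_avg = rate - delta
    print(f"{statute}: examiner {rate:.1f}%, implied TC avg {tc_avg:.1f}%")
```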

Office Action (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Status of Claims

2. This Office Action is in response to the amendment filed on 01/25/2026. Claims 1, 12, and 14 have been amended. Claims 8-11 have been cancelled. Claims 21-24 have been newly added. Accordingly, claims 1-7 and 12-24 are currently pending for examination.

Priority

3. Receipt is acknowledged of certified copies of papers submitted under 35 U.S.C. 119(a)-(d), which have been placed of record in the file.

Information Disclosure Statement

4. The information disclosure statements (IDS) submitted on 06/03/2024 and 02/26/2026 were filed in accordance with the provisions of 37 CFR 1.97. Accordingly, they are being considered by the examiner.

Response to Amendments

5. Applicant's Amendment filed on January 25, 2026 has been entered and made of record.

Response to Arguments

6. Applicant's arguments, see Remarks, pages 6-9, filed January 25, 2026, with respect to the rejection(s) of claims 1 and 14 have been fully considered and are persuasive with respect to the prior ground of rejection. Accordingly, that rejection is withdrawn. However, upon further consideration of the amended subject matter, a new ground of rejection is made in view of MINOURA (US-20220036051-A1). Regarding applicant's argument on page 6 that Yu does not teach the controller, it remains the position of the examiner that item 111 teaches the controller configured to generate an evaluation result obtained by evaluating a degree of a field of view of a security camera disposed in a predetermined space being shielded due to an object (Yu, [0037]). The newly added reference Minoura discloses weights applied to regions within the predetermined space based on an intensity of monitoring of each of the regions (Minoura, [0038] "…, the proportions of the persons P in the region r11 and the region r14 are as small as 0.0, so that weightings for the particles p corresponding to the regions r11 and r14 are minimized on the other hand, the proportions of the persons P in the regions r12, r13 and r15 are 0.7, 0.3 and 1.0, respectively, so that weightings for the particles p corresponding to the regions r12, r13 and r15 become larger values as the proportions of the persons P increase.") and object data regarding the object that moves within the predetermined space (Minoura, [0036] "…, the amount-of-activity calculating unit 12 predicts the amount of movement of the particles p in each region r set on the first captured image R1…, assuming that the particle p moves based on a linear motion with constant velocity,…, the amount-of-activity calculating unit 12 predicts the position of the particle p when the second captured image R2 is captured…."). This renders applicant's argument moot in light of the newly added reference.
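To make the disputed limitation concrete before the formal rejection below, here is a minimal toy sketch (an illustration only, not code from Yu or Minoura) of an evaluation that scores per-region visibility weighted by monitoring intensity:

```python
# Toy illustration only, not code from Yu or Minoura: score a camera's view of
# a space from per-region visibility, weighted by monitoring intensity.

def evaluate_shielding(regions):
    """regions: list of (monitoring_weight, visible_fraction) pairs, where
    visible_fraction is the unshielded share of the region within the camera's
    field of view (1.0 = fully visible, 0.0 = fully occluded)."""
    total_weight = sum(w for w, _ in regions)
    return sum(w * vis for w, vis in regions) / total_weight

# A heavily monitored region that stays visible dominates the evaluation:
print(evaluate_shielding([(1.0, 0.9), (0.7, 0.5), (0.0, 0.1)]))  # ~0.735
```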
Claim Rejections - 35 USC § 103

7. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

8. Claims 1-2, 6, 14-15, 20, 21, and 23 are rejected under 35 U.S.C. 103 as being unpatentable over Yu et al. (US 20240276106 A1), hereinafter "Yu", in view of MINOURA (US-20220036051-A1), hereinafter "Minoura".

Regarding claim 1, Yu discloses:

1. (Currently Amended) An information processing apparatus (Yu, [0017] "FIG. 1A illustrates a block diagram of a system 100 for detecting and/or assessing occlusion within a camera field of view.") comprising: a controller (item 111) configured to generate an evaluation result obtained by evaluating a degree of a field of view of a security camera disposed in a predetermined space being shielded due to an object (Yu, [0037] "…detecting or otherwise determining occlusion within a camera field of view. Such a determination may be used to, e.g., evaluate object recognition that may have been performed while there was occlusion in the camera field of view, to control how object recognition is performed, …FIGS. 4A and 4B depict an example method 400 for determining occlusion within a camera field of view. The method 400 may be performed by a computing system, such as by the control circuit 111 of the computing system 110 of FIGS. 1A-1C and FIG. 2") on a basis of: structure data regarding a structure existing in the predetermined space (Yu, "…determining occlusion within a camera field of view, such as by detecting occlusion within the camera field of view, assessing a level of occlusion with the camera field of view, and/or any other aspect of occlusion analysis. The occlusion may refer to, e.g., a situation in which a location in the camera field of view, or a portion of a region surrounding the location, is blocked or close to being blocked from being viewed or otherwise being sensed by a camera…, the target feature may be a corner or edge of an object or a surface thereof at that region, or may be a visual feature disposed on the surface. The presence of the occluding object may affect an ability to identify the target feature, and/or affect an accuracy of such an identification…").

Yu does not explicitly disclose: weights applied to regions within the predetermined space based on an intensity of monitoring of each of the regions; object data regarding the object that moves within the predetermined space.
However, in the same field of endeavor, Minoura discloses more explicitly the following: weights applied to regions within the predetermined space based on an intensity of monitoring of each of the regions (Minoura, [0038] "…, the proportions of the persons P in the region r11 and the region r14 are as small as 0.0, so that weightings for the particles p corresponding to the regions r11 and r14 are minimized on the other hand, the proportions of the persons P in the regions r12, r13 and r15 are 0.7, 0.3 and 1.0, respectively, so that weightings for the particles p corresponding to the regions r12, r13 and r15 become larger values as the proportions of the persons P increase.") and object data regarding the object that moves within the predetermined space (Minoura, [0036] "…, the amount-of-activity calculating unit 12 predicts the amount of movement of the particles p in each region r set on the first captured image R1…, assuming that the particle p moves based on a linear motion with constant velocity,…, the amount-of-activity calculating unit 12 predicts the position of the particle p when the second captured image R2 is captured….").

Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the application to modify the teachings of Yu with Minoura to create the system of Yu as outlined above, such that "weights applied to regions within the predetermined space based on an intensity of monitoring of each of the regions" are used, as suggested by Minoura. The reasoning is that the combined system can be "used for anomaly detection, security, and marketing activity in a target space" (Minoura, [0025]).

Note: The motivation utilized in the rejection of claim 1 applies equally to claims 2, 6, 14-15, 20, 21, and 23.

Regarding claim 2, Yu-Minoura discloses:

2. The information processing apparatus according to claim 1, wherein the controller calculates an evaluation value that becomes higher as a ratio of a region shielded by the object is smaller within the field of view of the security camera, as the evaluation result. (Yu, [0073] "…, the method 400 may further include a step 414, in which the control circuit 111 determines a value of an object recognition confidence parameter based on the size of the occluding region. In some cases, the value of the object recognition confidence parameter may have an inverse relationship with the size of the occluding region. For instance, an increase in the size of the occluding region may cause the value of the object recognition confidence parameter to change in a direction that indicates less confidence in an accuracy of an object recognition operation which has been performed or is being planned. In an embodiment, the control circuit 111 may be configured to determine the value of the object recognition confidence parameter by determining a ratio between the size of the occluding region … and a size of the 2D region … determined in step 406, or an inverse of the ratio.")

Regarding claim 6, Yu-Minoura discloses:

6. The information processing apparatus according to claim 1, wherein the object data is data that defines movement of a plurality of the objects over time. (Minoura, [0029] "…, the person detecting unit 11 (a detecting unit) accepts captured images of the space R captured by the camera C at constant time intervals. For example, the person detecting unit 11 accepts …a captured image of the space R in which a plurality of persons P exist as shown in FIG. 4.")
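For claim 2, Yu [0073] (quoted above) ties an object-recognition confidence parameter inversely to the occluding region's size relative to the 2D region. A minimal sketch follows; the concrete mapping confidence = 1 - ratio is an assumption, since Yu describes only "a ratio ... or an inverse of the ratio":

```python
# Sketch of Yu [0073] (quoted under claim 2 above): object-recognition
# confidence falls as the occluding region grows relative to the 2D region of
# interest. The mapping confidence = 1 - ratio is an assumption; Yu does not
# fix a formula.

def recognition_confidence(occluding_area: float, region_area: float) -> float:
    ratio = occluding_area / region_area   # occluded share of the 2D region
    return max(0.0, 1.0 - ratio)           # larger occlusion -> less confidence

print(recognition_confidence(occluding_area=2.0, region_area=10.0))  # 0.8
```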
Regarding claim 14, Yu discloses:

14. (Currently Amended) An information processing method to be executed by an information processing apparatus (Yu, [0009] "…method for determining occlusion within a camera field of view,"), the information processing method comprising: a step of acquiring structure data regarding a structure existing in a predetermined space (Yu, "…determining occlusion within a camera field of view, such as by detecting occlusion within the camera field of view, assessing a level of occlusion with the camera field of view, and/or any other aspect of occlusion analysis. The occlusion may refer to, e.g., a situation in which a location in the camera field of view, or a portion of a region surrounding the location, is blocked or close to being blocked from being viewed or otherwise being sensed by a camera…, the target feature may be a corner or edge of an object or a surface thereof at that region, or may be a visual feature disposed on the surface. The presence of the occluding object may affect an ability to identify the target feature, and/or affect an accuracy of such an identification…"); . . .; and a step of generating an evaluation result by evaluating a degree of a field of view of a security camera disposed in the predetermined space being shielded due to the object on a basis of the structure data and the object data (Yu, [0037] "…detecting or otherwise determining occlusion within a camera field of view. Such a determination may be used to, e.g., evaluate object recognition that may have been performed while there was occlusion in the camera field of view, to control how object recognition is performed, …FIGS. 4A and 4B depict an example method 400 for determining occlusion within a camera field of view. The method 400 may be performed by a computing system, such as by the control circuit 111 of the computing system 110 of FIGS. 1A-1C and FIG. 2").

Yu does not explicitly disclose: a step of acquiring object data regarding an object that moves within the predetermined space; wherein generating the evaluation result is based on weights applied to regions within the predetermined space based on an intensity of monitoring of each of the regions.

However, in the same field of endeavor, Minoura discloses more explicitly the following: a step of acquiring object data regarding an object that moves within the predetermined space (Minoura, [0036] "…, the amount-of-activity calculating unit 12 predicts the amount of movement of the particles p in each region r set on the first captured image R1…, assuming that the particle p moves based on a linear motion with constant velocity,…, the amount-of-activity calculating unit 12 predicts the position of the particle p when the second captured image R2 is captured….") and wherein generating the evaluation result is based on weights applied to regions within the predetermined space based on an intensity of monitoring of each of the regions (Minoura, [0038] "…, the proportions of the persons P in the region r11 and the region r14 are as small as 0.0, so that weightings for the particles p corresponding to the regions r11 and r14 are minimized on the other hand, the proportions of the persons P in the regions r12, r13 and r15 are 0.7, 0.3 and 1.0, respectively, so that weightings for the particles p corresponding to the regions r12, r13 and r15 become larger values as the proportions of the persons P increase.").

Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the application to modify the teachings of Yu with Minoura such that generating the evaluation result is based on "weights applied to regions within the predetermined space based on an intensity of monitoring of each of the regions," as suggested by Minoura. The reasoning is that the combined system can be "used for anomaly detection, security, and marketing activity in a target space" (Minoura, [0025]).

Regarding claim 20, Yu-Minoura discloses:

20. A non-transitory storage medium storing a program for causing a computer to execute (Yu, [0029] "In an embodiment, the non-transitory computer-readable medium 115 may include an information storage device, such as computer memory. The computer memory may include, e.g., dynamic random access memory (DRAM), solid state integrated memory, and/or a hard disk drive (HDD). In some cases, determining the occlusion within the camera field of view may be implemented through computer-executable instructions (e.g., computer code) stored on the non-transitory computer-readable medium 115. In such cases, …") the information processing method according to claim 14.

Regarding claim 21, Yu-Minoura discloses:

21. (New) The information processing apparatus according to claim 1, wherein the controller is configured to generate the evaluation result further on the basis of information about movement of the object. (Minoura, [0051] "…, the measurement device 10 evaluates the amount of movement of the particle p predicted as described above based on the proportion of the person P of each region r on the second captured image R2 (step S5). To be specific, the measurement device 10 gives less weight to the particle p associated with the region r where the proportion of the person P is smaller, and gives more weight to the particle p associated with the region r where the proportion of the person P is larger. Then, the measurement device 10 resets the particle p based on the weight of the particle p (step S6). Herein, the measurement device 10 eliminates the particle p or increases the particle p in accordance with the value of the weight of the particle p as shown in FIGS. 8 and 9.")
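The Minoura passages cited above ([0036], [0038], [0051]) describe predicting particle motion per region, weighting each particle by the proportion of persons in its region, and then eliminating or multiplying particles by weight. A compact sketch; multinomial resampling is an assumed concretization of the "reset" step:

```python
import random

# Sketch of Minoura [0038]/[0051]: particles tied to regions with a higher
# proportion of persons receive larger weights; particles are then eliminated
# or multiplied according to those weights. Multinomial resampling via
# random.choices is an assumed concretization of the "reset" step.

def weight_and_resample(particles, person_proportion):
    """particles: list of region ids; person_proportion: region id -> [0, 1]."""
    weights = [person_proportion[r] for r in particles]
    if sum(weights) == 0:
        return particles  # degenerate case: nothing to weight by
    return random.choices(particles, weights=weights, k=len(particles))

# Proportions from Minoura's example in [0038]:
proportions = {"r11": 0.0, "r12": 0.7, "r13": 0.3, "r14": 0.0, "r15": 1.0}
particles = ["r11", "r12", "r13", "r14", "r15"] * 4
print(weight_and_resample(particles, proportions))  # r11/r14 particles die off
```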
Claim Rejections - 35 USC § 103

9. Claims 3 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Yu-Minoura in view of Milanfar et al. (US-9253375-B2), hereinafter "Milanfar".

Regarding claim 3, Yu-Minoura discloses:

3. The information processing apparatus according to claim 2.

Yu-Minoura does not explicitly disclose wherein the controller calculates an evaluation value that becomes higher as a ratio of a period during which at least part of the field of view of the security camera is shielded by the object is smaller with respect to a predetermined time width, as the evaluation result.

However, in the same field of endeavor, Milanfar discloses more explicitly the following: wherein the controller calculates an evaluation value that becomes higher as a ratio of a period during which at least part of the field of view of the security camera is shielded by the object is smaller with respect to a predetermined time width, as the evaluation result. (Milanfar, Col. 1, lines 21-34, "The method can also include comparing one or more parameters of the image with one or more control parameters, where the control parameters include information indicative of an image from a substantially unobstructed camera. Based on the comparison, the method can also include determining a score between the one or more parameters of the image and the one or more control parameters. The method can also include accumulating, by a computing device, a count of a number of times the determined score exceeds a first threshold. Based on the count exceeding a second threshold, the method can also include determining that the camera is at least partially obstructed.")

Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the application to modify the teachings of Yu-Minoura with Milanfar to create the system of Yu-Minoura as outlined above, in order to incorporate a controller that "calculates an evaluation value that becomes higher as a ratio of a period during which at least part of the field of view of the security camera is shielded by the object is smaller with respect to a predetermined time width, as the evaluation result," as suggested by Milanfar. One of ordinary skill in the art would have been motivated to incorporate the temporal accumulation and time-based evaluation feature of Milanfar into Yu-Minoura's visibility evaluation system "in order to track movement of a detected object throughout a surveillance region." (Hanna, claim 10)

Note: The motivation utilized in the rejection of claim 3 applies equally to claim 16.
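Milanfar's cited passage (Col. 1, lines 21-34) is a two-threshold accumulation scheme: score each frame against control parameters from a substantially unobstructed camera, count scores that exceed a first threshold, and declare obstruction when the count exceeds a second. A minimal sketch with hypothetical per-frame scores:

```python
# Sketch of Milanfar (Col. 1, lines 21-34): compare each frame against control
# parameters from a substantially unobstructed camera, count how often the
# comparison score exceeds a first threshold, and declare the camera at least
# partially obstructed when the count exceeds a second threshold. The scores
# below are hypothetical stand-ins for the per-frame comparison.

def camera_obstructed(frame_scores, score_threshold, count_threshold):
    exceedances = sum(1 for s in frame_scores if s > score_threshold)
    return exceedances > count_threshold

scores = [0.1, 0.9, 0.8, 0.2, 0.95, 0.85]
print(camera_obstructed(scores, score_threshold=0.7, count_threshold=3))  # True
```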
Claim Rejections - 35 USC § 103

10. Claims 4-5 and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Yu-Minoura in view of Mittal et al. (US-20080007720-A1), hereinafter "Mittal".

Regarding claim 4, Yu-Minoura discloses:

4. The information processing apparatus according to claim 2.

Yu-Minoura does not explicitly disclose wherein the controller integrates a plurality of the evaluation values respectively corresponding to a plurality of security cameras and evaluates an arrangement pattern of the plurality of security cameras.

However, in the same field of endeavor, Mittal discloses more explicitly the following: wherein the controller integrates a plurality of the evaluation values respectively corresponding to a plurality of security cameras and evaluates an arrangement pattern of the plurality of security cameras. (Mittal, [0048] "This analysis yields a capture quality measure for each location and each angular orientation for a given sensor configuration. Such quality measure is integrated across the entire region of interest in order to obtain a quality measure for the given configuration."; [0049] "The analysis presented so far yields a function q_S(x, θ), that refers to the capture quality of an object with orientation θ at location x given that the sensors have the parameter vector s. Such parameter vector may include, for instance, the location, viewing direction and zoom of each camera. Given such a function, … in order to evaluate a given set of sensor parameters with regard to the entire region to be viewed. … The sensor planning problem can then be formulated as a problem of constrained optimization of the cost function.")

Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the application to modify the teachings of Yu-Minoura with Mittal to create the system of Yu-Minoura as outlined above, in order to "integrate a plurality of the evaluation values respectively corresponding to a plurality of security cameras and [evaluate] an arrangement pattern of the plurality of security cameras," as suggested by Mittal. Furthermore, one of ordinary skill in the art would have been motivated to incorporate integration of a plurality of the evaluation values corresponding to multiple security cameras, and evaluation of their arrangement pattern, into Yu-Minoura's system "in order to capture the objects from many directions." (Mittal, [0059])

Note: The motivation utilized in the rejection of claim 4 applies equally to claims 5, 17, and 18.

Regarding claim 5, Yu-Minoura-Mittal discloses:

5. The information processing apparatus according to claim 4, wherein the controller determines arrangement of the plurality of security cameras to an arrangement pattern such that a sum of the plurality of evaluation values exceeds a predetermined value. (Mittal, [0048] "This analysis yields a capture quality measure for each location and each angular orientation for a given sensor configuration. Such quality measure is integrated across the entire region of interest in order to obtain a quality measure for the given configuration." That is, Mittal integrates (sums) the capture quality values from multiple sensors to generate an overall quality score for the camera arrangement, which corresponds to determining an arrangement pattern of the plurality of cameras such that the sum of evaluation values exceeds a predetermined value.)

Regarding claims 8-11: Cancelled.
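Mittal [0048]-[0049] score a whole sensor configuration by integrating a capture-quality function q_S(x, θ) over the region of interest, then treat placement as constrained optimization of that score. A toy sketch; the quality function and the sampling grid here are assumptions:

```python
import math

# Sketch of Mittal [0048]-[0049]: integrate a per-location, per-orientation
# capture quality q_S(x, theta) over the region of interest to score one
# sensor configuration; placement is then an optimization over such scores.
# The quality function and the sampling grid below are hypothetical.

def configuration_score(q, locations, orientations):
    """Mean capture quality over sampled locations and object orientations."""
    samples = [q(x, theta) for x in locations for theta in orientations]
    return sum(samples) / len(samples)

def toy_quality(x, theta):
    # Best when the object faces the camera (theta = 0) and sits near x = 0.
    return max(0.0, math.cos(theta)) / (1.0 + abs(x))

locations = [-2.0, -1.0, 0.0, 1.0, 2.0]
orientations = [i * math.pi / 8 for i in range(-4, 5)]  # -90 to +90 degrees
print(round(configuration_score(toy_quality, locations, orientations), 3))
```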
Claim Rejections - 35 USC § 103

11. Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Yu-Minoura in view of Bernal et al. (US-9641763-B2), hereinafter "Bernal".

Regarding claim 12, Yu-Minoura discloses:

12. (Currently Amended) The information processing apparatus according to claim [[8]] 1.

Yu-Minoura does not explicitly disclose wherein the object data further includes data regarding hours during which the object exists in the predetermined space.

However, in the same field of endeavor, Bernal discloses more explicitly the following: wherein the object data further includes data regarding hours during which the object exists in the predetermined space. (Bernal, Col. 6, lines 22-28, "The global timing module 66 measures 90 the time taken by the tracked object to traverse the area of interest 10, or the length of stay of the object within the whole area of interest 10. It uses the outputs from the local timing and global tracking modules. It relies on knowledge of synchronization data between the multiple video feeds to achieve timing data normalization across multiple cameras.")

Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the application to modify the teachings of Yu-Minoura with Bernal to create the system of Yu-Minoura as outlined above, so that the object data "includes data regarding hours during which the object exists in the predetermined space," as suggested by Bernal. Furthermore, one of ordinary skill in the art would have been motivated to incorporate Bernal's time-based object tracking technique into Yu-Minoura's system "for improved video-based methods for object tracking and timing across multiple camera views." (Bernal, Col. 1, lines 58-60)

Claim Rejections - 35 USC § 103

12. Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Yu-Minoura-Bernal in view of Rao et al. (US-11756339-B2), hereinafter "Rao".

Regarding claim 13, Yu-Minoura-Bernal discloses:

13. The information processing apparatus according to claim 12.

Yu-Minoura-Bernal does not explicitly disclose wherein the controller executes calculation of the evaluation value for each of hours.

However, in the same field of endeavor, Rao discloses more explicitly the following: wherein the controller executes calculation of the evaluation value for each of hours. (Rao, Col. 5, lines 24-30, "For each section/area where dwell-time needs to be determined and computed…. one or more cameras are deployed. One or more workers can obtain video feed from each of these camera(s) and compute the dwell-time for individuals in that section/area…")

Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the application to modify the teachings of Yu-Minoura-Bernal with Rao to create the system as outlined above, so that the controller "executes calculation of the evaluation value for each of hours," as suggested by Rao. Furthermore, one of ordinary skill in the art would have been motivated to incorporate Rao's per-hour evaluation calculation into Yu-Minoura-Bernal's system "in order to track movement of a detected object throughout a surveillance region." (Hanna, claim 10)
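Claims 12-13 combine Bernal's length-of-stay timing with Rao's per-section dwell-time computation, i.e., bucketing object presence by hour and evaluating each hour separately. A minimal sketch; the per-hour aggregation rule is an assumption:

```python
from collections import defaultdict

# Sketch for claims 12-13: keep object data about the hours during which an
# object is present (Bernal's length-of-stay timing) and compute an evaluation
# value per hour (Rao's per-section dwell-time computation). Summing dwell
# seconds per hour is an assumed aggregation rule.

def dwell_seconds_by_hour(sightings):
    """sightings: iterable of (hour_of_day, dwell_seconds) tuples."""
    totals = defaultdict(float)
    for hour, dwell in sightings:
        totals[hour] += dwell
    return dict(totals)

observations = [(9, 120.0), (9, 45.5), (10, 300.0), (14, 60.0)]
print(dwell_seconds_by_hour(observations))  # {9: 165.5, 10: 300.0, 14: 60.0}
```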
Claim Rejections - 35 USC § 103

13. Claims 22 and 24 are rejected under 35 U.S.C. 103 as being unpatentable over Yu-Minoura in view of Guerreiro (US-20180197039-A1), hereinafter "Guerreiro".

Regarding claim 22, Yu-Minoura discloses:

22. (New) The information processing apparatus according to claim 1, wherein the controller is configured to generate the evaluation result further on the basis of:

Yu-Minoura does not explicitly disclose: calculation of a reflection of ambient light in the predetermined space; and determination of whether any of the regions of the predetermined space are impossible to monitor based on the calculated reflection of ambient light.

However, in the same field of endeavor, Guerreiro discloses these features. (Guerreiro, [0043] "a glare region around the respective bright image area is increased as long as the calculated actual gradients match the calculated expected gradients,… until the respective calculated average similarity value is calculated with a stepwise increased distance … the outer boundary of the glare region within the respective image."; [0008] "A glare detection method for detection of at least one glare region within an image is presented. The method includes aggregating image pixels of the image having a high luminance intensity into bright image areas within the image. The method also includes calculating, for image pixels around each bright image area, gradients expected in case of a glare and actual gradients. The method further includes increasing a glare diameter of a glare region around the respective bright image area as long as the calculated actual gradients match the calculated expected gradients." That is, glare caused by ambient light reflections produces bright regions in an image that obscure objects and prevent effective monitoring of those regions.)

Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the application to modify the teachings of Yu-Minoura in view of Guerreiro to create the system of Yu-Minoura as outlined above, such that a determination is made of whether any of the regions of the predetermined space are impossible to monitor based on the calculated reflection of ambient light, as suggested by Guerreiro. One of ordinary skill in the art would have been motivated to incorporate Guerreiro's glare detection technique (Guerreiro, [0032]) into Yu-Minoura's monitoring system in order to identify image regions affected by glare caused by ambient light reflection, which may obscure objects and hinder monitoring.

Regarding claims 15-18 and 23-24:

Claims 15-18 and 23-24 recite limitations that are substantially similar to those of dependent claims 2-5 and 21-22, respectively, except that claims 15-18 and 23-24 are directed to a method rather than an apparatus. Therefore, the rejections of claims 2-5 and 21-22 apply equally to claims 15-18 and 23-24. Furthermore, for the method, Yu [0001] explicitly discloses "…a method and system for determining occlusion within a camera field of view."

Allowable Subject Matter

14. Claims 7 and 19 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Pertinent Prior Art

15. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Wang et al., US-20190180597-A1
Ku et al., US-20060067456-A1

Conclusion

16. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ASTEWAYE GETTU ZEWEDE, whose telephone number is (703) 756-1441. The examiner can normally be reached Mo-Fr, 8:30 am to 5:30 pm.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, William Vaughn, can be reached at (571) 272-3922. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ASTEWAYE GETTU ZEWEDE/
Examiner, Art Unit 2481

/WILLIAM C VAUGHN JR/
Supervisory Patent Examiner, Art Unit 2481

Prosecution Timeline

Jun 03, 2024
Application Filed
Oct 24, 2025
Non-Final Rejection — §103
Jan 25, 2026
Response Filed
Mar 11, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598390
CONTROL APPARATUS, IMAGING APPARATUS, AND LENS APPARATUS
2y 5m to grant • Granted Apr 07, 2026
Patent 12587663
SLIDING-WINDOW RATE-DISTORTION OPTIMIZATION IN NEURAL NETWORK-BASED VIDEO CODING
2y 5m to grant • Granted Mar 24, 2026
Patent 12537980
Attention Based Context Modelling for Image and Video Compression
2y 5m to grant • Granted Jan 27, 2026
Patent 12470842
MULTIFOCAL CAMERA BY REFRACTIVE INSERTION AND REMOVAL MECHANISM
2y 5m to grant • Granted Nov 11, 2025
Patent 12470679
INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, PROGRAM, AND DISPLAY SYSTEM
2y 5m to grant • Granted Nov 11, 2025
Based on the examiner's 5 most recent grants in similar technology.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 80%
With Interview: 99% (+37.5%)
Median Time to Grant: 2y 7m
PTA Risk: Moderate

Based on 45 resolved cases by this examiner. Grant probability derived from career allow rate.
