Prosecution Insights
Last updated: April 19, 2026
Application No. 18/387,940

INFORMATION PROCESSING DEVICE

Status: Final Rejection (§103)
Filed: Nov 08, 2023
Examiner: FATIMA, UROOJ
Art Unit: 2676
Tech Center: 2600 — Communications
Assignee: Toyota Jidosha Kabushiki Kaisha
OA Round: 2 (Final)
Grant Probability: 100% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 9m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 100% (above average; 1 granted / 1 resolved), +38.0% vs Tech Center average
Interview Lift: +100.0% among resolved cases with interview
Typical Timeline: 2y 9m average prosecution; 16 applications currently pending
Career History: 17 total applications across all art units

Statute-Specific Performance

§101: 24.6% (-15.4% vs TC avg)
§103: 41.5% (+1.5% vs TC avg)
§102: 12.3% (-27.7% vs TC avg)
§112: 20.0% (-20.0% vs TC avg)
Deltas are measured against the estimated Tech Center average. Based on career data from 1 resolved case.
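The per-statute deltas above are internally consistent with a single Tech Center baseline of roughly 40.0%: each delta is simply the examiner's per-statute allowance rate minus that estimated average. A minimal sketch of the arithmetic follows; the 40.0% baseline is inferred from the numbers above, not stated by the tool.

```python
# Each "vs TC avg" delta appears to be the examiner's per-statute allowance
# rate minus an estimated Tech Center average (inferred baseline: 40.0%).
TC_AVG_ESTIMATE = 40.0  # percent; inferred, not stated by the source


def delta_vs_tc(examiner_rate: float, tc_avg: float = TC_AVG_ESTIMATE) -> float:
    """Signed difference in percentage points, rounded to one decimal."""
    return round(examiner_rate - tc_avg, 1)


rates = {"101": 24.6, "103": 41.5, "102": 12.3, "112": 20.0}
deltas = {statute: delta_vs_tc(rate) for statute, rate in rates.items()}
# Reproduces the deltas shown above, e.g. §101 -> -15.4, §103 -> +1.5.
```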

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

Applicant's Amendment filed on 01/09/2026 has been entered and made of record.
Currently pending claim(s): 1–3 and 5
Independent claim(s): 1
Canceled claim(s): 4
Amended claim(s): 1 and 3

Response to Arguments

This Office action is responsive to Applicant's Arguments/Remarks made in an Amendment received on January 9, 2026.

In view of the new title amendments and applicant arguments [Remarks] filed on January 9, 2026, the specification objection has been withdrawn.

In view of the new claim amendments and applicant arguments [Remarks] filed on January 9, 2026, the claim objections to claim 1 have been withdrawn.

The new claim amendments and applicant arguments [Remarks] filed on January 9, 2026, with respect to the 35 U.S.C. 112(b) claim rejections have been carefully considered, and the claim rejections to claim 3 under 35 U.S.C. 112(b) are withdrawn.

The new claim amendments and applicant arguments [Remarks] filed on January 9, 2026, with respect to the 35 U.S.C. 101 claim rejections have been carefully considered, and the claim rejections to claims 1-3 and 5 under 35 U.S.C. 101 are withdrawn.

Applicant's Reply (January 9, 2026) includes substantive amendments to the claims. This Office action has been updated with new grounds of rejection addressing those amendments. Further, Applicant's Arguments/Remarks with respect to independent claim 1 have been considered but are moot because the arguments do not apply to any of the references being used in the current rejection; the amended limitations are now rejected over newly cited art, Abraham et al. ("Vehicle detection and classification from high resolution satellite images," ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences 2 (2014): 1-8) (hereafter, "Abraham"), as explained in the body of the rejection below.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1 and 2 are rejected under 35 U.S.C. 103 as being unpatentable over Sugawara et al. (US 2024/0378900 A1) (hereafter, "Sugawara") in view of Abraham et al. ("Vehicle detection and classification from high resolution satellite images," ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences 2 (2014): 1-8) (hereafter, "Abraham").
Regarding claim 1, Sugawara discloses an information processing device (paragraph [0006] "…provide an image generation system and the like…") comprising at least one processor, wherein the at least one processor (paragraph [0094] "components of each device may be achieved by general-purpose or dedicated circuitry including a processor"; system 100 in Figure 1) is configured to:

acquire [satellite] image (paragraph [0030] "The acquisition unit 101 acquires a road image obtained by imaging a road.");

extract large-sized vehicles, which have a size equal to or greater than a predetermined size value, from the [satellite] image (paragraph [0049] "the degradation level determination unit 102 acquires a parameter relevant to the road image."; paragraph [0050] "The parameters representing the feature of the environment of the road include a traffic volume, weather information, and regional information."; paragraph [0051] "The traffic volume may include the amount of heavy vehicles traveling on the road.");

detect, in the [satellite] image (paragraph [0007] "…acquires a road image obtained by imaging of a road."), [a plurality of roads, and identify a specific] road…on which travel of the large-sized vehicles is equal to or greater than a predetermined travel value (paragraph [0049] "…acquires a parameter relevant to the road image."; paragraph [0050] "The parameters related to the road include parameters representing a feature of the environment... The parameters representing the feature of the environment of the road include a traffic volume, weather information…"; paragraph [0051] "The traffic volume may include the amount of heavy vehicles traveling on the road.");

analyze vehicle data for the [specific] road and obtain an analysis result (paragraph [0048] "The degradation level determination unit 102 acquires a parameter related to a road for prediction"); and

predict a state of deterioration of a surface of the [specific] road based on the analysis result (paragraph [0047] "degradation level determination unit 102 may predict the degradation level by giving a parameter related to the road to the prediction formula. The parameter related to the road is a parameter that affects the degradation level."),

wherein the at least one processor is configured to extract, as the large-sized vehicles, at least one of trucks, buses, or vehicles having a predetermined weight or greater (paragraph [0049] "…acquires a parameter relevant to the road image."; paragraph [0050] "The parameters related to the road include parameters representing a feature of the environment... The parameters representing the feature of the environment of the road include a traffic volume, weather information…"; paragraph [0051] "The traffic volume may include the amount of heavy vehicles traveling on the road."), and

wherein the at least one processor predicts the state of deterioration of the surface of the [specific] road by inputting the analysis result into a state of deterioration prediction model that is trained with a dataset including an actual degree of deterioration of roads and obtaining the analysis result from the state of deterioration prediction model (paragraphs [0061]-[0063] "the imaging range of the stored image may be a portion of the road with road degradation…the generation unit 103 may superimpose the stored image on the road image. Alternatively, the generation unit 103 may superimpose a road degradation portion of the stored image on the road of the road image. The generation unit 103 may generate a predictive image by inputting a road image to a learning model generated by machine learning. For example, a Generative Adversarial Network (GAN) may be used as the learning model. For example, Cycle-GAN or Pix2Pix may be used among the GANs. The stored image described above may be used to generate the learning model.").

However, Sugawara fails to teach "detect, in the satellite image, a plurality of roads, and identify a specific road from among the plurality of roads." Abraham teaches detecting, in the satellite image, a plurality of roads, and identifying a specific road from among the plurality of roads (Methodology, page 2 [right column, paragraph 1] "Image pre-processing is the first stage where a satellite image is converted into a greyscale image. Secondly, region of interest having roadways alone is extracted from the broad area image.").

[Figure 21: media_image1.png, greyscale]

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Sugawara to include detecting, in the satellite image, a plurality of roads, and identifying a specific road from among the plurality of roads, as taught by Abraham. The motivation for doing so would have been to classify and count the vehicles within the road selected as the region of interest, as suggested by Abraham (see Abraham, Methodology, page 2 [right column, paragraph 1], and Figure 5). Further, one skilled in the art could have combined the elements described above by known methods with no change to their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Abraham with Sugawara to obtain the invention specified in claim 1.
Regarding claim 2, Sugawara discloses the information processing device as claimed in claim 1, wherein the at least one processor is configured to acquire weather information associated with the acquired [satellite] image (paragraph [0049] "…acquires a parameter relevant to the road image."; paragraph [0050] "The parameters related to the road include parameters representing a feature of the environment... The parameters representing the feature of the environment of the road include a traffic volume, weather information…"); and, based on the acquired weather information, the at least one processor is configured to acquire a [satellite] image captured on a clear-weather day (Sugawara: paragraph [0086] "…according to the weather at the time of capturing the road image. The weather includes, for example, sunny, cloudy, and rainy").

However, Sugawara fails to teach "acquire a satellite image." Abraham teaches acquiring a satellite image (Subsection 2.1: Preprocessing the Input Image, page 2 [right column, paragraph 1] "Satellite images are obtained as panchromatic (greyscale), natural colour (RGB) and multispectral bands.").

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Sugawara to include acquiring a satellite image, as taught by Abraham. The motivation for doing so would have been to obtain detailed information about objects from an aerial image, as suggested by Abraham (see Abraham, Introduction, page 1 [left column, paragraph 1]). Further, one skilled in the art could have combined the elements described above by known methods with no change to their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Abraham with Sugawara to obtain the invention specified in claim 2.

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Sugawara et al. (US 2024/0378900 A1) (hereafter, "Sugawara") in view of Abraham et al. ("Vehicle detection and classification from high resolution satellite images," ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences 2 (2014): 1-8) (hereafter, "Abraham"), further in view of Jiang et al. (US 2020/0191601 A1) (hereafter, "Jiang").

Regarding claim 3, which incorporates claim 1, Sugawara discloses that the at least one processor is configured to acquire weather information associated with the acquired [satellite] image (paragraph [0049] "…acquires a parameter relevant to the road image."; paragraph [0050] "The parameters related to the road include parameters representing a feature of the environment... The parameters representing the feature of the environment of the road include a traffic volume, weather information…"); and the at least one processor is configured to determine a numerical value of the predicted state of deterioration of the surface of the [specific] road (paragraph [0047] "degradation level determination unit 102 may predict the degradation level by giving a parameter related to the road to the prediction formula. The parameter related to the road is a parameter that affects the degradation level.").

However, Sugawara fails to teach "acquire a satellite image" and "a specific road." Abraham teaches acquiring a satellite image (Subsection 2.1: Preprocessing the Input Image, page 2 [right column, paragraph 1] "Satellite images are obtained as panchromatic (greyscale), natural colour (RGB) and multispectral bands.") and a specific road (Methodology, page 2 [right column, paragraph 1] "Image pre-processing is the first stage where a satellite image is converted into a greyscale image. Secondly, region of interest having roadways alone is extracted from the broad area image."; Figure 21).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Sugawara to include acquiring a satellite image and identifying a specific road, as taught by Abraham. The motivation for doing so would have been to obtain detailed information about objects from an aerial image and to classify and count the vehicles within the road selected as the region of interest, as suggested by Abraham (see Abraham, Introduction, page 1 [left column, paragraph 1]; Methodology, page 2 [right column, paragraph 1]; and Figure 5). Further, one skilled in the art could have combined the elements described above by known methods with no change to their respective functions, and the combination would have yielded nothing more than predictable results.

However, Sugawara and Abraham fail to teach "apply, [to the predicted state of deterioration of the surface of the specific road], a numerical weighting factor based on the weather information." Jiang teaches applying, [to the predicted state of deterioration of the surface of the specific road], a numerical weighting factor based on the weather information (paragraph [0072] "map component 610 may assign a lower weight (e.g., weight value, scaling factor, etc.) to report messages that were received during night time and may assign a higher weight to report messages that were received during the day time (or other time when there is more sunlight in the environment)."; paragraph [0073] "The map component 610 may assign a lower weight (e.g., weight value, scaling factor, etc.) to report messages that were received during certain weather conditions (e.g., inclement or bad weather conditions such as rain, fog, clouds, etc.) and may assign a higher weight to report messages that were received during better weather conditions (e.g., sunlight).").

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Sugawara in view of Abraham to include applying, [to the predicted state of deterioration of the surface of the specific road], a numerical weighting factor based on the weather information, as taught by Jiang. The motivation for doing so would have been to assign a lower or higher weight to images taken during different weather conditions depending on their clarity, as suggested by Jiang (see Jiang, paragraphs [0072] and [0073]). Further, one skilled in the art could have combined the elements described above by known methods with no change to their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Jiang with Sugawara and Abraham to obtain the invention specified in claim 3.

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Sugawara et al. (US 2024/0378900 A1) (hereafter, "Sugawara") in view of Abraham et al. ("Vehicle detection and classification from high resolution satellite images," ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences 2 (2014): 1-8) (hereafter, "Abraham"), further in view of Kawabe (JP 2002/133585 A).

Regarding claim 5, Sugawara discloses the information processing device as claimed in claim 1, and the at least one processor (paragraph [0094] "components of each device may be achieved by general-purpose or dedicated circuitry including a processor"; system 100 in Figure 1). However, Sugawara and Abraham fail to teach "acquire, from a business that operates large-sized vehicles, data indicating operating days of the business; and acquire a satellite image captured on an operating day of the business which is included in the data."
Kawabe, in a similar field of endeavor, teaches acquiring, from a business that operates large-sized vehicles, data indicating operating days of the business (Kawabe: paragraph [0052] "The traffic congestion information service center 14 according to the present embodiment includes a bus operation plan database…. Here, the bus operation plan database 29 stores bus route information, bus stop IDs to pass, bus operation times, and the like."); and acquiring a satellite image captured on an operating day of the business which is included in the data (Kawabe: paragraph [0054] "The traffic congestion information service center 14 according to the present embodiment obtains, via the communication network 15, image data picked up by the imaging devices 11 and the artificial satellites.").

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Sugawara in view of Abraham to include acquiring, from a business that operates large-sized vehicles, data indicating operating days of the business, and acquiring a satellite image captured on an operating day of the business which is included in the data, as taught by Kawabe. The motivation for doing so would have been to collect information that is important when driving a vehicle and because the demand for such information is high, as suggested by Kawabe (see Kawabe, paragraph [0002]). Further, one skilled in the art could have combined the elements described above by known methods with no change to their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Kawabe with Sugawara and Abraham to obtain the invention specified in claim 5.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Shigeno et al. (JP 2019/185443 A) discloses a system used to predict a degree of deterioration of a road surface. Higure (JP 4,003,827 B2) discloses a system used to predict road surface properties by using a method and system for estimating traffic volume. Zhao et al. (US 2017/0309171 A1) discloses a system and method comprising a plurality of probes used to estimate the traffic volume within a road segment.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to UROOJ FATIMA, whose telephone number is (571) 272-2096. The examiner can normally be reached M-F 8:00-5:00. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Henok Shiferaw, can be reached at (571) 272-4637. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/UROOJ FATIMA/ Examiner, Art Unit 2676
/Henok Shiferaw/ Supervisory Patent Examiner, Art Unit 2676
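As background for the claim 1 dispute, the claimed processing chain (extract large vehicles from a satellite image, identify roads whose large-vehicle travel meets a threshold, then feed the analysis into a trained deterioration model) can be sketched as follows. This is an illustrative reading of the claim language only; every name, threshold, and data structure below is hypothetical and appears in none of the cited references.

```python
# Illustrative sketch of the claim 1 pipeline. All thresholds and names
# are hypothetical; the claim recites only unspecified "predetermined" values.

SIZE_THRESHOLD = 10.0    # hypothetical "predetermined size value" (e.g., m^2)
TRAVEL_THRESHOLD = 50    # hypothetical "predetermined travel value" (count)


def extract_large_vehicles(detections):
    """Keep vehicle detections at or above the predetermined size value."""
    return [d for d in detections if d["size"] >= SIZE_THRESHOLD]


def identify_specific_roads(large_vehicles):
    """Among detected roads, keep those whose large-vehicle travel meets
    or exceeds the predetermined travel value."""
    counts = {}
    for v in large_vehicles:
        counts[v["road"]] = counts.get(v["road"], 0) + 1
    return sorted(road for road, n in counts.items() if n >= TRAVEL_THRESHOLD)


def predict_deterioration(large_vehicle_count, coefficient=0.02):
    """Stand-in for the claimed trained prediction model; the claim requires
    a model trained on actual road-deterioration data, not this formula."""
    return coefficient * large_vehicle_count
```

In the rejection's mapping, Sugawara is read as supplying the heavy-vehicle traffic parameter and the prediction model, while Abraham supplies the road-extraction step that the first two functions combine.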

Prosecution Timeline

Nov 08, 2023: Application Filed
Oct 28, 2025: Non-Final Rejection (§103)
Dec 08, 2025: Applicant Interview (Telephonic)
Dec 08, 2025: Examiner Interview Summary
Jan 09, 2026: Response Filed
Feb 02, 2026: Final Rejection (§103) (current)


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 100%
With Interview: 99% (+100.0%)
Median Time to Grant: 2y 9m
PTA Risk: Moderate
Based on 1 resolved case by this examiner. Grant probability derived from career allow rate.
