Prosecution Insights
Last updated: April 19, 2026
Application No. 18/133,630

METHOD FOR PREDICTING TRAFFIC INFORMATION, APPARATUS FOR PREDICTING TRAFFIC INFORMATION, AND STORAGE MEDIUM STORING INSTRUCTIONS TO PERFORM METHOD FOR PREDICTING TRAFFIC INFORMATION

Non-Final OA — §101, §103
Filed
Apr 12, 2023
Examiner
PALMARCHUK, BRIAN KEITH
Art Unit
3669
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
Research & Business Foundation Sungkyunkwan University
OA Round
3 (Non-Final)
80%
Grant Probability
Favorable
3-4
OA Rounds
2y 4m
To Grant
99%
With Interview

Examiner Intelligence

Grants 80% — above average
80%
Career Allow Rate
8 granted / 10 resolved
+28.0% vs TC avg
Strong interview lift
+28.6%
Interview Lift
among resolved cases with interview
Typical timeline
2y 4m
Avg Prosecution
32 currently pending
Career history
42
Total Applications
across all art units

Statute-Specific Performance

§101
15.6%
-24.4% vs TC avg
§103
47.2%
+7.2% vs TC avg
§102
18.4%
-21.6% vs TC avg
§112
18.9%
-21.1% vs TC avg
Black line = Tech Center average estimate • Based on career data from 10 resolved cases

Office Action

§101 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

This Office Action is in response to the Applicant's filing on August 19, 2025. Claims 1-7, 9-11, 13-17 and 19 were previously pending, of which claims 1, 10, 11, 15 and 19 have been amended, claims 3, 9, 14 and 16 have been cancelled, and no claims have been newly added. Accordingly, claims 1, 2, 4-7, 10, 11, 13, 15, 17 and 19 are currently pending and are examined below.

Response to Arguments

Applicant's remarks (pages 8-13 of the "Amendment and Remarks" filed on December 22, 2025) have been fully considered and are addressed in the order presented.

With respect to the rejections under 35 U.S.C. § 101, Applicant's arguments and amendments have been fully considered but are not persuasive. Regarding Applicant's argument that the claimed invention is not directed to an abstract idea without significantly more, the examiner is not persuaded to reconsider the grounds of rejection because the asserted improvement is not captured in the claim language in a way that would suggest a significant reduction in the amount of data transferred. The invention merely collects traffic data (data gathering) from images and extrapolates information for further analysis without integrating the data analysis into a practical application. Therefore, the rejection under 35 U.S.C. § 101 is maintained.

With respect to the rejections under 35 U.S.C. § 103, Applicant's arguments have been fully considered but are not persuasive. In response to Applicant's argument that the references fail to show certain features of the invention, the combination of the previously applied prior art does disclose the amended features.
It is noted that the features upon which Applicant relies (i.e., determining movement directions by checking a length of each movement trajectory and selecting a method of determining each of the movement directions based on the length of each movement trajectory) are clearly defined in the prior art of record, and a clarified rejection has been made under 35 U.S.C. § 103 as presented below. Therefore, the rejection under 35 U.S.C. § 103 is maintained.

Claim Rejections - 35 USC § 101

35 U.S.C. § 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1, 2, 4-7, 10, 11, 13, 15, 17 and 19 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to an abstract idea without significantly more. The Examiner has identified apparatus Claim 15 as the claim that represents the claimed invention for this analysis.
Claim 15 recites the following limitations (additional elements, emphasized in bold in the original action, are considered to be parsed from the remaining abstract idea):

A traffic predicting apparatus comprising: a camera device configured to capture a digital road image in which a vehicle moving on a road is included; a memory storing one or more instructions; and a processor executing the one or more instructions stored in the memory, wherein the instructions, when executed by the processor, cause the processor to receive the road image by inputting the plurality of frames to an object detection model trained with a plurality of training frames as an input dataset and a training object for a vehicle as a label dataset, and checking the object for the vehicle output from the object detection model, track the detected objects in the plurality of frames, determine a movement trajectory of each of the tracked objects by checking a length of each movement trajectory, selecting a method of determining each of the movement directions from a set of movement determination methods based on a length of each of the determined movement trajectories, and using the selected method to determine the movement directions, determine the movement trajectory of the tracked object and checking a movement direction corresponding to the movement trajectory of the object, and check a number of vehicles for each movement direction on the basis of the checked movement directions, and transmit, by the transceiver, the number of vehicles for each movement direction to a traffic information management server that performs a traffic information provision service, wherein the object detection model is a model that is trained to be operated in an edge computing environment, and wherein the processor is configured to: select an MOI trajectory-based determining method of determining a movement direction on the basis of a Movement of Interest (MOI) trajectory, when a length of the movement trajectory exceeds a predetermined threshold value; and select an MOI zone-based determining method of determining a movement direction on the basis of an MOI zone, when the length of the movement trajectory does not exceed the predetermined threshold value.

This is a process that, under its broadest reasonable interpretation, covers performance of the limitations as a mental process (a concept performed in the human mind) but for the recitation of generic computer elements. For example, a person could watch a road, or a video of the road, detect an object for a vehicle, track the object, check the movement trajectory of the tracked object, check a movement direction, and count the number of vehicles.

With respect to Step 2A, Prong II, this judicial exception is not integrated into a practical application. The claim recites the additional elements of "a processor, a memory, and a camera device". These elements are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using generic computer components. Accordingly, these elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.

With respect to Step 2B, the aforementioned additional elements are all generic computer elements that have been held, under Alice, not to be significantly more than the abstract idea. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above, using the processor to receive information, make decisions, and supply instructions amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using generic computer components cannot provide an inventive concept.
Furthermore, the limitation of "transmit, by the transceiver, the number of vehicles for each movement direction to a traffic information management server" does not amount to significantly more than the judicial exception because, as detailed in Electric Power Group, additional elements that are used simply to output results do not amount to significantly more than the abstract idea itself.

Claims 1 and 19 recite the same limitations as claim 15, with the exception of adding more generic computer components, and are therefore also rejected under 35 U.S.C. § 101. Claims 2, 4-7, 10, 11, 13 and 17 recite limitations that include further abstract ideas of traffic pattern detection and analysis for training prediction models, which can also be performed in the human mind or amount to outputting data, and do not integrate the abstract idea into a practical application. Therefore, these claims are also rejected under 35 U.S.C. § 101.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 2, 4-7, 13, 15, 17 and 19 are rejected under 35 U.S.C. § 103 as being unpatentable over Jeong et al., KR10-2323437B1 (hereinafter "Jeong"), in view of Mielenz et al., US 2018/0216944 A1 (hereinafter "Mielenz"), in further view of Morales et al., US 2021/0192748 A1 (hereinafter "Morales").

Regarding Claims 1, 15 and 19, Jeong discloses a method of predicting traffic information, the method comprising:

receiving, by a traffic information apparatus comprising a camera, a digital road image photographed by the camera of a plurality of vehicles that move on a road; detecting an object for each vehicle in a plurality of frames included in the road image by inputting the plurality of frames to an object detection model trained with a plurality of training frames as an input dataset and a training object for a vehicle as a label dataset, and checking the object for the vehicle output from the object detection model: In [0043], "The above computing system (1000) can derive the traffic volume monitoring results as described above from each of a plurality of video image frames over time for a traffic volume monitoring target area received from a camera or derived internally." See also [0089].

tracking the detected objects in the plurality of frames: In [0014], describing a vehicle tracking step for assigning unique identifying information.
determining a movement trajectory of each of the tracked objects, and determining movement directions corresponding to the movement trajectories of the objects: In [0014], Jeong discloses "... traffic volume monitoring step of determining an exit direction of a vehicle detected in the vehicle object detection step based on the vehicle tracking information and the location information of a preset boundary area on the video image frame ..."

determining a number of vehicles for each movement direction based on the determined movement directions: In [0014], "deriving vehicle exit traffic volume information in at least three directions in the video image frame by considering the determined exit direction."

transmitting, by the traffic information apparatus, the number of vehicles for each movement direction to a traffic information management server that performs a traffic information provision service: See [0040]-[0041], where Jeong discloses the central server used for collecting traffic data.

wherein the object detection model is a model that is trained to be operated in an edge computing environment: In [0053], "the computing system includes four or more vehicle object detection models (1400) trained based on deep learning for each of two or more camera positions for each of two or more time zones."

While Jeong discloses the use of servers to train the traffic models, it does not exclusively present the use of edge computing. However, Mielenz in [0019] teaches the use of an edge computing environment for traffic modeling. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine Jeong's method of presenting traffic data for traffic modeling with Mielenz's edge computing with a reasonable expectation of success.
The motivation for doing so would have been to enable the traffic system to perform real-time traffic congestion prediction with low latency and local awareness; see NPL reference [Cluster Computing, Aug 10, 2021].

Jeong teaches the use of traffic models, but does not exclusively disclose movement trajectories or MOI methods. However, Morales teaches the use of movement trajectories and MOI-based traffic modeling.

checking a length of each movement trajectory, selecting a method of determining each of the movement directions from a set of movement determination methods based on a length of each of the determined movement trajectories, and using the selected method to determine the movement directions: See Morales [0041], applying a learning model to determine object trajectory based on a predetermined threshold value: "A trajectory template 140 or 142 may represent a classification of intent of future movement (e.g., predicted direction of future travel, class of motion, etc.) of the associated object 108(1) or 108(2)." In [0047], "... the trajectory templates 140(1) and 142(2) may be derived based on cluster assignments (e.g., assignment of data points to clusters). In some examples, the trajectory templates 140(1) and 142(2) may be determined based on less than a threshold change to a centroid of data points between iterations of applying the clustering algorithm." Note: It is understood that while the templates may be determined based on less than the threshold, an additional determination could be made based on exceeding a threshold based on a centroid of data points.
wherein determining the movement directions includes: selecting an MOI trajectory-based determining method of determining a movement direction on the basis of a Movement of Interest (MOI) trajectory, when a length of the movement trajectory exceeds a predetermined threshold value; and selecting an MOI zone-based determining method of determining a movement direction on the basis of an MOI zone, when the length of the movement trajectory does not exceed the predetermined threshold value: In Morales [0041], applying a learning model to determine object trajectory based on a predetermined threshold value: "A trajectory template 140 or 142 may represent a classification of intent of future movement (e.g., predicted direction of future travel, class of motion, etc.) of the associated object 108(1) or 108(2)." In [0047], "... the trajectory templates 140(1) and 142(2) may be derived based on cluster assignments (e.g., assignment of data points to clusters). In some examples, the trajectory templates 140(1) and 142(2) may be determined based on less than a threshold change to a centroid of data points between iterations of applying the clustering algorithm."

As both are in the same field of endeavor, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine Jeong with the prediction modeling methods disclosed in Morales with a reasonable expectation of success. The motivation for doing so would have been to improve traffic forecasting and autonomous vehicle navigation; see Morales [Abstract].

Regarding Claim 2, Jeong discloses the following limitation dependent from Claim 1: wherein detecting the object includes extracting the plurality of frames among the frames, in consideration of a number of frames per second of the road image.
See [0059], "In the above vehicle tracking step, the vehicle is tracked based on the vehicle object information detected by the vehicle object detection unit (1100) for the current video image frame. Tracking is the process of recording information about the location of a vehicle for each video image frame or sampling timing while assigning the same unique identification information to the same vehicle."

Regarding Claim 4, Jeong discloses the following limitation dependent from Claim 1: wherein detecting the objects further comprises outputting a type of vehicle of the detected object. See [0056], "vehicle object detection model (1400) like this can provide detection information for objects identified as vehicles in an input image ... include a bounding box for an area of the vehicle and a type of the vehicle."

Regarding Claim 5, Jeong discloses the following limitation dependent from Claim 1: wherein tracking the objects includes tracking the objects, on the basis of a detected location of each object that is continuously detected in each of the plurality of frames. See [0059], "Tracking is the process of recording information about the location of a vehicle for each video image frame or sampling timing."

Regarding Claim 6, Jeong discloses the following limitation dependent from Claim 1: wherein tracking the objects includes setting an observation zone on the basis of a zone where the road exists in the plurality of frames. See [0036], "present invention monitors traffic volume by considering the entry and exit directions of each vehicle in a traffic volume monitoring target area (observation zone) with three or more directions, such as an intersection, rather than the number of vehicles passing in each direction on a single road."

Regarding Claim 7, Jeong discloses the following limitation dependent from Claim 6: wherein tracking the objects tracks each object existing in the observation zone.
See [0058], "The vehicle tracking step (S200) performed by the vehicle tracking unit (1200) assigns unique identification information to each of the vehicles detected in the vehicle object detection step, performs tracking on a plurality of video image frames, and derives vehicle tracking information including unique identification information and coordinate information of each vehicle according to the time of each video image frame."

Regarding Claim 10, Jeong discloses a traffic monitoring method, but does not explicitly disclose MOI methods. However, Morales teaches the following limitations dependent from Claim 1:

wherein the MOI trajectory-based determining method includes: determining a plurality of difference values between each movement trajectory and a plurality of predetermined MOI trajectories, according to the MOI trajectory-based determining method: In [0071], "In some examples, the training component 234 may compare the ground truth (e.g., action performed) against the trajectory template and/or predicted trajectory. Based on the comparison, the training component 234 may be configured to train the machine learned component 232 to output accurate trajectory templates and/or predicted trajectories, which may be provided to the planning component 236 for controlling the vehicle 202."

and determining an MOI trajectory having the smallest difference value from the movement trajectory, among the plurality of difference values, and determining a movement direction corresponding to that MOI trajectory as the movement direction:
In [0129], "In some examples, the vehicle trajectory(ies) based on at least one of the trajectory template or the predicted trajectory may represent a safer and/or smoother vehicle trajectory compared to a vehicle trajectory generated without the at least one of the trajectory template or the predicted trajectory, as the planning component 236 generating the trajectory may anticipate more closely the actions of entities proximate to the vehicle."

As both are in the same field of endeavor, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine Jeong with the prediction models disclosed in Morales with a reasonable expectation of success. The motivation for doing so would have been to improve traffic forecasting and autonomous vehicle navigation; see Morales [Abstract].

Regarding Claim 11, Jeong discloses a traffic monitoring method, but does not explicitly disclose MOI methods. However, Morales teaches the following limitations dependent from Claim 1:

wherein the MOI zone-based determining method includes: determining in which MOI zone the object is located among a plurality of predetermined MOI zones, according to the MOI zone-based determination method: In [0064], "In some examples, the image generation component 230 may generate an image representing an area around the vehicle 202. In some examples, the area can be based at least in part on an area visible to sensors (e.g., a sensor range), a receding horizon, an area associated with an action (e.g., traversing through an intersection), and the like. In at least one example, the image may represent a 100 meter x 100 meter area around the vehicle 202, although any area is contemplated. The image generation component 230 may receive data about objects in the environment from the perception component 222 and may receive data about the environment itself from the localization component 220, the perception component 222, and the one or more maps 224."

and determining a movement direction corresponding to the MOI zone where the object is located as the movement direction: In [0069], "In some examples, trajectory template output by the machine learned component 232 may represent a classification of intent of future movement (e.g., predicted direction of future travel, class of motion, etc.) of the object. The classification of intent may include a rough estimate of future motion of the object, such as whether the object will continue forward, stop, turn left or right, etc. In some examples, the trajectory template may be determined independent of map data provided by the one or more maps 224."

As both are in the same field of endeavor, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine Jeong with the prediction models disclosed in Morales with a reasonable expectation of success. The motivation for doing so would have been to improve traffic forecasting and autonomous vehicle navigation; see Morales [Abstract].

Regarding Claim 13, Jeong discloses the following limitations dependent from Claim 1: further comprising: determining traffic information using the number of vehicles for each movement direction: In [0043], Jeong discloses traffic monitoring volume data results; and providing the traffic information to the traffic information management server that performs the traffic information provision service: In [0041], Jeong discloses using a traffic monitoring system.
Regarding Claim 17, Jeong discloses the following limitations dependent from Claim 15: wherein the processor is configured to detect the object for each vehicle and a type of the object for each vehicle, using an object detection model that takes the plurality of frames as an input and takes the object for the vehicle and the type of the object for the vehicle as an output. Jeong discloses the object detection model with respect to Claim 3, and further discloses this limitation in [0097], where the model is applied as an output for the vehicle feature information extraction model.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BRIAN KEITH PALMARCHUK, whose telephone number is (571) 272-6261. The examiner can normally be reached M-F, 7 AM - 5 PM EST.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Navid Mehdizadeh, can be reached at (571) 272-7691. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/B.K.P./
Examiner, Art Unit 3669

/Erin M Piateski/
Supervisory Patent Examiner, Art Unit 3669
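The core of the claimed method at issue above is a two-branch selection: use an MOI trajectory-matching method when a tracked vehicle's movement trajectory is longer than a threshold, and an MOI zone lookup when it is shorter. The sketch below is an illustrative reconstruction only, not code from the application or the cited references; the template set, zone boxes, endpoint-distance metric, and all names are assumptions made for the example.

```python
import math

# Hypothetical MOI trajectories: movement direction -> list of (x, y) points.
MOI_TRAJECTORIES = {
    "left_turn": [(0, 0), (1, 1), (1, 2)],
    "straight": [(0, 0), (0, 1), (0, 2)],
}

# Hypothetical MOI zones: movement direction -> (min corner, max corner) box.
MOI_ZONES = {
    "north_exit": ((-1, 2), (1, 3)),
    "east_exit": ((2, -1), (3, 1)),
}

def trajectory_length(traj):
    """Sum of segment lengths along the movement trajectory."""
    return sum(math.dist(a, b) for a, b in zip(traj, traj[1:]))

def moi_trajectory_method(traj):
    """MOI trajectory-based method (cf. claim 10): pick the MOI trajectory
    with the smallest difference value. Endpoint distance is an assumed,
    deliberately simplified difference metric."""
    return min(MOI_TRAJECTORIES,
               key=lambda name: math.dist(traj[-1], MOI_TRAJECTORIES[name][-1]))

def moi_zone_method(traj):
    """MOI zone-based method (cf. claim 11): return the direction of the
    MOI zone containing the object's last observed position."""
    x, y = traj[-1]
    for name, ((x0, y0), (x1, y1)) in MOI_ZONES.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None

def determine_direction(traj, threshold=1.5):
    """Cf. claim 15: trajectory-based method when the trajectory length
    exceeds the threshold, zone-based method otherwise."""
    if trajectory_length(traj) > threshold:
        return moi_trajectory_method(traj)
    return moi_zone_method(traj)
```

A long track such as `[(0, 0), (0, 1), (0, 2)]` (length 2.0, above the threshold) is matched against trajectory templates, while a short fragment such as `[(2.5, 0), (2.6, 0.1)]` falls through to the zone lookup; this branch structure is what distinguishes the claim from a single-method classifier.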

Prosecution Timeline

Apr 12, 2023
Application Filed
May 16, 2025
Non-Final Rejection — §101, §103
Aug 19, 2025
Response Filed
Sep 17, 2025
Final Rejection — §101, §103
Dec 22, 2025
Request for Continued Examination
Jan 28, 2026
Response after Non-Final Action
Feb 02, 2026
Non-Final Rejection — §101, §103
Mar 24, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12601854
WEATHER DETECTION FOR A VEHICLE ENVIRONMENT
2y 5m to grant • Granted Apr 14, 2026
Patent 12589677
METHOD FOR OPERATING AN ADJUSTMENT SYSTEM FOR AN INTERIOR OF A MOTOR VEHICLE
2y 5m to grant • Granted Mar 31, 2026
Patent 12522180
WIPER WASHER CONTROL APPARATUS
2y 5m to grant • Granted Jan 13, 2026
Patent 12427833
METHOD AND SYSTEM FOR OPERATING IN-VEHICLE AIR CONDITIONER
2y 5m to grant • Granted Sep 30, 2025
Study what changed to get past this examiner. Based on 4 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
80%
Grant Probability
99%
With Interview (+28.6%)
2y 4m
Median Time to Grant
High
PTA Risk
Based on 10 resolved cases by this examiner. Grant probability derived from career allow rate.
