DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Election/Restrictions
Applicant’s election without traverse of claims 1-5, 12, 14, 15, and 20 in the reply filed on 10/28/2025 is acknowledged.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-3, 5, 12, 14, 15, 20-22, 24-27, 29 are rejected under 35 U.S.C. 103 as being unpatentable over Langlinais (US PG Publication 2006/0269105) in view of Asao (US PG Publication 2021/0406083) and Wong (US PG Publication 2023/0237816).
Regarding Claim 1, Langlinais (US PG Publication 2006/0269105) discloses a method of automatically detecting a school bus stop-arm violation (obtain a video of a stop arm violation [0079]), comprising:
capturing videos (video [0079]; motion image capture [0074]) of a vehicle (vehicle 21 traveling through viewing zone 23 [0074]) using a plurality of cameras (at least two image capture devices 110 [0075]) of a camera hub (capture apparatus 100 [0075]) coupled to an exterior side of a school bus (side of bus 15 [0076]) while the school bus is stopped and at least one stop-arm of the school bus is extended (stop arm 16 in the viewing zone 23 [0076]);
… a control unit (software and hardware carry out the operation desired of the present invention [0103]) communicatively coupled to the camera hub (Communication connections between any of image capture device 110, storage device 115, processor 120 and switch 19, some of which are shown as connections 201 [0101]) to detect the vehicle (a sensing mechanism sensing an approaching vehicle [0093]) as the vehicle passes the school bus (A vehicle moving toward the image capture device will appear to enter at the top of the field of view and exit at the bottom of the field of view, whereas, a vehicle moving away from the image capture device will appear to enter the bottom of the field of view and exit at the top of the field of view [0087]) while the school bus is stopped and the at least one stop-arm is extended (upon opening of the bus door and engaging stop arm 16 [0107]);
using the tracked vehicle [] (the vehicle enters the field of view and exits the field of view [0086]-[0088]) to identify video portions corresponding to the vehicle (a vehicle is captured and identified on a certain road at a first point, and then captured and identified on that same road at a second point [0139]) during a violation interval associated with the extended stop-arm (obtain video of stop arm violation, series of images in rapid succession of the moving vehicle, movie or video [0079]);
automatically recognizing a license plate number of a license plate of the vehicle (identification of license plate [0103]) from the identified video portions (video of stop arm violation [0079]) using an [] license plate recognition (capture and identification of license plate [0103]) … running on the control unit (software and hardware carry out the operation desired of the present invention [0103]);
and generating, using the control unit, an evidence package (obtain not only the vehicle license plate, but also to include an image of vehicle to further aid identification [0087]) based on the tracked vehicle trajectory (video of stop arm violation [0079]), the evidence package comprising the identified video portions captured by the plurality of cameras (include an image of the vehicle [0087]) and the license plate number (optical character recognition and other data processing on the captured license plate images [0103]; Automatic License Plate Recognition (ALPR) [0012]) of the vehicle (license plate [0087]).
Langlinais does not disclose, but Asao (US PG Publication 2021/0406083) teaches inputting the videos (detection of a detection target object from an image [0051]) to a vehicle detection deep learning model (classifier, the object detecting unit 33 may use, for example, a "deep neural network" having a convolutional neural network (hereafter simply "CNN") architecture, such as a Single Shot MultiBox Detector or a Faster R-CNN [0049]) and to a vehicle tracker (object detecting unit 33 outputs information indicating the position and area of the object region in the image representing the detection target object and the type of the object included in the object region [0051]) running on a control unit (vehicle control unit 34 tracks objects detected by the object detecting unit 33 [0053])… and generate a tracked vehicle trajectory across the plurality of cameras (trajectories of objects tracked by the different cameras, combine trajectories that overlap as the trajectory of the same object [0055]);
using the tracked vehicle trajectory to identify video portions (past images [0054]) corresponding to the vehicle during a [] interval (past images used in the optical flow algorithm for tracking the object [0054]).
Langlinais does not disclose, but Wong (US PG Publication 2023/0237816) teaches an automated license plate recognition deep learning model (deep neural network to generate a sequence of characters corresponding to the text associated [0014] vehicle license plate [0015]).
One of ordinary skill in the art, before the effective filing date of the claimed invention, would have been motivated to detect the vehicles in the system of Langlinais using the deep neural network of Asao because Asao suggests that it can shorten the time to execute detection without reducing accuracy [0050], improving detection.
One of ordinary skill in the art, before the effective filing date of the claimed invention, would have been motivated to replace the license plate recognition of Langlinais with the license plate recognition of Wong because Wong teaches that the newer deep learning model produces accurate results and does not require human intervention, improving on prior art license plate detection ([0009]-[0011]).
Regarding Claim 2, Langlinais (US PG Publication 2006/0269105) discloses the method of claim 1.
Langlinais does not disclose, but Asao (US PG Publication 2021/0406083) teaches wherein tracking the vehicle further comprises:
generating tracklets (tracks the objects [0053]) of the vehicle detected from one or more videos (time series images [0053]) captured by each of the cameras (cameras 2-1 to 2-n [0053]), wherein each of the tracklets is a sequence of image coordinates (optical flow [0054]; image coordinates of object [0055]) of the vehicle detected from the one or more videos (objects detected in time-series images from cameras 2-1 to 2-n [0053]);
and generating a full-scene track (combine trajectories of the same object [0055]) of the vehicle across the plurality of cameras (trajectories of the objects tracked by the different cameras [0055]) using the image coordinates from the tracklets (transforming the image coordinate of the object into a coordinate in an aerial image [0055]).
One of ordinary skill in the art, before the effective filing date of the claimed invention, would have been motivated to detect the vehicles in the system of Langlinais using the deep neural network of Asao because Asao suggests that it can shorten the time to execute detection without reducing accuracy [0050], improving detection.
Regarding Claim 3, Langlinais (US PG Publication 2006/0269105) discloses the method of claim 2, … coordinates of the license plate of the vehicle (field of view portion containing only the license plate [0085]).
Langlinais does not disclose, but Asao (US PG Publication 2021/0406083) teaches wherein generating the full-scene track further comprises estimating image coordinates … across multiple videos (transforming the image coordinate of the object into a coordinate in an aerial image … combine trajectories of the same object [0055]).
One of ordinary skill in the art, before the effective filing date of the claimed invention, would have been motivated to detect the vehicles in the system of Langlinais using the deep neural network of Asao because Asao suggests that it can shorten the time to execute detection without reducing accuracy [0050], improving detection.
Regarding Claim 5, Langlinais (US PG Publication 2006/0269105) discloses the method of claim 1.
Langlinais does not disclose, but Wong (US PG Publication 2023/0237816) teaches wherein automatically recognizing the license plate number (generating output data 560 [0098], Fig. 5, of license plate, Fig. 4) further comprises:
obtaining predictions from the ALPR deep learning model (deep neural network to generate a sequence of characters corresponding to the text associated [0014] vehicle license plate [0015]) concerning license plate numbers (vehicle license plate [0015]) and confidence values (confidence level associated with predictions [0105]) associated with the predictions (each position can represent a predicted character or a blank space [0100], Figs. 5A-5B);
and selecting one license plate number based on the predictions and the confidence values (initial predictions each are associated with a respective likelihood for the initially-predicted character to be in the location, and wherein the respective likelihoods form a probability vector [0143]).
One of ordinary skill in the art, before the effective filing date of the claimed invention, would have been motivated to replace the license plate recognition of Langlinais with the license plate recognition of Wong because Wong teaches that the newer deep learning model produces accurate results and does not require human intervention, improving on prior art license plate detection ([0009]-[0011]).
Regarding Claim 12, Langlinais (US PG Publication 2006/0269105) discloses the method of claim 2.
Langlinais does not disclose, but Asao (US PG Publication 2021/0406083) teaches further comprising generating the full-scene track of the vehicle while the school bus is in motion (During travel of the vehicle 10, the processor 24 executes the vehicle control process, based on an image received from one of the cameras 2-1 to 2-n at each predetermined timing [0031]).
One of ordinary skill in the art, before the effective filing date of the claimed invention, would have been motivated to detect the vehicles in the system of Langlinais using the deep neural network of Asao because Asao suggests that it can shorten the time to execute detection without reducing accuracy [0050], improving detection.
Regarding Claim 14, Langlinais (US PG Publication 2006/0269105) discloses the method of claim 1, further comprising dynamically adjusting an exposure and gain of a video frame of one of the videos by estimating a location of the license plate of the vehicle (while the license plate is in a license plate portion of the field of view, an exposure for the device based on a meter reading that places greater weight on the license plate portion of the field [0050]).
Regarding Claim 15, Langlinais (US PG Publication 2006/0269105) discloses a system for automatically detecting a school bus stop-arm violation, comprising:
a camera hub (capture apparatus 100 [0075]) configured to be coupled to an exterior side of a school bus (side of bus 15 [0076]) …;
wherein the control unit comprises one or more processors programmed to execute instructions (software and hardware carry out the operation desired of the present invention [0103]).
Langlinais does not disclose, but Leonard (US PG Publication 2009/0195651) teaches camera … in between two immediately adjacent windows … (cameras are mounted 37 on the left external side of said school bus approximately one-fifth of the distance from the front of said school bus approximately under the school bus driver's and first passenger's windows [0028]).
The remainder of Claim 15 is rejected on the grounds provided in Claim 1.
One of ordinary skill in the art, before the effective filing date of the claimed invention, would have found the placement of the cameras obvious because the placement of the cameras between windows does not affect the operation of the camera system. See MPEP 2144.04, citing In re Japikse, 181 F.2d 1019, 86 USPQ 70 (CCPA 1950).
Regarding Claim 20, Langlinais (US PG Publication 2006/0269105) discloses one or more non-transitory computer-readable media comprising instructions stored thereon, that when executed by one or more processors, perform steps (software and hardware carry out the operation desired of the present invention [0103]).
The remainder of Claim 20 is rejected on the grounds provided in Claim 1.
Regarding Claim 21, the claim is rejected on the grounds provided in Claim 2.
Regarding Claim 22, the claim is rejected on the grounds provided in Claim 3.
Regarding Claim 24, the claim is rejected on the grounds provided in Claim 5.
Regarding Claim 25, the claim is rejected on the grounds provided in Claim 12.
Regarding Claim 26, the claim is rejected on the grounds provided in Claim 2.
Regarding Claim 27, the claim is rejected on the grounds provided in Claim 3.
Regarding Claim 29, the claim is rejected on the grounds provided in Claim 5.
Claims 4, 23, 28 are rejected under 35 U.S.C. 103 as being unpatentable over Langlinais (US PG Publication 2006/0269105) in view of Asao (US PG Publication 2021/0406083), Wong (US PG Publication 2023/0237816), and Chen (US 2015/0338515 A1).
Regarding Claim 4, Langlinais (US PG Publication 2006/0269105) discloses the method of claim 3.
Langlinais does not disclose, but Asao (US PG Publication 2021/0406083) teaches wherein generating the full-scene track (transforming the image coordinate of the object into a coordinate in an aerial image …) further comprises associating the image coordinates from at least one of the tracklets with one or more of the other tracklets (combine trajectories of the same object from the different cameras [0055]) using a [] transform algorithm (transforming image coordinates [0055]).
Langlinais does not disclose, but Chen (US 2015/0338515 A1) teaches homography transform (a track of the vehicle detected by the sensors is generated which is a result of the homography transformation [0032]).
One of ordinary skill in the art, before the effective filing date of the claimed invention, would have been motivated to detect the vehicles in the system of Langlinais using the deep neural network of Asao because Asao suggests that it can shorten the time to execute detection without reducing accuracy [0050], improving detection.
One of ordinary skill in the art, before the effective filing date of the claimed invention, would have been motivated to implement the transform of Asao using the homography transform, as in Chen, because Chen teaches that the homography transform automatically eliminates bias error in the trajectory [0032], resulting in improved trajectory tracking.
Regarding Claim 23, the claim is rejected on the grounds provided in Claim 4.
Regarding Claim 28, the claim is rejected on the grounds provided in Claim 4.
Response to Arguments
Applicant’s remarks filed 2/6/2026 have been considered but are unpersuasive.
Applicant argues that Asao does not use the tracking data to constrain downstream processing, determine which video segments are submitted to a license plate recognition module, or select evidentiary video portions. Remarks at 9-10. These arguments are not persuasive against the combination of references because the Office action relies on Langlinais to teach those limitations, and a prima facie case of obviousness does not require Asao to teach them as well.
Applicant argues that Langlinais does not disclose trajectory-based video selection, evidence generation, or license plate detection. Remarks at 10. This is unpersuasive because Langlinais "obtain[s] a video of stop arm violation," Langlinais at [0079]; the vehicle appears to move across the field of view of the camera based on its direction of travel relative to the camera, Langlinais at [0086]-[0088]; the license plate is "in the field of view," Langlinais at [0090]; and both the vehicle and the license plate can be identified from the images, Langlinais at [0139], [0012]. Together, these citations show that Langlinais detects a vehicle in the field of view of the camera over a series of images spanning the stop-arm violation. That is trajectory-based video selection.
Applicant also argues that the Office has not articulated a reason why a person of ordinary skill would modify Langlinais to “repurpose generic object tracking into a trajectory-driven evidentiary gathering mechanism.” Remarks at 10. This is unpersuasive because the Office action cites a teaching from Asao that informs one of ordinary skill of the benefits of the combination. Because Asao combines tracks from multiple cameras and Langlinais employs multiple cameras, accumulating the video streams that contain the same tracked object provides more information than any one camera alone, improving review of the violation. This is not “repurposing,” as Applicant alleges, but improving.
Last, Applicant’s arguments against the rejection of Claims 2-4 are vague and unresponsive. Remarks at 11. The claim limitations are mapped to the references, and Applicant has not specified why the cited passages fail to teach the limitations to which they are mapped.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US 20160144788 A1 – school bus stop violations
US 20180137753 A1 – school bus stop violations
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHADAN E HAGHANI whose telephone number is (571)270-5631. The examiner can normally be reached M-F 9AM - 5PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jay Patel can be reached at 571-272-2988. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SHADAN E HAGHANI/Examiner, Art Unit 2485