Prosecution Insights
Last updated: April 19, 2026
Application No. 18/522,004

DRIVING VIDEO RECORDING SYSTEM AND A CONTROLLING METHOD OF THE SAME AND A MANUFACTURING METHOD OF THE SAME

Status: Non-Final OA (§102)
Filed: Nov 28, 2023
Examiner: NGUYEN, ALLEN H
Art Unit: 2683
Tech Center: 2600 — Communications
Assignee: Sogang University Research & Business Development Foundation
OA Round: 1 (Non-Final)
Grant Probability: 84% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 4m
With Interview: 97%

Examiner Intelligence

Career Allow Rate: 84% — above average (471 granted / 558 resolved; +22.4% vs TC avg)
Interview Lift: +12.8% across resolved cases with interview (moderate, roughly +13%)
Avg Prosecution: 2y 4m typical timeline; 12 applications currently pending
Total Applications: 570 across all art units

Statute-Specific Performance

§101: 11.7% (-28.3% vs TC avg)
§103: 49.7% (+9.7% vs TC avg)
§102: 26.8% (-13.2% vs TC avg)
§112: 8.7% (-31.3% vs TC avg)
Deltas are measured against a Tech Center average estimate • Based on career data from 558 resolved cases
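A quick check on the figures above (assuming the "vs TC avg" values are plain percentage-point deltas): subtracting each delta from the examiner's rate recovers the implied Tech Center baseline, and every statute resolves to the same 40.0%, suggesting a single flat baseline estimate rather than per-statute averages. A minimal sketch:

```python
# Statute -> (examiner rate %, delta vs TC avg in percentage points)
stats = {"§101": (11.7, -28.3), "§103": (49.7, +9.7),
         "§102": (26.8, -13.2), "§112": (8.7, -31.3)}

for statute, (rate, delta) in stats.items():
    # Implied baseline = examiner rate minus the reported delta.
    print(f"{statute}: implied TC average = {rate - delta:.1f}%")
# All four lines print 40.0%, i.e. one flat Tech Center baseline.
```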

Office Action

§102
DETAILED ACTION

Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

2. Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Claim Rejections - 35 USC § 102

3. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

4. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

5. Claims 1-20 are rejected under 35 U.S.C. 102(a)(1)/(a)(2) as being anticipated by SICCONI et al., US Patent Application Publication No. 2023/0112797 (hereinafter SICCONI).

Regarding claim 1, SICCONI discloses a driving video recording system (a system based on vehicle operator attentiveness; Figures 1-3) comprising: a camera module (camera module unit 308, Figure 3) for monitoring surroundings of a vehicle (camera module unit 308 may be mounted next to the rearview mirror to provide the best view of an operator's face while minimizing interference with the road view; paragraph 74, Figure 3; see also paragraph 75 and Figure 4, indicating that monitoring driving conditions may include capturing a video feed of conditions external to the vehicle); a first memory for storing a video transmitted from the camera module; a second memory for storing a computer program for controlling storage of the video (comparing a first frame of the video feed to a second frame of the video feed and determining that a number of pixels exceeding a threshold amount has changed with respect to at least a parameter from the first frame (first memory) to the second frame (second memory); paragraph 34); and a controller including a processor electrically and communicatively connected to the camera, the first memory and the second memory and configured to execute the computer program (a processing unit 312 to analyze and process video streams from the two cameras, and to communicate 316 with a mobile application on a phone 230 or other processing device; paragraph 74), wherein the computer program includes a contamination classification deep-learning network model, and the processor is further configured to determine whether video data obtained by the camera module includes contamination data through the deep-learning network model by executing the computer program (generating the attentiveness level may further include extracting historical operator data and historical driving condition data, generating a machine-learning model as a function of the historical operator data and the historical driving condition data, and making a risk determination using the machine-learning model; paragraphs 34-35, 49, 77). Note: a road-facing camera 116 may include detecting external objects and generating a driving condition datum as a function of the detected external objects; a video feed would therefore be stored in frames (each frame considered as memory).

Regarding claim 2, SICCONI discloses the driving video recording system of claim 1, wherein the processor is further configured to extract a feature value from the video data through the deep-learning network model and determine whether the video data includes the contamination data by comparing the feature value with a set threshold value (a computing device which extracts information from the environment internal to a vehicle; as a non-limiting example, a neural network or machine-learning process may be implemented for computing attentiveness level; paragraphs 34, 53-56).

Regarding claim 3, SICCONI discloses the driving video recording system of claim 2, wherein the processor is further configured to extract a feature for image data of a single frame of the video data for the feature value (detecting a parameter change may include comparing a first frame of a video feed to a second frame of the video feed, and determining that a threshold number of pixels has changed with respect to at least a parameter from the first frame to the second frame; paragraph 34).

Regarding claim 4, SICCONI discloses the driving video recording system of claim 1, wherein the processor is further configured to determine a classification for the contamination data among predetermined contamination type classifications through the deep-learning network model when the processor concludes that the video data includes the contamination data (selection of frame rate may be determined using a machine-learning process; for instance, where object analysis and/or classification has been performed to identify objects in similar video feeds, motion of such objects and rates of pixel parameter changes in video feeds may be correlated in training data derived from such video feeds, and used in any machine-learning, deep learning, and/or neural network process; paragraphs 34-35).

Regarding claim 5, SICCONI discloses the driving video recording system of claim 4, wherein the contamination type classifications includes at least one of a dust, a soil, an ice, or a water droplet (an operator-facing camera 140 which extracts information from the environment internal to a vehicle, or any combination; for example, attention state module 144 may receive data indicating heavy rainfall; paragraph 53).
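For concreteness, here is a minimal sketch of the frame-to-frame comparison the rejection repeatedly cites from SICCONI paragraph 34: a change is flagged when the count of pixels whose parameter shifted exceeds a threshold. The function name, the grayscale-array input, and both threshold values are illustrative assumptions, not taken from the reference:

```python
import numpy as np

def parameter_change_detected(frame_a: np.ndarray, frame_b: np.ndarray,
                              param_delta: int = 10,
                              pixel_threshold: int = 5000) -> bool:
    """Flag a change when more than `pixel_threshold` pixels shift by more
    than `param_delta` in intensity between two frames (cf. SICCONI par. 34).
    Both threshold values are hypothetical placeholders."""
    # Widen the dtype before subtracting so uint8 frames do not wrap around.
    diff = np.abs(frame_a.astype(np.int16) - frame_b.astype(np.int16))
    return int((diff > param_delta).sum()) > pixel_threshold
```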
Regarding claim 6, SICCONI discloses the driving video recording system of claim 4, wherein the deep-learning network model has been trained by classification training with training data for each contamination type (motion of such objects and rates of pixel parameter changes in video feeds may be correlated in training data derived from such video feeds, and used in any machine-learning, deep learning, and/or neural network process to identify rates of pixel parameter change consistent with motion of classified objects; paragraphs 34-35).

Regarding claim 7, SICCONI discloses the driving video recording system of claim 6, wherein the deep-learning network model has been trained by distribution-based separation training with non-contamination training data after the classification.

Regarding claim 8, SICCONI discloses the driving video recording system of claim 7, wherein the distribution-based separation training includes: extracting a plurality of first feature values for the contamination training data through the deep-learning network model (monitoring information extracted by the at least a driving condition sensor 108 may be compiled into at least an operator state datum to give the overall state of a current operator, which may include state of alertness and/or distractibility; paragraph 44); extracting a plurality of second feature values for the non-contamination training data (a datum extracted from a vehicle may include, as non-limiting examples, vehicular data such as vehicular identification number (VIN), odometer readings, measures of rotations per minute (RPM), engine load, miles per gallon; at least paragraphs 47, 53); and determining a threshold value based on the plurality of first feature value distributions and the plurality of second feature value distributions (detecting external objects may further include comparing a first frame of the video feed to a second frame of the video feed and determining that a number of pixels exceeding a threshold amount has changed with respect to at least a parameter from the first frame to the second frame; paragraph 34).

Regarding claim 9, SICCONI discloses a control method of a driving video recording system (a system based on vehicle operator attentiveness; Figures 1-3) including a camera module (camera module unit 308, Figure 3) for monitoring surroundings of a vehicle (camera module unit 308 may be mounted next to the rearview mirror to provide the best view of an operator's face while minimizing interference with the road view; paragraph 74, Figure 3; see also paragraph 75 and Figure 4, indicating that monitoring driving conditions may include capturing a video feed of conditions external to the vehicle), a first memory for storing a video transmitted from the camera module, a second memory for storing a computer program for controlling storage of the video (comparing a first frame of the video feed to a second frame of the video feed and determining that a number of pixels exceeding a threshold amount has changed with respect to at least a parameter from the first frame (first memory) to the second frame (second memory); paragraph 34), and a controller including a processor electrically and communicatively connected to the camera (a processing unit 312 to analyze and process video streams from the two cameras, and to communicate 316 with a mobile application on a phone 230 or other processing device; paragraph 74), the first memory and the second memory and configured for executing the computer program, wherein the computer program includes a contamination classification deep-learning network model (generating the attentiveness level may include extracting historical operator data and historical driving condition data, generating a machine-learning model as a function of the historical operator data and the historical driving condition data, and making a risk determination using the machine-learning model; paragraphs 34-35, 49, 77), the control method comprising: receiving, by the processor, video data from the camera module (a processing unit 312 to analyze and process video streams from the cameras, and to communicate 316 with a mobile application on a phone 230 or other processing device; paragraph 74); and determining, by the processor, whether the video data includes contamination data through the deep-learning network model by executing the computer program (detecting external objects and generating at least a driving condition datum as a function of the detected external objects; external objects include things present in the environment external to a vehicle, weather conditions…, and used in any machine-learning, deep learning, and/or neural network process to identify rates of pixel parameter change consistent with motion of classified objects; paragraphs 34-35). Note: a road-facing camera 116 may include detecting external objects and generating a driving condition datum as a function of the detected external objects; a video feed would therefore be stored in frames (each frame considered as memory).

Regarding claim 10, SICCONI discloses the control method of claim 9, wherein the determining of whether the video data includes the contamination data includes: extracting a feature value from the video data through the deep-learning network model (a computing device which extracts information from the environment internal to a vehicle, or any combination of such devices; as a non-limiting example, a neural network or machine-learning process may be implemented for computing attentiveness level; paragraph 53); and comparing the feature value with a set threshold value to determine whether the video data includes the contamination data (attention state module 144 may alternatively or additionally compare attentiveness level to permissible thresholds, which may include thresholds corresponding to duration, frequency, and/or other patterns, compatible with operator attentiveness computed from a driving context; paragraphs 54-56).
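Claims 2-3 and 10-11 all reduce to the same check: run a single frame through the network, take a scalar feature value, and compare it against a preset threshold. A minimal sketch of that pipeline follows; `extract_feature` stands in for the application's (non-public) contamination-classification network, and every name and value here is a hypothetical placeholder:

```python
import numpy as np
from typing import Callable

def is_contaminated(frame: np.ndarray,
                    extract_feature: Callable[[np.ndarray], float],
                    threshold: float) -> bool:
    """Claimed check (claims 2-3, 10-11): extract a scalar feature value from
    a single frame and compare it with a set threshold value."""
    return extract_feature(frame) >= threshold

# Hypothetical usage with a trivial stand-in scoring function (not the real model).
score_fn = lambda f: float(f.std())
frame = np.zeros((480, 640), dtype=np.uint8)
print(is_contaminated(frame, score_fn, threshold=5.0))
```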
Regarding claim 11, SICCONI discloses the control method of claim 10, wherein the extracting of the feature value includes extracting a feature for image data of a single frame of the video data (detecting a parameter change may include comparing a first frame of a video feed to a second frame of the video feed, and determining that a threshold number of pixels has changed with respect to at least a parameter from the first frame to the second frame; paragraph 49).

Regarding claim 12, SICCONI discloses the control method of claim 9, further including determining a classification for the contamination data among predetermined contamination type classifications through the deep-learning network model when the processor concludes that the video data includes the contamination data (a computing device which extracts information from the environment internal to a vehicle, or any combination of such devices; as a non-limiting example, a neural network or machine-learning process may be implemented for computing attentiveness level 148; a neural network may be used to analyze external extracted parameters to determine if the operator is attentive; paragraph 53).

Regarding claim 13, SICCONI discloses the control method of claim 12, wherein the contamination type classifications includes at least one of a dust, a soil, an ice, or a water droplet (an operator-facing camera 140 which extracts information from the environment internal to a vehicle, or any combination; for example, attention state module 144 may receive data indicating heavy rainfall; paragraph 53).

Regarding claim 14, SICCONI discloses the control method of claim 12, wherein the deep-learning network model has been trained by classification training with training data for each contamination type (motion of such objects and rates of pixel parameter changes in video feeds may be correlated in training data derived from such video feeds, and used in any machine-learning, deep learning, and/or neural network process to identify rates of pixel parameter change consistent with motion of classified objects; paragraphs 34-35).

Regarding claim 15, SICCONI discloses the control method of claim 13, wherein the deep-learning network model has been trained by distribution-based separation training with non-contamination training data after the classification (a datum extracted from a vehicle may include, as non-limiting examples, vehicular data such as vehicular identification number (VIN), odometer readings, measures of rotations per minute (RPM), engine load, miles per gallon; at least paragraphs 47, 53).
Regarding claim 16, SICCONI discloses the control method of claim 15, wherein the distribution-based separation training includes: extracting a plurality of first feature values for the contamination training data through the deep-learning network model (monitoring information extracted by the at least a driving condition sensor 108 may be compiled into at least an operator state datum to give the overall state of a current operator, which may include state of alertness and/or distractibility; paragraph 44); extracting a plurality of second feature values for the non-contamination training data (a datum extracted from a vehicle may include, as non-limiting examples, vehicular data such as vehicular identification number (VIN), odometer readings, measures of rotations per minute (RPM), engine load, miles per gallon; at least paragraphs 47, 53); and determining a threshold value based on the plurality of first feature value distributions and the plurality of second feature value distributions (detecting external objects may further include comparing a first frame of the video feed to a second frame of the video feed and determining that a number of pixels exceeding a threshold amount has changed with respect to at least a parameter from the first frame to the second frame; paragraph 34).

Regarding claim 17, SICCONI discloses a method for manufacturing a driving video recording system (a system for using artificial intelligence to present geographically relevant user-specific recommendations based on vehicle operator attentiveness; Figures 1-3) including a camera module (camera module unit 308, Figure 3) for monitoring surroundings of a vehicle (paragraph 75, Figure 4, indicating that monitoring driving conditions may include capturing a video feed of conditions external to the vehicle), a first memory for storing a video transmitted from the camera module, a second memory for storing a computer program for controlling storage of the video (comparing a first frame of the video feed to a second frame of the video feed and determining that a number of pixels exceeding a threshold amount has changed with respect to at least a parameter from the first frame (first memory) to the second frame (second memory); paragraph 34) and including a contamination classification deep-learning network model (a computing device which extracts information from the environment internal to a vehicle, or any combination of such devices; as a non-limiting example, a neural network or machine-learning process may be implemented for computing attentiveness level; paragraph 53), and a controller including a processor electrically and communicatively connected to the camera (a processing unit 312 to analyze and process video streams from the two cameras, and to communicate 316 with a mobile application on a phone 230 or other processing device; paragraph 74), the first memory and the second memory and configured for executing the computer program, the method comprising: training the deep-learning network model by classification training with training data for each contamination type (comparing a first frame of the video feed to a second frame of the video feed and determining that a number of pixels exceeding a threshold amount has changed with respect to at least a parameter from the first frame to the second frame; a road-facing camera 116 may capture a multitude of parameters such as weather conditions, traffic information, proximity to surrounding cars; paragraphs 34-37, 42). Note: a road-facing camera 116 may include detecting external objects and generating a driving condition datum as a function of the detected external objects; a video feed would therefore be stored in frames (each frame considered as memory).

Regarding claim 18, SICCONI discloses the method of claim 17, further including training the deep-learning network model by distribution-based separation training with non-contamination training data after the classification training (a datum extracted from a vehicle may include, as non-limiting examples, vehicular data such as vehicular identification number (VIN), odometer readings, measures of rotations per minute (RPM), engine load, miles per gallon; at least paragraphs 47, 53).

Regarding claim 19, SICCONI discloses the method of claim 18, wherein the distribution-based separation training includes: extracting a plurality of first feature values for the contamination training data through the deep-learning network model (monitoring information extracted by the at least a driving condition sensor 108 may be compiled into at least an operator state datum to give the overall state of a current operator, which may include state of alertness and/or distractibility; paragraph 44); extracting a plurality of second feature values for the non-contamination training data (a datum extracted from a vehicle may include, as non-limiting examples, vehicular data such as vehicular identification number (VIN), odometer readings, measures of rotations per minute (RPM), engine load, miles per gallon; at least paragraphs 47, 53); and determining a threshold value based on the plurality of first feature value distributions and the plurality of second feature value distributions (a datum extracted from a vehicle may include, as non-limiting examples, vehicular data such as vehicular identification number (VIN), odometer readings, measures of rotations per minute (RPM), engine load, miles per gallon; at least paragraphs 47, 53).

Regarding claim 20, SICCONI discloses the method of claim 17, wherein the classification for each contamination type includes at least one of a dust, a soil, an ice, or a water droplet (an operator-facing camera 140 which extracts information from the environment internal to a vehicle, or any combination; for example, attention state module 144 may receive data indicating heavy rainfall; paragraph 53).

Information Disclosure Statement

6. The information disclosure statement (IDS) submitted on 11/28/2023 was filed in compliance with the provisions of 37 CFR 1.97 and 1.98. Accordingly, the information disclosure statement is being considered by the examiner.

Cited Art

7. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

8. Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALLEN H NGUYEN, whose telephone number is (571) 270-1229. The examiner can normally be reached M-F, 7 am-4 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, ABDERRAHIM MEROUAN, can be reached at (571) 270-5254. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ALLEN H NGUYEN/
Primary Examiner, Art Unit 2683
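Claims 8, 16, and 19 recite the same three-step training routine: extract feature values for contamination samples, extract feature values for non-contamination samples, then set the decision threshold from the two distributions. The application does not publish how the threshold is chosen; the sketch below picks the point with equal z-score distance to the two class means as one plausible reading, with all names and values hypothetical:

```python
import numpy as np

def separation_threshold(contam: np.ndarray, clean: np.ndarray) -> float:
    """One plausible reading of claims 8/16/19: derive a threshold from the
    contamination and non-contamination feature-value distributions. Here we
    take the point equidistant from both class means in z-score terms; the
    application may define the rule differently."""
    mu_c, sd_c = contam.mean(), contam.std()
    mu_n, sd_n = clean.mean(), clean.std()
    return (mu_c * sd_n + mu_n * sd_c) / (sd_c + sd_n)

# Hypothetical usage with synthetic feature values.
rng = np.random.default_rng(0)
t = separation_threshold(rng.normal(0.8, 0.05, 500), rng.normal(0.2, 0.10, 500))
print(f"threshold = {t:.3f}")  # lands between the means, nearer the tighter class
```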

Prosecution Timeline

Nov 28, 2023
Application Filed
Mar 07, 2026
Non-Final Rejection — §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603962
INFORMATION PROCESSING APPARATUS, NON-TRANSITORY COMPUTER READABLE MEDIUM, AND METHOD
2y 5m to grant • Granted Apr 14, 2026
Patent 12596508
INFORMATION PROCESSING SYSTEM AND NON-TRANSITORY COMPUTER READABLE MEDIUM STORING PROGRAM
2y 5m to grant • Granted Apr 07, 2026
Patent 12591956
NOISE REDUCTION AND FEATURE ENHANCEMENT FOR A THERMAL IMAGING CAMERA
2y 5m to grant • Granted Mar 31, 2026
Patent 12586188
METHOD AND DEVICE FOR GENERATING A THREE-DIMENSIONAL SYNTHETIC IMAGE FROM A THREE-DIMENSIONAL INPUT IMAGE
2y 5m to grant • Granted Mar 24, 2026
Patent 12553711
REFLECTION REFUTING LASER SCANNER
2y 5m to grant • Granted Feb 17, 2026
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 84%
With Interview (+12.8%): 97%
Median Time to Grant: 2y 4m
PTA Risk: Low
Based on 558 resolved cases by this examiner. Grant probability derived from career allow rate.
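The headline projections follow directly from the career counts above, assuming the grant probability is the raw allow ratio and the interview lift is additive (both assumptions; the tool's exact model is not disclosed):

```python
granted, resolved = 471, 558   # career counts shown on this page
base = granted / resolved      # 0.844 -> rounds to the 84% shown
with_interview = base + 0.128  # +12.8 pt interview lift -> rounds to 97%
print(f"base {base:.1%}, with interview {with_interview:.1%}")
```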
