DETAILED ACTION
Response to Amendment
This Office action regarding application number 18/031,027, filed April 10, 2023, is in response to the applicant's arguments and amendments filed May 29, 2025. Claims 1 and 8-9 have been amended. Claim 4 has been cancelled. New claim 13 has been added. Claims 1-3 and 5-13 are currently pending and are addressed below.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
The applicant's arguments and amendments to the application have overcome some of the objections and rejections previously set forth in the Non-Final Office action mailed January 29, 2025. The applicant's amendments to the specification have been deemed sufficient to overcome the previous objection; therefore, the objection is withdrawn. Claim 4 has been cancelled, and therefore all associated objections and rejections are withdrawn. The applicant's amendments to claim 1 have been deemed sufficient to overcome the previous 35 U.S.C. 103 rejections through the inclusion of “and providing 360 recording of third party vehicles and road conditions surrounding the exterior of the first vehicle … and wherein when the logic module determines that the one or more events of interest comprise one or more traffic violations, the logic module ascertains a pre-determined and post-determined time frame of the one or more traffic violations and blends the pre-determined and post-determined time frame with the one or more streams of video information to create a video of the one or more traffic violations”; therefore, those rejections are withdrawn. However, because these amendments change the scope of the claims, new art rejections have been made based on the change in scope. New rejections for claim 13 have also been added.
Applicant’s arguments with respect to claim(s) 1 and the Julian and Slavin references have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “a medium for storing” in claim 1.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Regarding “a medium for storing,” the specification recites the structure of “The cameras store the information in a memory” in Paragraph [0007], and “In one embodiment, the memory 5 comprises a hard drive. In one embodiment, the memory 5 comprises a solid state drive” in Paragraph [0039]. Therefore, the “medium for storing” and the storage device are interpreted as a memory drive (e.g., a hard drive or solid state drive) for storing data.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-3 and 5-13 are rejected under 35 U.S.C. 103 as being unpatentable over Julian (US-20170200061) in view of Alon (US-20110234749).
Regarding claim 1, Julian teaches a computerized system for detection of traffic interactions utilizing artificial intelligence comprising (Abstract, "Systems and methods provide, implement, and use using a computer-vision based methods of context-sensitive monitoring and characterization of driver behavior")
a first vehicle, the first vehicle comprising an interior and an exterior (Paragraph [0014], "The method generally includes receiving visual data from a camera at a first device; wherein the camera is affixed to a first vehicle")
a camera mounted on the first vehicle, the camera facing away from the interior of the first vehicle, the camera recording one or more streams of video information in digital format (Paragraph [0031], "In some embodiments, visual data captured at a camera affixed to a vehicle may be used as the basis for detecting a driving event")
a plurality of sensors mounted on the first vehicle, the one or more sensors recording one or more streams of sensor information in digital format (Paragraph [0045], "The device 100 may include input sensors (which may include a forward facing camera 102, a driver facing camera 104, connections to other cameras that are not physically mounted to the device, inertial sensors 106, car OBD-II port sensor data (which may be obtained through a Bluetooth connection 108), and the like) and compute capability 110.")
a medium for storing data in digital format located in the interior of the first vehicle, the medium in communication with the plurality of cameras and the plurality of sensors (Paragraph [0045], "The device may further include memory storage 114")
a non-transitory storage device embodying one or more routines operable to (Paragraph [0013], "Certain aspects of the present disclosure provide a non-transitory computer-readable medium having program code recorded thereon")
detect objects using artificial neural networks (Paragraph [0033], "In some embodiments, bounding boxes for objects may be produced by a neural network that has been trained to detect and classify objects that are relevant to driving, such as traffic lights, traffic signs, and vehicles.")
the non-transitory storage device comprising a receiver module, a detector module, and a logic module (Paragraph [0048], "The system may include sensors 210, profiles 230, sensory recognition and monitoring modules 240, assessment modules 260") (See also Figure 2 showing a plurality of individual modules)
and a CPU in communication with the non-transitory storage device, the CPU operable to execute the one or more routines embodied in the non-transitory storage device (Paragraph [0011], "The apparatus generally includes a first memory unit; a second memory unit; a first at least one processor coupled to the first memory unit; and a second at least one processor coupled to the first memory unit")
wherein the one or more streams of video information comprise activities of third party vehicles and persons located on the exterior of the first vehicle and/or the road conditions surrounding the exterior of the first vehicle (Paragraph [0033], "In some embodiments, bounding boxes for objects may be produced by a neural network that has been trained to detect and classify objects that are relevant to driving, such as traffic lights, traffic signs, and vehicles.") (Paragraph [0065], “Likewise, a pedestrian may be detected based on visual data”)
wherein data in digital format that are stored in the medium for storing data are transmitted to the receiver module (See Figure 2 showing sensor data being transferred to sensory recognition and monitoring modules)
wherein the receiver module detects one or more images in the received data in digital format that comprises video information (Paragraph [0012], "The apparatus generally includes means for receiving visual data from a first camera at a first device") (Paragraph [0103], "The aforementioned driver monitoring systems may include a general assessment system 260 that may be based on a set of modules 240. A combination of modules may determine the car and environment status using a mixture of cameras 212, inertial sensors 214, GPS 222, cloud data 224, profile data 230, which may include vehicle 234 and driver profiles 232, and other inputs 210. These inputs may then be the basis of a plurality of inferences 240", here the system receives visual data, including images, at the modules that determine the status of the environment, with the image data received from sensor modules such as cameras, inertial sensors, and GPS)
wherein the detector module selects one or more events of interest from the received data in digital format that comprises video information (Paragraph [0012], "means for detecting the event based at least in part on the visual data and a first inference engine") (Paragraph [0033], "Several means for detecting an event based on visual data are contemplated. In some embodiments, bounding boxes for objects may be produced by a neural network that has been trained to detect and classify objects that are relevant to driving, such as traffic lights, traffic signs, and vehicles," here the sensor modules are sending information to the evaluation modules which are used to detect events based on the visual data)
wherein the logic module determines if the one or more events of interest that were selected by the detector module comprise one or more traffic violations performed by the third party vehicles and persons (Paragraph [0012], "means for determining the descriptor of the event; and means for transmitting a first data comprising the descriptor from the first device to a second device") (Paragraph [0039], "In addition to detecting driving events that may not be otherwise detectable, visual information may be used to classify a behavior in a context-sensitive manner. Returning to the example of running a red-light, typically, running a red light may be considered a ‘bad’ driving behavior," the modules of the system then determine descriptors for the event to classify if the event was an actionable event such as violating traffic laws by running a red light)
and wherein when the logic module determines that the one or more events of interest comprise one or more traffic violations (Paragraph [0039], "In addition to detecting driving events that may not be otherwise detectable, visual information may be used to classify a behavior in a context-sensitive manner. Returning to the example of running a red-light, typically, running a red light may be considered a ‘bad’ driving behavior," the modules of the system then determine descriptors for the event to classify if the event was an actionable event such as violating traffic laws by running a red light).
However, while Julian teaches the use of a plurality of cameras (Paragraph [0047], “For example, a device mounted to a truck may have a memory capacity to store three hours of high-definition video captured by inward and outward facing cameras”),
Julian does not explicitly teach a plurality of cameras mounted on the first vehicle, the plurality of cameras facing away from the interior of the first vehicle, providing 360° recording of third party vehicles and road conditions surrounding the exterior of the first vehicle, wherein when the logic module determines that the one or more events of interest comprise one or more traffic violations, the logic module ascertains a pre-determined and post-determined time frame of the one or more traffic violations, and blends the pre-determined and post-determined time frame with the one or more streams of video information to create a video of the one or more traffic violations.
Alon teaches systems and methods for detecting and recording traffic law violation events using cameras on a law enforcement unit, including
a plurality of cameras mounted on the first vehicle, the plurality of cameras facing away from the interior of the first vehicle (Paragraph [0011], “According to further features in preferred embodiments of the invention described below, the array of cameras includes: (i) at least 4 wide angled cameras positioned to view primary relative directions and provide a substantially 360.degree. field of view for detecting an object within the field of view”)
providing 360° recording of third party vehicles and road conditions surrounding the exterior of the first vehicle (Paragraph [0011], “According to further features in preferred embodiments of the invention described below, the array of cameras includes: (i) at least 4 wide angled cameras positioned to view primary relative directions and provide a substantially 360.degree. field of view for detecting an object within the field of view”)
wherein when the logic module determines that the one or more events of interest comprise one or more traffic violations (Paragraph [0024], “When a traffic law violation event is detected, all the recorded video, images of identification objects, ID objects images, and other relevant data is saved to document the detected traffic violation event,” here the system is determining if traffic violation events are occurring around the host vehicle)
the logic module ascertains a pre-determined and post-determined time frame of the one or more traffic violations (Paragraph [0021], “The recorded video of a traffic law violation event typically includes several seconds before the traffic law violation event and a few seconds after the conclusion of the traffic law violation event,” here the system is recording data for a time frame before/pre, during, and after/post a traffic violation event) (Paragraph [0024], “When a traffic law violation event is detected, all the recorded video, images of identification objects, ID objects images, and other relevant data is saved to document the detected traffic violation event including video that was kept in temporary memory, for future use, for example, for issuing and handling a citation or as evidence for the occurrence of traffic violation event.”) (Paragraph [0064], “System 100 continuously records video image frames from N wide cameras 50 into a temporary memory (step 230), keeping video back for several seconds, depending on the memory size and predefined buffer size selection,” here the system continuously records information and keeps a predefined buffer of time)
and blends the pre-determined and post-determined time frame with the one or more streams of video information to create a video of the one or more traffic violations (Paragraph [0021], “The recorded video of a traffic law violation event typically includes several seconds before the traffic law violation event and a few seconds after the conclusion of the traffic law violation event,” here the system is outputting a recorded video including the pre and post time frames of the event from the plurality of cameras).
Julian and Alon are analogous art as they are both generally related to monitoring vehicles and the surroundings of vehicles.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant application to include a plurality of cameras mounted on the first vehicle, the plurality of cameras facing away from the interior of the first vehicle, providing 360° recording of third party vehicles and road conditions surrounding the exterior of the first vehicle, wherein when the logic module determines that the one or more events of interest comprise one or more traffic violations, the logic module ascertains a pre-determined and post-determined time frame of the one or more traffic violations, and blends the pre-determined and post-determined time frame with the one or more streams of video information to create a video of the one or more traffic violations of Alon in the system for detecting traffic events of Julian with a reasonable expectation of success in order to improve the safety of the roadway by monitoring 360 degrees around the vehicle and providing complete evidence of traffic violations to authorities (Paragraph [0006], “Thus, there is a need for and it would be advantageous to have a system including multiple cameras mounted on a law enforcement vehicle or concealed therein, having side looking cameras, for automatically detecting and recording in real time traffic law violation events in a manner so as to provide evidentiary records for traffic violation enforcement purposes.”).
Regarding claim 2, the combination of Julian and Alon teaches the system as discussed above in claim 1, Julian further teaches wherein one or more of the cameras comprise high resolution video cameras (Paragraph [0047], "For example, a device mounted to a truck may have a memory capacity to store three hours of high-definition video captured by inward and outward facing cameras.").
Regarding claim 3, the combination of Julian and Alon teaches the system as discussed above in claim 1, Julian further teaches wherein the one or more sensors comprise radar, laser, LIDAR or combinations thereof (Paragraph [0060], "In another example, other sensors, such as RADAR, Ultrasound (SONAR), or LIDAR, may be used to determine the distance to the vehicle ahead. In addition, multiple methods may be combined to estimate the distance.").
Regarding claim 5, the combination of Julian and Alon teaches the system as discussed above in claim 1, Julian further teaches wherein the one or more traffic violations comprise at least one of the group consisting of: driving with an expired registration, driving under the influence, one or more crimes, defective equipment, illegal equipment, road accidents, safety hazards, littering, mobile phone usage while operating a motor vehicle, illegal lane changing, illegal passing of a vehicle in motion, domestic violence, road-rage, following another vehicle at an unsafe distance, speeding, reckless driving, reckless endangerment, and combinations thereof (Paragraph [0087-0088], "The driver monitoring system may also capture a clip of violation events, or violation events above a given rating. … In one embodiment, the aforementioned driver monitoring system may be configured to report speed violations.") (Paragraph [0048], "Contemplated driver assessment modules include speed assessment 262, safe following distance 264, obeying traffic signs and lights 266, safe lane changes and lane position 268, hard accelerations including turns 270, responding to traffic officers, responding to road conditions 272, and responding to emergency vehicles.").
Regarding claim 6, the combination of Julian and Alon teaches the system as discussed above in claim 1, Julian further teaches transmitting the determined events to a remote location (Paragraph [0009], “The present disclosure also provides systems and methods for unsupervised learning of action values, monitoring of a driver's environment, and transmitting visual data and/or descriptors of visual data from a client to a server”).
However Julian does not explicitly teach wherein the video information identified to comprise one or more actionable events is communicated to a local authority.
Alon further teaches wherein the video information identified to comprise one or more actionable events is communicated to a local authority (Paragraph [0006], “Thus, there is a need for and it would be advantageous to have a system including multiple cameras mounted on a law enforcement vehicle or concealed therein, having side looking cameras, for automatically detecting and recording in real time traffic law violation events in a manner so as to provide evidentiary records for traffic violation enforcement purposes.”) (Paragraph [0014], “According to still further features in the described preferred embodiments the system further includes (e) a reporting unit, which is operable to report the detected law violation, and can be either a local citation issuing unit or a remote citation issuing unit.”).
Julian and Alon are analogous art as they are both generally related to monitoring vehicles and the surroundings of vehicles.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant application to include wherein the video information identified to comprise one or more actionable events is communicated to a local authority of Alon in the system for detecting traffic events of Julian with a reasonable expectation of success in order to improve the safety of the roadway by monitoring 360 degrees around the vehicle and providing complete evidence of traffic violations to authorities (Paragraph [0006], “Thus, there is a need for and it would be advantageous to have a system including multiple cameras mounted on a law enforcement vehicle or concealed therein, having side looking cameras, for automatically detecting and recording in real time traffic law violation events in a manner so as to provide evidentiary records for traffic violation enforcement purposes.”).
Regarding claim 7, the combination of Julian and Alon teaches the system as discussed above in claim 1; however, Julian does not explicitly teach wherein the video information and the sensor information are time stamped, wherein sensor information corresponding by time stamp to the video information that is identified to comprise one or more actionable events is communicated to the local authority.
Alon further teaches wherein the video information and the sensor information are time stamped (Paragraph [0013], “According to still further features in the described preferred embodiments the recording unit is further configured to record data, for use as evidentiary material such as current speed of the law enforcement unit, the geographical location, current date and current time.”)
wherein sensor information corresponding by time stamp to the video information that is identified to comprise one or more actionable events is communicated to the local authority (Paragraph [0006], “Thus, there is a need for and it would be advantageous to have a system including multiple cameras mounted on a law enforcement vehicle or concealed therein, having side looking cameras, for automatically detecting and recording in real time traffic law violation events in a manner so as to provide evidentiary records for traffic violation enforcement purposes.”) (Paragraph [0014], “According to still further features in the described preferred embodiments the system further includes (e) a reporting unit, which is operable to report the detected law violation, and can be either a local citation issuing unit or a remote citation issuing unit.”).
Julian and Alon are analogous art as they are both generally related to monitoring vehicles and the surroundings of vehicles.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant application to include wherein the video information and the sensor information are time stamped, wherein sensor information corresponding by time stamp to the video information that is identified to comprise one or more actionable events is communicated to the local authority of Alon in the system for detecting traffic events of Julian with a reasonable expectation of success in order to improve the safety of the roadway by monitoring 360 degrees around the vehicle and providing complete evidence of traffic violations to authorities (Paragraph [0006], “Thus, there is a need for and it would be advantageous to have a system including multiple cameras mounted on a law enforcement vehicle or concealed therein, having side looking cameras, for automatically detecting and recording in real time traffic law violation events in a manner so as to provide evidentiary records for traffic violation enforcement purposes.”).
Regarding claim 8, the combination of Julian and Alon teaches the system as discussed above in claim 1; however, Julian does not explicitly teach wherein the traffic violations comprise license plate tracking, facial recognition, facial tracking, traffic flow data, and combinations thereof.
Alon further teaches wherein the traffic violations comprise license plate tracking, facial recognition, facial tracking, traffic flow data, and combinations thereof (Paragraph [0017], “According to still further features in the described preferred embodiments the set of identification features can be a license plate, a vehicle model, a vehicle color or a face.”) (Paragraph [0021], “The relevant data may include the host vehicle speed, geographical location, time, close-up image of ID object, or any other relevant data.”).
Julian and Alon are analogous art as they are both generally related to monitoring vehicles and the surroundings of vehicles.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant application to include wherein the traffic violations comprise license plate tracking, facial recognition, facial tracking, traffic flow data, and combinations thereof of Alon in the system for detecting traffic events of Julian with a reasonable expectation of success in order to improve the safety of the roadway by monitoring 360 degrees around the vehicle and providing complete evidence of traffic violations to authorities (Paragraph [0006], “Thus, there is a need for and it would be advantageous to have a system including multiple cameras mounted on a law enforcement vehicle or concealed therein, having side looking cameras, for automatically detecting and recording in real time traffic law violation events in a manner so as to provide evidentiary records for traffic violation enforcement purposes.”).
Regarding claim 9, the combination of Julian and Alon teaches the system as discussed above in claim 1, Julian further teaches wherein the non-transitory storage device further comprises a training module, wherein the training module trains the artificial neural networks to detect traffic violations based on previously manually classified images or series of images (Paragraph [0036], "The network may then be further trained using a custom dataset relevant to driver monitoring, which may contain images from cameras that were affixed to cars, and which may contain annotated cars, trucks, traffic lights, traffic signs, and the like. In addition, or alternatively, the dataset may contain images that do not have human annotations, which may be used for unsupervised or semi-supervised training. In some embodiments, the neural network may be configured to produce bounding boxes and class identifiers," here the neural networks of the system can be trained to detect events based on human annotations classifying the images) (Paragraph [0112], "In one configuration, hand coded rules could be used to determine initial training values for initializing a system. The system may then be further trained and updated using reinforcement learning," here the system can initially be trained using manually determined rules and then further trained using reinforcement learning).
Regarding claim 10, the combination of Julian and Alon teaches the system as discussed above in claim 1, and Julian further teaches wherein the previously classified images or series of images are manually classified according to laws, regulations, and combinations thereof (Paragraph [0036], "The network may then be further trained using a custom dataset relevant to driver monitoring, which may contain images from cameras that were affixed to cars, and which may contain annotated cars, trucks, traffic lights, traffic signs, and the like. In addition, or alternatively, the dataset may contain images that do not have human annotations, which may be used for unsupervised or semi-supervised training. In some embodiments, the neural network may be configured to produce bounding boxes and class identifiers," here the neural networks of the system are trained using annotated images that have been classified with objects such as vehicles, traffic lights, and traffic signs, which indicate laws and regulations).
Regarding claim 11, the combination of Julian and Alon teaches the system as discussed above in claim 1, and Julian further teaches wherein the previously classified images or series of images have been manually classified to comprise traffic violations (Paragraph [0120], "In this example, the second device may have trained a reinforcement learning model that may output a safe or unsafe action based on visual data, and an inferred action of the driver may be compared to the output of the model. Alternatively, or in addition, the reinforcement learning model may output a description of a visual scene corresponding to a safe action being taken in response to a first description of the visual scene. In some embodiments, the reinforcement learning model may be based on a sequence of previously detected objects that were detected in visual data by a deployed device. A reinforcement learning model may be trained and/or updated based on a human operator agreeing or disagreeing with an output of the model," here the system initially classifies images, which are then further manually classified by a human operator agreeing or disagreeing with the classification).
Regarding claim 12, the combination of Julian and Alon teaches the system as discussed above in claim 1, and Julian further teaches wherein the non-transitory storage device and the CPU are located on the interior of the first vehicle (Paragraph [0045], "FIG. 1 illustrates an embodiment of the aforementioned devices, systems and methods for transmitting a descriptor of an event. The device 100 may include input sensors (which may include a forward facing camera 102, a driver facing camera 104, ... The compute capability may be a CPU or an integrated System-on-a-chip (SOC), ... The device may further include memory storage 114," here figure 1 describes the device, which is mounted on the vehicle and includes a storage device and a CPU).
Regarding claim 13, the combination of Julian and Alon teaches the system as discussed above in claim 1, and Julian further teaches wherein the logic module determines if the one or more events of interest that were selected by the detector module comprise road hazards and, when the one or more events of interest comprise road hazards (Paragraph [0080], “In one configuration, the driver monitoring system may visually detect and categorize potholes and objects in the road 256 and may assess the driver's response to those objects,” here the system detects events relating to road hazards such as potholes),
transmitting determined event information including road hazards to a remote location (Paragraph [0009], “The present disclosure also provides systems and methods for unsupervised learning of action values, monitoring of a driver's environment, and transmitting visual data and/or descriptors of visual data from a client to a server”).
However, Julian does not explicitly teach that identified video information is communicated to a local authority.
Alon further teaches that identified video information is communicated to a local authority (Paragraph [0006], “Thus, there is a need for and it would be advantageous to have a system including multiple cameras mounted on a law enforcement vehicle or concealed therein, having side looking cameras, for automatically detecting and recording in real time traffic law violation events in a manner so as to provide evidentiary records for traffic violation enforcement purposes.”) (Paragraph [0014], “According to still further features in the described preferred embodiments the system further includes (e) a reporting unit, which is operable to report the detected law violation, and can be either a local citation issuing unit or a remote citation issuing unit,” here, while Julian does not explicitly teach transmitting relevant information to a traffic authority, Alon teaches transmitting detected information to police, which could reasonably be combined with the system of Julian, which detects a broader group of traffic events including road hazards).
Julian and Alon are analogous art as they are both generally related to monitoring vehicles and the surroundings of vehicles.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant application to include wherein the video information identified is communicated to a local authority of Alon in the system for detecting traffic events of Julian with a reasonable expectation of success in order to improve the safety of the roadway by monitoring 360 degrees around the vehicle and providing complete evidence of traffic violations to authorities (Paragraph [0006], “Thus, there is a need for and it would be advantageous to have a system including multiple cameras mounted on a law enforcement vehicle or concealed therein, having side looking cameras, for automatically detecting and recording in real time traffic law violation events in a manner so as to provide evidentiary records for traffic violation enforcement purposes.”).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Higgins (US-20040252193) teaches automated traffic violation monitoring and reporting which determines time periods before and after a traffic event. Bulan (US-20160148058) teaches a method for detecting a vehicle performing a traffic violation in a region of interest. Ratti (US-20180211117) teaches an artificial intelligence based system and method for determination of traffic violations.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHRISTOPHER FEES whose telephone number is (303)297-4343. The examiner can normally be reached Monday-Thursday 7:30 - 5:30 MT.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Aniss Chad can be reached on (571) 270-3832. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CHRISTOPHER GEORGE FEES/Examiner, Art Unit 3662