DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Application Status
This Office action is in response to the application filed on 08/16/2024 and the Preliminary Amendment filed on 08/16/2024. Claims 1-2, 4-5, 7-9, 12-13, 15-17, 19-21, 23, 27, 29-30, and 32 are pending and stand rejected.
Priority
Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in the parent Application No. PCT/EP2023/054219, filed on 02/20/2023, which claims priority to Application No. NL2031012, filed on 02/18/2022.
Information Disclosure Statement
The information disclosure statement (IDS) was submitted on 08/16/2024. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Specification
The use of the term “Bluetooth”, which is a trade name or a mark used in commerce, has been noted in this application. The term should be accompanied by the generic terminology; furthermore, the term should be capitalized wherever it appears or, where appropriate, include a proper symbol indicating use in commerce, such as ™, SM, or ®, following the term.
Although the use of trade names and marks used in commerce (i.e., trademarks, service marks, certification marks, and collective marks) is permissible in patent applications, the proprietary nature of the marks should be respected and every effort made to prevent their use in any manner which might adversely affect their validity as commercial marks.
Abstract
The abstract of the disclosure is objected to because of the inclusion of legal phraseology “said” and “means”. A corrected abstract of the disclosure is required and must be presented on a separate sheet, apart from any other text. See MPEP § 608.01(b).
Applicant is reminded of the proper language and format for an abstract of the disclosure. The abstract should be in narrative form and generally limited to a single paragraph on a separate sheet within the range of 50 to 150 words in length. The abstract should describe the disclosure sufficiently to assist readers in deciding whether there is a need for consulting the full patent text for details.
The language should be clear and concise and should not repeat information given in the title. It should avoid using phrases which can be implied, such as, “The disclosure concerns,” “The disclosure defined by this invention,” “The disclosure describes,” etc. In addition, the form and legal phraseology often used in patent claims, such as “means” and “said,” should be avoided.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
Such claim limitation(s) is/are:
-- “a sensing means configured to sense a sequence of signals over time related to the area” in claims 1, 4-5, 9, 23, 27, and 30; and
-- “a processing means configured to” in claims 1-2, 8-9, 12-13, 15-17, 19-21, 23, 27, 30, and 32.
The instant specification provides corresponding structure for “a sensing means configured to sense a sequence of signals over time related to the area” on page 4, paragraph 3. However, the instant specification does not provide corresponding structure for “a processing means configured to”.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 4 contains the trademark/trade name “Bluetooth”. Where a trademark or trade name is used in a claim as a limitation to identify or describe a particular material or product, the claim does not comply with the requirements of 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph. See Ex parte Simpson, 218 USPQ 1020 (Bd. App. 1982). The claim scope is uncertain since the trademark or trade name cannot be used properly to identify any particular material or product. A trademark or trade name is used to identify a source of goods, and not the goods themselves. Thus, a trademark or trade name does not identify or describe the goods associated with the trademark or trade name. In the present case, the trademark/trade name is used to identify/describe “a low-power radio wave” and, accordingly, the identification/description is indefinite.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claim(s) 1-2, 4, and 7-8 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Bernhardt (US 20170186314 A1).
Regarding claim 1, Bernhardt teaches (Currently Amended) A system for determination of traffic flow information in an area comprising a traffic surface, such as a road surface or a pedestrian surface (Bernhardt, at least one para. 0004; “In general, example embodiments of the present invention provide an improved method of traffic lane and signal control identification and traffic flow management. [0039] Vehicle movement through an intersection may be captured in a variety of manners. [0034] A dual-phase traffic light may include, for example, a pedestrian walk/don't walk signal.”), said system comprising:
a sensing means configured to sense a sequence of signals over time related to the area (Bernhardt, at least one para. 0034; “The status of the signal phase and the timing of the state transitions of a traffic light may be collected in real-time (e.g., through traffic controller 10 or survey device 25)”, wherein the survey device 25 is the sensing means); and
a processing means (Bernhardt, at least one para. 0029; “In an example embodiment, the processor 40 may be configured to execute instructions stored in the memory device 60 or otherwise accessible to the processor 40.”) configured to:
receive the sensed sequence of signals (Bernhardt, at least one para. 0045; “Probe data may be retrieved at 400, such as through collection of probe data by a survey device 25, from a memory (e.g., memory 62) of a traffic controller 10, and optionally the vehicle probe data may be periodically reported by a mobile device associated with a respective vehicle to a server for processing, such as a remotely located traffic controller 10.”) over time (Bernhardt, at least one para. 0032; “In general, example embodiments of the present invention may provide a method for determining and cataloging an identification of a traffic light controlling a lane or road's movement through an automated process via analysis of the signal phase and the timing information of the traffic light, and a comparison to vehicle movement through an intersection controlled by the traffic light over one or more periods of time.”) and optionally external data related to the area (Bernhardt, at least one para. 0046; “data on a road network, identifying the lanes that the vehicle was in while entering the intersection, exiting the intersection, and potentially intermediate points in the intersection at 430.”) and (Bernhardt, at least one para. 0034; “The status of the signal phase and the timing of the state transitions of a traffic light may be collected in real-time (e.g., through traffic controller 10 or survey device 25), or predicted through engineering analysis.”, wherein the external data is received from the traffic controller 10);
detect moving objects in the area based on the sensed sequence of signals over time (Bernhardt, at least one para. 0046; “Once the probe data is collected, the categorization of the vehicle may identify the probe detections from each vehicle individually. The probe data may be sorted for each vehicle by timestamp, to properly determine, for example, an intersection entry versus an intersection exit, or multiple intersection entrances and exits, at 420.”); and
determine traffic flow information related to said moving objects in the sensed sequence of signals over time optionally using the external data (Bernhardt, at least one para. 0046; “The collected probe data may be matched to data on a road network, identifying the lanes that the vehicle was in while entering the intersection, exiting the intersection, and potentially intermediate points in the intersection at 430. The path of the vehicle can then be discerned at 440 to plot a path of the vehicle through the intersection. This vehicle path is then used at 330 and 340 of FIG. 3 to identify and catalog the traffic lights and their relation to each lane of the intersection with regard to traffic flow, as described above.”).
Regarding claim 2, Bernhardt teaches (Currently Amended) The system of claim 1, wherein the processing means is configured to determine traffic flow information by determining a model, optionally based on the external data; and determining the traffic flow information related to said moving objects in the sensed sequence of signals over time using the model (Bernhardt, at least one para. 0047; “In an example embodiment, an apparatus for performing the method of FIGS. 3 and 4 above may comprise a processor (e.g., the processor 40) configured to perform some or each of the operations (300-350 and/or 400-440) described above. The processor may, for example, be configured to perform the operations (300-350 and/or 400-440) by performing hardware implemented logical functions, executing stored instructions, or executing algorithms for performing each of the operations. Alternatively, the apparatus may comprise means for performing each of the operations described above. In this regard, according to an example embodiment, examples of means for performing operations 300-350 and/or 400-440 may comprise, for example, the processor 40 and/or a device or circuit for executing instructions or executing an algorithm for processing information as described above.”); and
wherein preferably the processing means is configured to receive new and/or updated external data related to the area and to update the model based on the new and/or updated external data (Bernhardt, at least one para. 0049; “As shown, through monitoring of vehicle paths through intersections based on traffic light states, a complete model of the intersection can be built and modeled to optimize traffic flow. Information from all intersections under the control of a traffic control entity (e.g., municipality or commercial entity) can be compiled to create a traffic flow model that may optimize traffic flow through an urban or suburban environment based on a time of day, event, or other traffic-influencing factor, in order to maximize throughput on existing roadways, minimize fuel consumption, and minimize driver frustration.”, wherein the process of compiling data to create the traffic flow model can be seen as updated external data).
Regarding claim 4, Bernhardt teaches (Currently Amended) The system of claim 1, wherein the sensing means comprises any one of the following: an image capturing means configured to capture a sequence of images, such as a visible light camera or a thermal camera (Bernhardt, at least one para. 0043; “According to some embodiments described herein, probe data may further comprise lane information. The lane information may be gathered by, for example, information derived from images captured by a camera, such as by sensor 49 of computing device 15 when configured as an image sensor.”), a LIDAR configured to capture a sequence of point clouds, a radar, a receiving means with an antenna configured to capture a sequence of signals (Bernhardt, at least one para. 0027; “The illustrated computing device 15 may include an antenna 32 (or multiple antennas) in operable communication with a transmitter 34 and a receiver 36.”), in particular a sequence of short-range signals, such as Bluetooth signals, a sound capturing means, or a combination thereof.
Regarding claim 7, Bernhardt teaches (Currently Amended) The system of claim 1, wherein the external data comprises any one or more of the following: (Bernhardt, at least one para. 0034; “The status of the signal phase and the timing of the state transitions of a traffic light may be collected in real-time (e.g., through traffic controller 10 or survey device 25), or predicted through engineering analysis. The signal phase may include the signal that is presented to a motorist, pedestrian, cyclist, etc., at an intersection. Traffic lights may include various phases. For example, a single-phase traffic light may include a flashing amber or red light indicating right-of-way at an intersection, or a green or red arrow to indicate a protected or prohibited turn. A dual-phase traffic light may include, for example, a pedestrian walk/don't walk signal. A three-phase traffic light may include a conventional green/amber/red traffic light. Embodiments described herein may pertain to all traffic light phases and is not limited to the brief description of phases above. The state transitions may include transitions between phases at a traffic light. A traffic light changing from green to amber is a first state transition, while changing from amber to red is a second state transition. The collected signal phase and timing of the state transitions may be provided through communication protocols either directly to interested users, or through a distribution network shown in FIG. 1.”).
Regarding claim 8, Bernhardt teaches (Currently Amended) The system of claim 1, wherein the processing means is configured to receive the external data from any one or more of the following external sources: (Bernhardt, at least one para. 0020; “The traffic controller 10 may be located proximate the intersection of the traffic light, or the traffic controller may be located remotely from the controlled traffic light and in communication with the traffic light through various types of wired or wireless communications, as further described below. The system may further include a network server 20 that is in communication with the traffic controller, such as via network 30, to provide information and commands to the traffic controller, and/or to receive information and data from the traffic controller, such as traffic volumes, hardware issues, or various other information that may be useful in the control of a traffic system.”).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 5 is rejected under 35 U.S.C. 103 as being unpatentable over Bernhardt (US 20170186314 A1) in view of KORJUS (US 20210343143 A1).
Regarding claim 5, Bernhardt teaches the limitations of claim 1, upon which the instant claim depends, as discussed supra. Further, Bernhardt teaches (Currently Amended) The system of claim 1, wherein the sensing means is configured to sense signals of the sequence of signals over time at consecutive time intervals (Bernhardt, at least one para. 0042; “At 320, the detected probes at the intersection may be extracted for a plurality of cycles of each state. For example, the probe data from traffic flowing through the intersection at a first stage may be collected for 5 to 10 cycles of that first stage.”), said sequence of images preferably comprising images of at least one lane of the road surface and/or images of the pedestrian surface (Bernhardt, at least one para. 0034; “The status of the signal phase and the timing of the state transitions of a traffic light may be collected in real-time (e.g., through traffic controller 10 or survey device 25), or predicted through engineering analysis. The signal phase may include the signal that is presented to a motorist, pedestrian, cyclist, etc., at an intersection.”) and (Bernhardt, at least one para. 0043; “According to some embodiments described herein, probe data may further comprise lane information. The lane information may be gathered by, for example, information derived from images captured by a camera”).
Bernhardt does not explicitly teach preferably at least two signals per second, more preferably at least 10 signals per second; and/or wherein the sensing means comprises an image capturing means configured to capture a sequence of images related to the area at consecutive time intervals, preferably at least two frames per second, more preferably at least 10 frames per second.
KORJUS, in the same field of endeavor (KORJUS, at least one para. 0001; “The invention relates to an autonomous or semiautonomous robot. The invention also relates to a method and system for traffic light detection and usage.”), teaches preferably at least two signals per second, more preferably at least 10 signals per second; and/or wherein the sensing means comprises an image capturing means configured to capture a sequence of images related to the area at consecutive time intervals, preferably at least two frames per second, more preferably at least 10 frames per second (KORJUS, at least one para. 0288; “The speed at which successive images are captured can depend on the camera or can be a programmable parameter. In yet another example, the images are taken by at least one camera that records a video at an intersection. Thus, the images are realized as video frames. The videos can comprise different frame rates (i.e. number of still images per unit of time), such as 1, 6, 8, 24, 60, 120 or more frames per second.”).
Bernhardt and KORJUS are both considered to be analogous to the claimed invention because both of them are in the same field of determining traffic flow information as the claimed invention. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the sensing means of Bernhardt with the teaching of KORJUS. One of ordinary skill in the art would have been motivated to make this modification so that better and more accurate detection of the movements in the images can be attained (KORJUS, at least one para. 0289).
Claim(s) 9, 29, and 30 are rejected under 35 U.S.C. 103 as being unpatentable over Bernhardt (US 20170186314 A1) in view of HEILBRON (US 20230236037 A1).
Regarding claim 9, Bernhardt teaches the limitations of claim 1, upon which the instant claim depends, as discussed supra. Further, Bernhardt teaches (Currently Amended) The system of claim 1, wherein the processing means (Bernhardt, at least one para. 0029; “In an example embodiment, the processor 40 may be configured to execute instructions stored in the memory device 60 or otherwise accessible to the processor 40.”) is configured to:
detect one or more infrastructure elements in the area based on the sensed sequence of signals over time; and
determine the one or more infrastructure elements in the sensed sequence of signals over time optionally using the external data;
wherein preferably the processing means is configured to:
receive a position and sensing direction of the sensing means; and
determine the one or more infrastructure elements in the sensed sequence of signals over time using the position and sensing direction;
wherein preferably the external data comprises a map of the area, and the processing means is configured to determine the one or more infrastructure elements by adjusting the map to the sensed sequence of signals over time using the position and sensing direction of the sensing means.
Bernhardt does not explicitly teach detect one or more infrastructure elements in the area based on the sensed sequence of signals over time; and
determine the one or more infrastructure elements in the sensed sequence of signals over time optionally using the external data;
wherein preferably the processing means is configured to:
receive a position and sensing direction of the sensing means; and
determine the one or more infrastructure elements in the sensed sequence of signals over time using the position and sensing direction;
wherein preferably the external data comprises a map of the area, and the processing means is configured to determine the one or more infrastructure elements by adjusting the map to the sensed sequence of signals over time using the position and sensing direction of the sensing means.
HEILBRON, in the same field of endeavor (HEILBRON, at least one para. 0002; “The present disclosure relates generally to vehicle navigation and, more specifically, to systems and methods for generating a common speed profile for use in navigation by a host vehicle.”) teaches detect one or more infrastructure elements in the area (HEILBRON, at least one para. 0077; “an autonomous vehicle may include a camera and a processing unit that analyzes visual information captured from the environment of the vehicle. The visual information may include, for example, components of the transportation infrastructure (e.g., lane markings, traffic signs, traffic lights, etc.) that are observable by drivers and other obstacles (e.g., other vehicles, pedestrians, debris, etc.).”) based on the sensed sequence of signals over time (HEILBRON, at least one para. 0107; “The first image capture device 122 may acquire a plurality of first images relative to a scene associated with the vehicle 200. Each of the plurality of first images may be acquired as a series of image scan lines”); and
determine the one or more infrastructure elements in the sensed sequence of signals over time optionally using the external data (HEILBRON, at least one para. 0138; “The second processing device may receive images from main camera and perform vision processing to detect other vehicles, pedestrians, lane marks, traffic signs, traffic lights, and other road objects.”);
wherein preferably the processing means is configured to:
receive a position and sensing direction of the sensing means (HEILBRON, at least one para. 0114; “Each image capture device 122, 124, and 126 may be positioned at any suitable position and orientation relative to vehicle 200. The relative positioning of the image capture devices 122, 124, and 126 may be selected to aid in fusing together the information acquired from the image capture devices.”); and
determine the one or more infrastructure elements in the sensed sequence of signals over time using the position and sensing direction (HEILBRON, at least one para. 0138; “The second processing device may receive images from main camera and perform vision processing to detect other vehicles, pedestrians, lane marks, traffic signs, traffic lights, and other road objects. Additionally, the second processing device may calculate a camera displacement and, based on the displacement, calculate a disparity of pixels between successive images and create a 3D reconstruction of the scene (e.g., a structure from motion).”);
wherein preferably the external data comprises a map of the area (HEILBRON, at least one para. 0161; “processing unit 110 may consider the position and motion of other vehicles, the detected road edges and barriers, and/or general road shape descriptions extracted from map data (such as data from map database 160).”), and the processing means is configured to determine the one or more infrastructure elements by adjusting the map to the sensed sequence of signals over time using the position and sensing direction of the sensing means (HEILBRON, at least one para. 0145; “monocular image analysis module 402 may include instructions for detecting a set of features within the set of images, such as lane markings, vehicles, pedestrians, road signs, highway exit ramps, traffic lights, hazardous objects, and any other feature associated with an environment of a vehicle. Based on the analysis, system 100 (e.g., via processing unit 110) may cause one or more navigational responses in vehicle 200, such as a turn, a lane shift, a change in acceleration, and the like, as discussed below in connection with navigational response module 408.”).
Bernhardt and HEILBRON are both considered to be analogous to the claimed invention because both of them are in the same field of determining traffic flow information as the claimed invention. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the processing means of Bernhardt with the teaching of HEILBRON to detect infrastructure elements. One of ordinary skill in the art would have been motivated to make this modification so that fallen road signs or any other structural elements that can be hazards can be identified (HEILBRON, at least one para. 0151).
Regarding claim 29, Bernhardt teaches the limitations of claim 1, upon which the instant claim depends, as discussed supra. Further, Bernhardt teaches (Currently Amended) The system of claim 1, wherein the processing means (Bernhardt, at least one para. 0029; “In an example embodiment, the processor 40 may be configured to execute instructions stored in the memory device 60 or otherwise accessible to the processor 40.”) is configured to:
Bernhardt does not explicitly teach detect one or more persons on the pedestrian surface based on the sensed sequence of signals over time; and
determine traffic flow information related to said one or more persons in the sensed sequence of signals over time optionally using the external data.
HEILBRON, in the same field of endeavor (HEILBRON, at least one para. 0002; “The present disclosure relates generally to vehicle navigation and, more specifically, to systems and methods for generating a common speed profile for use in navigation by a host vehicle.”) teaches detect one or more persons on the pedestrian surface based on the sensed sequence of signals over time (HEILBRON, at least one para. 0148; “In one embodiment, navigational response module 408 may store software executable by processing unit 110 to determine a desired navigational response based on data derived from execution of monocular image analysis module 402 and/or stereo image analysis module 404. Such data may include position and speed information associated with nearby vehicles, pedestrians, and road objects, target position information for vehicle 200, and the like.”); and
determine traffic flow information related to said one or more persons in the sensed sequence of signals over time optionally using the external data (HEILBRON, at least one para. 0148; “Additionally, in some embodiments, the navigational response may be based (partially or fully) on map data, a predetermined position of vehicle 200, and/or a relative velocity or a relative acceleration between vehicle 200 and one or more objects detected from execution of monocular image analysis module 402 and/or stereo image analysis module 404.”).
Bernhardt and HEILBRON are both considered to be analogous to the claimed invention because both are in the same field of determining traffic flow information as the claimed invention. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the processing means of Bernhardt with the teachings of HEILBRON to determine traffic flow using external data. One of ordinary skill in the art would have been motivated to make this modification so that varying road hazards can be identified (HEILBRON, at least one para. 0151).
Regarding claim 30, the combination of Bernhardt and HEILBRON teaches the limitations of claim 29, upon which the instant claim depends, as discussed supra. Further, HEILBRON teaches the system of claim 29, wherein the processing means is configured to:
receive a position and sensing direction of the sensing means (HEILBRON, at least one para. 0145; “monocular image analysis module 402 may include instructions for detecting a set of features within the set of images, such as lane markings, vehicles, pedestrians, road signs, highway exit ramps, traffic lights, hazardous objects, and any other feature associated with an environment of a vehicle. Based on the analysis, system 100 (e.g., via processing unit 110) may cause one or more navigational responses in vehicle 200, such as a turn, a lane shift, a change in acceleration, and the like, as discussed below in connection with navigational response module 408.”); and
determine the traffic flow information related to said one or more persons in the sensed sequence of signals over time using the position and sensing direction (HEILBRON, at least one para. 0148; “Navigational response module 408 may also determine a desired navigational response based on sensory input (e.g., information from radar) and inputs from other systems of vehicle 200, such as throttling system 220, braking system 230, and steering system 240 of vehicle 200. Based on the desired navigational response, processing unit 110 may transmit electronic signals to throttling system 220, braking system 230, and steering system 240 of vehicle 200 to trigger a desired navigational response by, for example, turning the steering wheel of vehicle 200 to achieve a rotation of a predetermined angle. In some embodiments, processing unit 110 may use the output of navigational response module 408 (e.g., the desired navigational response) as an input to execution of velocity and acceleration module 406 for calculating a change in speed of vehicle 200.”);
wherein preferably the external data comprises a map of the pedestrian surface (HEILBRON, at least one para. 0161; “processing unit 110 may consider the position and motion of other vehicles, the detected road edges and barriers, and/or general road shape descriptions extracted from map data (such as data from map database 160).”), and the processing means is configured to determine the one or more persons by adjusting the map to the sensed sequence of signals over time using the position and sensing direction of the sensing means (HEILBRON, at least one para. 0145; “monocular image analysis module 402 may include instructions for detecting a set of features within the set of images, such as lane markings, vehicles, pedestrians, road signs, highway exit ramps, traffic lights, hazardous objects, and any other feature associated with an environment of a vehicle. Based on the analysis, system 100 (e.g., via processing unit 110) may cause one or more navigational responses in vehicle 200, such as a turn, a lane shift, a change in acceleration, and the like, as discussed below in connection with navigational response module 408.”).
Claim(s) 12, 13, 15, 16, 19, and 32 are rejected under 35 U.S.C. 103 as being unpatentable over Bernhardt (US 20170186314 A1) in view of Bulan (US 20150117704 A1).
Regarding claim 12, Bernhardt teaches the limitations of claim 1, upon which the instant claim depends, as discussed supra. Further, Bernhardt teaches the system of claim 1, wherein the processing means (Bernhardt, at least one para. 0029; “In an example embodiment, the processor 40 may be configured to execute instructions stored in the memory device 60 or otherwise accessible to the processor 40.”) is configured to:
determine at least one virtual barrier within the traffic surface; and
count a number of detected moving objects crossing the at least one virtual barrier
Bernhardt does not explicitly teach determine at least one virtual barrier within the traffic surface; and
count a number of detected moving objects crossing the at least one virtual barrier
Bulan, in the same field of endeavor (Bulan, at least one para. 0027; “In one embodiment of this disclosure, described is a computer implemented method of estimating a trajectory of a moving vehicle captured with an image capturing device and determining if the moving vehicle is moving in one of a permitted manner and an unpermitted manner, the image capturing device oriented to include a field of view spanning a vehicle detection target region”) teaches determine at least one virtual barrier within the traffic surface (Bulan, at least one para. 0054; “Step 1) In an initialization step, determine the location of a virtual target area within a field of view of a camera, where the field of view spans a region of interest.”); and
count a number of detected moving objects crossing the at least one virtual barrier (Bulan, at least one para. 0058; “Step 5) If a moving vehicle is detected in the virtual target area, classify the detected vehicle into one of bus or other non-bus vehicle category(ies).”; it is obvious that classifying detected vehicles into categories also keeps track of the number of vehicles).
Bernhardt and Bulan are both considered to be analogous to the claimed invention because both are in the same field of determining traffic flow information as the claimed invention. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the processing means of Bernhardt with the teachings of Bulan to count the number of moving objects crossing at least one virtual barrier. One of ordinary skill in the art would have been motivated to make this modification so that vehicle categories can be used to further facilitate future searches or rapid retrieval of evidentiary imagery to establish traffic violations (Bulan, at least one para. 0061).
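For illustration only, the virtual-barrier crossing count addressed in the rejection of claim 12 can be sketched in a few lines of Python. The sketch is not taken from Bernhardt or Bulan: the barrier is assumed to be a 2-D line segment (treated as an infinite line for simplicity) and each tracked object a sequence of centroid coordinates.

```python
# Hypothetical sketch: counting tracked objects that cross a virtual barrier.
# The barrier is a pair of 2-D points; a track is a list of (x, y) centroids.

def side(p, a, b):
    """Sign of the cross product: which side of line a->b the point p lies on."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def count_crossings(tracks, barrier):
    """Count tracks whose consecutive centroids switch sides of the barrier."""
    a, b = barrier
    crossings = 0
    for track in tracks:
        for p, q in zip(track, track[1:]):
            if side(p, a, b) * side(q, a, b) < 0:  # sign change => crossing
                crossings += 1
                break  # count each track at most once
    return crossings
```

A vertical barrier from (0, 0) to (0, 10), for example, is crossed by a track moving from (-1, 5) to (1, 5), but not by a track that stays on one side.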
Regarding claim 13, Bernhardt teaches the limitations of claim 1, upon which the instant claim depends, as discussed supra. Further, Bernhardt teaches the system of claim 1, wherein the processing means (Bernhardt, at least one para. 0029; “In an example embodiment, the processor 40 may be configured to execute instructions stored in the memory device 60 or otherwise accessible to the processor 40.”) is configured to:
determine a set of virtual barriers defining together an enclosed area within the traffic surface; and
determine a difference between moving objects entering the enclosed area and moving objects exiting said enclosed area;
wherein preferably the processing means is configured to:
assign a +1 value to moving objects entering the enclosed area;
assign a -1 value to moving objects exiting the enclosed area; and
determine the difference by summing all values assigned to said moving objects.
Bernhardt does not explicitly teach determine a set of virtual barriers defining together an enclosed area within the traffic surface; and
determine a difference between moving objects entering the enclosed area and moving objects exiting said enclosed area;
wherein preferably the processing means is configured to:
assign a +1 value to moving objects entering the enclosed area;
assign a -1 value to moving objects exiting the enclosed area; and
determine the difference by summing all values assigned to said moving objects.
Bulan, in the same field of endeavor (Bulan, at least one para. 0027; “In one embodiment of this disclosure, described is a computer implemented method of estimating a trajectory of a moving vehicle captured with an image capturing device and determining if the moving vehicle is moving in one of a permitted manner and an unpermitted manner, the image capturing device oriented to include a field of view spanning a vehicle detection target region”) teaches determine a set of virtual barriers defining together an enclosed area within the traffic surface (Bulan, at least one para. 0066; “Typical virtual target areas include, but are not limited to, single or multiple virtual polygons, usually one per monitored traffic lane.”); and
determine a difference between moving objects entering the enclosed area and moving objects exiting said enclosed area (Bulan, at least one para. 0066; “The region of interest may be the bus lane, and may include surrounding areas. The surrounding areas may be used for operations such as determining if a non-bus vehicle used the lane for turning. It may also be used to improve identification of vehicles by, for example, acquiring video images as the vehicle enters or exits the bus lane. Typical virtual target areas include, but are not limited to, single or multiple virtual polygons, usually one per monitored traffic lane. The location of the virtual target areas is typically input manually as it depends on the geometric configuration of the specific camera setup. The virtual polygon is used for both occlusion and vehicle detection.”);
wherein preferably the processing means is configured to: assign a +1 value to moving objects entering the enclosed area; assign a -1 value to moving objects exiting the enclosed area; and determine the difference by summing all values assigned to said moving objects (Bulan, at least one para. 0066; “The region of interest may be the bus lane, and may include surrounding areas. The surrounding areas may be used for operations such as determining if a non-bus vehicle used the lane for turning. It may also be used to improve identification of vehicles by, for example, acquiring video images as the vehicle enters or exits the bus lane. Typical virtual target areas include, but are not limited to, single or multiple virtual polygons, usually one per monitored traffic lane. The location of the virtual target areas is typically input manually as it depends on the geometric configuration of the specific camera setup. The virtual polygon is used for both occlusion and vehicle detection.”) and (Bulan, at least one para. 0118; “Some portions of the detailed description herein are presented in terms of algorithms and symbolic representations of operations on data bits performed by conventional computer components, including a central processing unit (CPU), memory storage devices for the CPU, and connected display devices. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is generally perceived as a self-consistent sequence of steps leading to a desired result.”) .
Bernhardt and Bulan are both considered to be analogous to the claimed invention because both are in the same field of determining traffic flow information as the claimed invention. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the processing means of Bernhardt with the teachings of Bulan to determine the number of moving objects within the target area at any given time. The claim would have been obvious because “a person of ordinary skill has good reason to pursue the known options within his or her technical grasp. If this leads to the anticipated success, it is likely the product not of innovation but of ordinary skill and common sense” (KSR).
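For illustration only, the +1/-1 counting scheme recited in claim 13 can be sketched as follows. The sketch is not from the cited references: the enclosed area defined by the set of virtual barriers is assumed to be a polygon, and entries and exits are detected with a standard ray-casting point-in-polygon test.

```python
# Hypothetical sketch: net count of objects inside an enclosed area,
# summing +1 per entry and -1 per exit as recited in claim 13.

def inside(p, polygon):
    """Ray-casting point-in-polygon test; polygon is a list of (x, y) vertices."""
    x, y = p
    hit = False
    n = len(polygon)
    for i in range(n):
        (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            hit = not hit
    return hit

def net_count(tracks, polygon):
    """Sum the +1/-1 values assigned to entering/exiting objects."""
    total = 0
    for track in tracks:
        for p, q in zip(track, track[1:]):
            was, now = inside(p, polygon), inside(q, polygon)
            if now and not was:
                total += 1  # object entered the enclosed area: +1
            elif was and not now:
                total -= 1  # object exited the enclosed area: -1
    return total
```

With a square area, one track entering and one track exiting sum to a net difference of zero, matching the claimed summation of assigned values.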
Regarding claim 15, Bernhardt teaches the limitations of claim 1, upon which the instant claim depends, as discussed supra. Further, Bernhardt teaches the system of claim 1, wherein the processing means (Bernhardt, at least one para. 0029; “In an example embodiment, the processor 40 may be configured to execute instructions stored in the memory device 60 or otherwise accessible to the processor 40.”) is configured to:
detect vehicles on the road surface based on the sensed sequence of signals over time; and
determine at least one lane of the road surface in the sensed sequence of signals over time, optionally using the external data.
Bernhardt does not explicitly teach detect vehicles on the road surface based on the sensed sequence of signals over time; and
determine at least one lane of the road surface in the sensed sequence of signals over time, optionally using the external data.
Bulan, in the same field of endeavor (Bulan, at least one para. 0027; “In one embodiment of this disclosure, described is a computer implemented method of estimating a trajectory of a moving vehicle captured with an image capturing device and determining if the moving vehicle is moving in one of a permitted manner and an unpermitted manner, the image capturing device oriented to include a field of view spanning a vehicle detection target region”) teaches detect vehicles on the road surface based on the sensed sequence of signals over time (Bulan, at least one para. 0101; “An image sequence was captured using Vivotek IP8352 surveillance camera on a local road. The captured image sequence had a frame rate of 30 fps and a resolution of 900.times.720 pixels. The captured image sequence included an instance of an automobile going straight across an intersection and another instance of an automobile making a right turn at the intersection.”); and
determine at least one lane of the road surface in the sensed sequence of signals over time, optionally using the external data (Bulan, at least one para. 0102; “Then, a centroid tracking algorithm was executed using the active motion vectors as previously described in Step 6. FIG. 20 shows the image sequence of an automobile going across the intersection.”).
Bernhardt and Bulan are both considered to be analogous to the claimed invention because both are in the same field of determining traffic flow information as the claimed invention. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the processing means of Bernhardt with the teachings of Bulan to detect vehicles on the road surface and determine at least one lane. One of ordinary skill in the art would have been motivated to make this modification so that other traffic violations can be detected (Bulan, at least one para. 0103).
Regarding claim 16, the combination of Bernhardt and Bulan teaches the limitations of claim 15, upon which the instant claim depends, as discussed supra. Further, Bernhardt teaches the system of claim 15, wherein the processing means (Bernhardt, at least one para. 0029; “In an example embodiment, the processor 40 may be configured to execute instructions stored in the memory device 60 or otherwise accessible to the processor 40.”) is configured to determine if a lane of the determined at least one lane has a side turn optionally using the external data, and if yes, determine a first virtual barrier before the side turn and a second virtual barrier after the side turn.
Bernhardt does not explicitly teach determine if a lane of the determined at least one lane has a side turn optionally using the external data, and if yes, determine a first virtual barrier before the side turn and a second virtual barrier after the side turn.
Bulan, in the same field of endeavor (Bulan, at least one para. 0027; “In one embodiment of this disclosure, described is a computer implemented method of estimating a trajectory of a moving vehicle captured with an image capturing device and determining if the moving vehicle is moving in one of a permitted manner and an unpermitted manner, the image capturing device oriented to include a field of view spanning a vehicle detection target region”) teaches determine if a lane of the determined at least one lane has a side turn optionally using the external data, and if yes, determine a first virtual barrier before the side turn and a second virtual barrier after the side turn (Bulan, at least one para. 0102; “FIG. 22 shows an image sequence of an automobile making a right turn at the intersection and FIG. 23 shows its calculated trajectory. As shown in FIGS. 21 and 23, a non-bus vehicle driving in the bus lane and making a right turn can be distinguished from a violator from its calculated trajectory.”, wherein the left and right edges of the region of interest are the first and second virtual barriers).
Bernhardt and Bulan are both considered to be analogous to the claimed invention because both are in the same field of determining traffic flow information as the claimed invention. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the processing means of Bernhardt with the teachings of Bulan to determine a first virtual barrier before a side turn and a second virtual barrier after the side turn. One of ordinary skill in the art would have been motivated to make this modification so that other traffic violations can be detected (Bulan, at least one para. 0103).
Regarding claim 19, the combination of Bernhardt and Bulan teaches the limitations of claim 15, upon which the instant claim depends, as discussed supra. Further, Bernhardt teaches the system of claim 15, wherein the processing means (Bernhardt, at least one para. 0029; “In an example embodiment, the processor 40 may be configured to execute instructions stored in the memory device 60 or otherwise accessible to the processor 40.”) is configured to:
determine at least one virtual barrier within the determined at least one lane; and
determine when a detected vehicle passes the at least one virtual barrier.
Bernhardt does not explicitly teach determine at least one virtual barrier within the determined at least one lane; and
determine when a detected vehicle passes the at least one virtual barrier.
Bulan, in the same field of endeavor (Bulan, at least one para. 0027; “In one embodiment of this disclosure, described is a computer implemented method of estimating a trajectory of a moving vehicle captured with an image capturing device and determining if the moving vehicle is moving in one of a permitted manner and an unpermitted manner, the image capturing device oriented to include a field of view spanning a vehicle detection target region”) teaches determine at least one virtual barrier within the determined at least one lane (Bulan, at least one para. 0066; “Typical virtual target areas include, but are not limited to, single or multiple virtual polygons, usually one per monitored traffic lane.”); and
determine when a detected vehicle passes the at least one virtual barrier (Bulan, at least one para. 0066; “The region of interest may be the bus lane, and may include surrounding areas. The surrounding areas may be used for operations such as determining if a non-bus vehicle used the lane for turning. It may also be used to improve identification of vehicles by, for example, acquiring video images as the vehicle enters or exits the bus lane.”).
Bernhardt and Bulan are both considered to be analogous to the claimed invention because both are in the same field of determining traffic flow information as the claimed invention. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the processing means of Bernhardt with the teachings of Bulan to determine when a detected vehicle passes at least one virtual barrier. One of ordinary skill in the art would have been motivated to make this modification so that other traffic violations can be detected (Bulan, at least one para. 0103).
Regarding claim 32, Bernhardt teaches the limitations of claim 1, upon which the instant claim depends, as discussed supra. Further, Bernhardt teaches the system of claim 1, wherein the processing means (Bernhardt, at least one para. 0029; “In an example embodiment, the processor 40 may be configured to execute instructions stored in the memory device 60 or otherwise accessible to the processor 40.”) is configured to:
detect a stationary vehicle on the road surface based on the sensed sequence of signals over time;
detect persons in a portion of the area that surrounds the stationary vehicle based on the sensed sequence of signals over time; and
determine an amount of detected persons in the sensed sequence of signals over time that enter or exit the stationary vehicle; and/or
wherein the processing means is configured to:
detect persons in a portion of the area that surrounds the road surface based on the sensed sequence of signals over time; and
determine an amount of detected persons in the sensed sequence of signals over time; and/or wherein the processing means is configured to:
detect persons on the pedestrian surface based on the sensed sequence of signals over time; and
determine an amount of detected persons in the sensed sequence of signals over time.
Bernhardt does not explicitly teach detect a stationary vehicle on the road surface based on the sensed sequence of signals over time;
detect persons in a portion of the area that surrounds the stationary vehicle based on the sensed sequence of signals over time; and
determine an amount of detected persons in the sensed sequence of signals over time that enter or exit the stationary vehicle; and/or
wherein the processing means is configured to:
detect persons in a portion of the area that surrounds the road surface based on the sensed sequence of signals over time; and
determine an amount of detected persons in the sensed sequence of signals over time; and/or wherein the processing means is configured to:
detect persons on the pedestrian surface based on the sensed sequence of signals over time; and
determine an amount of detected persons in the sensed sequence of signals over time.
Bulan, in the same field of endeavor (Bulan, at least one para. 0027; “In one embodiment of this disclosure, described is a computer implemented method of estimating a trajectory of a moving vehicle captured with an image capturing device and determining if the moving vehicle is moving in one of a permitted manner and an unpermitted manner, the image capturing device oriented to include a field of view spanning a vehicle detection target region”) teaches detect a stationary vehicle on the road surface based on the sensed sequence of signals over time (Bulan, at least one para. 0007; “Another common exception is that a non-emergency non-bus vehicle can drive in a bus line to make a quick drop off or pick up of a passenger(s).”; in other words, when a passenger is dropped off or picked up, the vehicle becomes stationary);
detect persons in a portion of the area that surrounds the stationary vehicle based on the sensed sequence of signals over time (Bulan, at least one para. 0007; “Another common exception is that a non-emergency non-bus vehicle can drive in a bus line to make a quick drop off or pick up of a passenger(s).”; in other words, when a vehicle becomes stationary, the sensing means is able to detect a passenger getting off of or onto the stationary vehicle); and
determine an amount of detected persons in the sensed sequence of signals over time that enter or exit the stationary vehicle (Bulan, at least one para. 0007; “Another common exception is that a non-emergency non-bus vehicle can drive in a bus line to make a quick drop off or pick up of a passenger(s).”; wherein the term “passenger(s)” indicates there can be more than one passenger); and/or
wherein the processing means is configured to:
detect persons in a portion of the area that surrounds the road surface based on the sensed sequence of signals over time; and
determine an amount of detected persons in the sensed sequence of signals over time; and/or wherein the processing means is configured to:
detect persons on the pedestrian surface based on the sensed sequence of signals over time; and
determine an amount of detected persons in the sensed sequence of signals over time.
Bernhardt and Bulan are both considered to be analogous to the claimed invention because both are in the same field of determining traffic flow information as the claimed invention. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the processing means of Bernhardt with the teachings of Bulan to detect passenger pick-up or drop-off at a stationary vehicle. One of ordinary skill in the art would have been motivated to make this modification so that a non-emergency, non-bus vehicle does not receive a traffic violation based on the passenger drop-off or pick-up exception (Bulan, at least one para. 0007).
Claim(s) 17, 20-21, 23, and 27 are rejected under 35 U.S.C. 103 as being unpatentable over Bernhardt (US 20170186314 A1), and further in view of Bulan (US 20150117704 A1) and HEILBRON (US 20230236037 A1).
Regarding claim 17, the combination of Bernhardt and Bulan teaches the limitations of claim 15, upon which the instant claim depends, as discussed supra. Further, Bernhardt teaches the system of claim 15, wherein the processing means (Bernhardt, at least one para. 0029; “In an example embodiment, the processor 40 may be configured to execute instructions stored in the memory device 60 or otherwise accessible to the processor 40.”) is configured to:
determine at least one first virtual barrier within the determined at least one lane;
determine at least one second virtual barrier within the determined at least one lane;
measure a time difference between a first time at which a detected vehicle passes one of the at least one first virtual barrier and a second time at which the detected vehicle passes one of the at least one second virtual barrier; and
determine an average speed of the detected vehicle using the external data and the time difference;
wherein preferably the processing means is configured to:
determine a calibration function using a set of known average speeds from the external data and a set of measured time differences corresponding to the set of known average speeds; and
determine the average speed of the detected vehicle using the calibration function.
Bernhardt does not explicitly teach determine at least one first virtual barrier within the determined at least one lane;
determine at least one second virtual barrier within the determined at least one lane;
measure a time difference between a first time at which a detected vehicle passes one of the at least one first virtual barrier and a second time at which the detected vehicle passes one of the at least one second virtual barrier; and
determine an average speed of the detected vehicle using the external data and the time difference;
wherein preferably the processing means is configured to:
determine a calibration function using a set of known average speeds from the external data and a set of measured time differences corresponding to the set of known average speeds; and
determine the average speed of the detected vehicle using the calibration function.
Bulan, in the same field of endeavor (Bulan, at least one para. 0027; “In one embodiment of this disclosure, described is a computer implemented method of estimating a trajectory of a moving vehicle captured with an image capturing device and determining if the moving vehicle is moving in one of a permitted manner and an unpermitted manner, the image capturing device oriented to include a field of view spanning a vehicle detection target region”) teaches determine at least one first virtual barrier within the determined at least one lane (Bulan, at least one para. 0092; “Centroid tracking is illustrated in FIGS. 17-19 where the images in FIG. 17 show time instances of a non-bus vehicle, i.e., a car, passing across a scene being monitored. The images in FIG. 18 are the corresponding active motion vectors, i.e., compression type, at each time instance shown in FIG. 17.”);
determine at least one second virtual barrier within the determined at least one lane (Bulan, at least one para. 0092; “Centroid tracking is illustrated in FIGS. 17-19 where the images in FIG. 17 show time instances of a non-bus vehicle, i.e., a car, passing across a scene being monitored. The images in FIG. 18 are the corresponding active motion vectors, i.e., compression type, at each time instance shown in FIG. 17.”);
measure a time difference between a first time at which a detected vehicle passes one of the at least one first virtual barrier and a second time at which the detected vehicle passes one of the at least one second virtual barrier (Bulan, at least one para. 0093; “The image in FIG. 19 shows all the calculated centroids at different time instances on an image.”).
Bernhardt and Bulan are both considered to be analogous to the claimed invention because both are in the same field of determining traffic flow information as the claimed invention. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the processing means of Bernhardt with the teachings of Bulan to track a moving object relative to the virtual barriers at different time instants. One of ordinary skill in the art would have been motivated to make this modification so that the estimated trajectory of the vehicle can be calculated (Bulan, at least one para. 0093).
The combination of Bernhardt and Bulan does not explicitly teach determine an average speed of the detected vehicle using the external data and the time difference;
wherein preferably the processing means is configured to:
determine a calibration function using a set of known average speeds from the external data and a set of measured time differences corresponding to the set of known average speeds; and
determine the average speed of the detected vehicle using the calibration function.
HEILBRON, in the same field of endeavor (HEILBRON, at least one para. 0002; “The present disclosure relates generally to vehicle navigation and, more specifically, to systems and methods for generating a common speed profile for use in navigation by a host vehicle.”) teaches determine an average speed of the detected vehicle using the external data and the time difference (HEILBRON, at least one para. 0407; “Aggregated common speed profile 3200 may be represented in various forms. In some embodiments, aggregated common speed profile 3200 may include stored speed values for all vehicles at each position indicator, as shown in FIG. 32. In other words, for each position indicator along road segment 3010, server 3040 may store a group of speed values collected from vehicles traveling along the road segment at that position indicator. In some embodiments, server 3040 may further process these stored speed values. For example, this may include averaging the collected speed values at each of the position indicators. As shown in FIG. 32 this may include determining an aggregated common speed at each of the position indicators. For example, for position indicator 3230, server 3040 may determine a common speed value 3230, which may represent an average of speeds for different vehicles associated with position indicator 3230. Common speed values may similarly be determined for each of the position indicators, as shown. In some embodiments, server 3040 may determine a model or curve 3220 representing these common speed values. For example, this may be a line of best fit, such as a linear, polynomial, exponential, logarithmic, or power curve used to model the common speeds of the vehicles overtime. Accordingly, when navigating along road segment 3010 at a location associated with position 3230, host vehicle 3010 may use speed value 3232 as a target speed for navigation”) and (HEILBRON, at least one para. 0399; “Alternatively or additionally, the speed data may be represented by position or location information with associated timestamp information. Accordingly, server 3030 may determine the speed data based on the change in position of vehicles 3020 and 3030 over time.”);
wherein preferably the processing means is configured to:
determine a calibration function using a set of known average speeds from the external data and a set of measured time differences corresponding to the set of known average speeds (HEILBRON, at least one para. 0407; “server 3040 may further process these stored speed values. For example, this may include averaging the collected speed values at each of the position indicators. As shown in FIG. 32 this may include determining an aggregated common speed at each of the position indicators. For example, for position indicator 3230, server 3040 may determine a common speed value 3230, which may represent an average of speeds for different vehicles associated with position indicator 3230. Common speed values may similarly be determined for each of the position indicators, as shown. In some embodiments, server 3040 may determine a model or curve 3220 representing these common speed values.”); and
determine the average speed of the detected vehicle using the calibration function (HEILBRON, at least one para. 0407; “Accordingly, when navigating along road segment 3010 at a location associated with position 3230, host vehicle 3010 may use speed value 3232 as a target speed for navigation”).
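For clarity of the record, the calibration recited in this limitation can be illustrated with a short sketch. This example is illustrative only and is not drawn from Bernhardt, Bulan, or HEILBRON; all function names, variable names, and numeric values are hypothetical. Under the assumption of a simple linear relationship v = d / Δt, a set of known average speeds from external data and the corresponding measured time differences determine an effective barrier-to-barrier distance, which then converts any new time difference into an average speed:

```python
# Hypothetical sketch (not from the cited references): calibrate an
# effective distance between two virtual barriers from known average
# speeds and measured barrier-crossing time differences, then use it
# to estimate a new vehicle's average speed.

def fit_calibrated_distance(known_speeds, time_diffs):
    """Fit d in v = d / dt by averaging v * dt over the calibration set."""
    samples = [v * dt for v, dt in zip(known_speeds, time_diffs)]
    return sum(samples) / len(samples)

def average_speed(calibrated_distance, time_diff):
    """Average speed between the two virtual barriers."""
    return calibrated_distance / time_diff

# Example: three calibration vehicles with known speeds (m/s) and
# measured time differences (s); the barriers are ~30 m apart.
speeds = [15.0, 10.0, 12.0]
dts = [2.0, 3.0, 2.5]
d = fit_calibrated_distance(speeds, dts)   # 30.0 m
v = average_speed(d, 2.4)                  # 12.5 m/s for a 2.4 s crossing
```

The sketch shows only the arithmetic implied by the claim language; a deployed system could equally fit a nonlinear calibration curve, as the HEILBRON passage's "line of best fit" discussion suggests.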
The combination of Bernhardt, Bulan, and HEILBRON is considered to be analogous to the claimed invention because all of them are in the same field of determining traffic flow information as the claimed invention. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the processing means of Bernhardt with the teaching of HEILBRON to track the moving object within at least one virtual barrier with respect to the average speed of the moving object. One of ordinary skill in the art would have been motivated to make this modification so that a target speed can be determined for the moving vehicle to safely navigate within the at least one virtual barrier (HEILBRON, at least one para. 0407).
Regarding claim 20, the combination of Bernhardt and Bulan teaches the limitations of claim 15, upon which the instant claim depends, as discussed supra. Further, Bernhardt teaches (Currently Amended) The system of claim 15, wherein the determined at least one lane comprises at least two determined lanes (Bernhardt, at least one para. 0046; “Once the probe data is collected, the categorization of the vehicle may identify the probe detections from each vehicle individually. The probe data may be sorted for each vehicle by timestamp, to properly determine, for example, an intersection entry versus an intersection exit, or multiple intersection entrances and exits, at 420. The collected probe data may be matched to data on a road network, identifying the lanes that the vehicle was in while entering the intersection, exiting the intersection, and potentially intermediate points in the intersection at 430.”);
wherein the processing means is configured to:
determine within each of the determined at least two lanes a first virtual barrier and a second virtual barrier;
measure for each of the determined at least two lanes a respective time difference between a first time at which a respective detected vehicle passes the first virtual barrier and a second time at which the respective detected vehicle passes the second virtual barrier; and
determine a respective average speed of each of the respective detected vehicles on each of the determined at least two lanes using the external data and the at least two respective time differences.
Bernhardt does not explicitly teach wherein the processing means is configured to:
determine within each of the determined at least two lanes a first virtual barrier and a second virtual barrier;
measure for each of the determined at least two lanes a respective time difference between a first time at which a respective detected vehicle passes the first virtual barrier and a second time at which the respective detected vehicle passes the second virtual barrier; and
determine a respective average speed of each of the respective detected vehicles on each of the determined at least two lanes using the external data and the at least two respective time differences.
Bulan, in the same field of endeavor (Bulan, at least one para. 0027; “In one embodiment of this disclosure, described is a computer implemented method of estimating a trajectory of a moving vehicle captured with an image capturing device and determining if the moving vehicle is moving in one of a permitted manner and an unpermitted manner, the image capturing device oriented to include a field of view spanning a vehicle detection target region”) teaches wherein the processing means is configured to:
determine within each of the determined at least two lanes a first virtual barrier and a second virtual barrier (Bulan, at least one para. 0030; “FIG. 1 is a diagram of a plurality of traffic lanes including a bus-only lane.”) and (Bulan, at least one para. 0092; “Centroid tracking is illustrated in FIGS. 17-19 where the images in FIG. 17 show time instances of a non-bus vehicle, i.e., a car, passing across a scene being monitored. The images in FIG. 18 are the corresponding active motion vectors, i.e., compression type, at each time instance shown in FIG. 17.”);
measure for each of the determined at least two lanes a respective time difference between a first time at which a respective detected vehicle passes the first virtual barrier and a second time at which the respective detected vehicle passes the second virtual barrier; and
determine a respective average speed of each of the respective detected vehicles on each of the determined at least two lanes using the external data and the at least two respective time differences.
Bernhardt and Bulan are both considered to be analogous to the claimed invention because both are in the same field of determining traffic flow information as the claimed invention. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the multiple lane sensing means of Bernhardt with the teaching of Bulan to track the moving object within at least one virtual barrier with respect to multiple lanes. One of ordinary skill in the art would have been motivated to make this modification in order to calculate the estimated trajectory of multiple vehicles across multiple lanes (Bulan, at least one para. 0093).
The combination of Bernhardt and Bulan does not explicitly teach measure for each of the determined at least two lanes a respective time difference between a first time at which a respective detected vehicle passes the first virtual barrier and a second time at which the respective detected vehicle passes the second virtual barrier; and
determine a respective average speed of each of the respective detected vehicles on each of the determined at least two lanes using the external data and the at least two respective time differences.
HEILBRON, in the same field of endeavor (HEILBRON, at least one para. 0002; “The present disclosure relates generally to vehicle navigation and, more specifically, to systems and methods for generating a common speed profile for use in navigation by a host vehicle.”) teaches measure for each of the determined at least two lanes a respective time difference between a first time at which a respective detected vehicle passes the first virtual barrier and a second time at which the respective detected vehicle passes the second virtual barrier (HEILBRON, at least one para. 0399; “Alternatively or additionally, the speed data may be represented by position or location information with associated timestamp information. Accordingly, server 3030 may determine the speed data based on the change in position of vehicles 3020 and 3030 over time.”); and
determine a respective average speed of each of the respective detected vehicles on each of the determined at least two lanes using the external data and the at least two respective time differences (HEILBRON, at least one para. 0407; “As shown in FIG. 32 this may include determining an aggregated common speed at each of the position indicators. For example, for position indicator 3230, server 3040 may determine a common speed value 3230, which may represent an average of speeds for different vehicles associated with position indicator 3230. Common speed values may similarly be determined for each of the position indicators”).
The combination of Bernhardt, Bulan, and HEILBRON is considered to be analogous to the claimed invention because all of them are in the same field of determining traffic flow information as the claimed invention. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the multiple lane sensing means of Bernhardt with the teaching of HEILBRON to track the moving object within at least one virtual barrier with respect to the average speed of the moving object. One of ordinary skill in the art would have been motivated to make this modification so that a target speed can be determined for the moving vehicle to safely navigate within the at least one virtual barrier (HEILBRON, at least one para. 0407).
Regarding claim 21, the combination of Bernhardt, Bulan, and HEILBRON teaches the limitations of claim 20, upon which the instant claim depends, as discussed supra. Further, HEILBRON teaches (Currently Amended) The system of claim 20, wherein the processing means is configured to:
acquire a known average speed from the external data over the determined at least two lanes (HEILBRON, at least one para. 0407; “Aggregated common speed profile 3200 may be represented in various forms. In some embodiments, aggregated common speed profile 3200 may include stored speed values for all vehicles at each position indicator, as shown in FIG. 32. In other words, for each position indicator along road segment 3010, server 3040 may store a group of speed values collected from vehicles traveling along the road segment at that position indicator.”);
average the at least two respective time differences to obtain an average time difference over the determined at least two lanes (HEILBRON, at least one para. 0407; “In some embodiments, server 3040 may further process these stored speed values. For example, this may include averaging the collected speed values at each of the position indicators. As shown in FIG. 32 this may include determining an aggregated common speed at each of the position indicators.”); and
determine the respective average speed of each of the respective detected vehicles on each of the determined at least two lanes by comparing the average time difference with the known average speed (HEILBRON, at least one para. 0407; “Accordingly, when navigating along road segment 3010 at a location associated with position 3230, host vehicle 3010 may use speed value 3232 as a target speed for navigation”);
wherein preferably the processing means is configured to:
calibrate the average time difference with the known average speed to determine a calibrated distance between the first virtual barrier and the second virtual barrier (HEILBRON, at least one para. 0407; “In some embodiments, server 3040 may determine a model or curve 3220 representing these common speed values. For example, this may be a line of best fit, such as a linear, polynomial, exponential, logarithmic, or power curve used to model the common speeds of the vehicles overtime”); and
determine the respective average speed of each of the respective detected vehicles on each of the determined at least two lanes by dividing the calibrated distance by the respective time difference (HEILBRON, at least one para. 0407; “In some embodiments, server 3040 may determine a model or curve 3220 representing these common speed values. For example, this may be a line of best fit, such as a linear, polynomial, exponential, logarithmic, or power curve used to model the common speeds of the vehicles overtime”).
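For clarity of the record, the per-lane computation recited in claim 21 can be illustrated with a short sketch. This example is illustrative only and is not drawn from the cited references; all names and values are hypothetical. The per-lane time differences are averaged, the average is calibrated against a known average speed from the external data to obtain a calibrated distance, and each lane's speed follows from dividing that distance by the lane's own time difference:

```python
# Hypothetical sketch (not from the cited references) of claim 21's
# per-lane arithmetic: average the per-lane time differences, calibrate
# a distance from a known average speed, then divide per lane.

def per_lane_speeds(known_avg_speed, lane_time_diffs):
    """Return one average speed per lane from a shared calibration."""
    avg_dt = sum(lane_time_diffs) / len(lane_time_diffs)
    calibrated_distance = known_avg_speed * avg_dt  # d = v_known * mean(dt)
    return [calibrated_distance / dt for dt in lane_time_diffs]

# Example: external data reports 12 m/s averaged over two lanes whose
# measured time differences are 2.0 s and 3.0 s (mean 2.5 s, d = 30 m).
lane_speeds = per_lane_speeds(12.0, [2.0, 3.0])  # [15.0, 10.0] m/s
```

The sketch reflects only the sequence of operations recited in the claim; any consistent unit system and any number of lanes would work equally.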
Regarding claim 23, the combination of Bernhardt and Bulan teaches the limitations of claim 15, upon which the instant claim depends, as discussed supra. Further, Bernhardt teaches (Currently Amended) The system of claim 15, wherein the processing means (Bernhardt, at least one para. 0029; “In an example embodiment, the processor 40 may be configured to execute instructions stored in the memory device 60 or otherwise accessible to the processor 40.”) is configured to:
receive a position and sensing direction of the sensing means; and
determine the at least one lane of the road surface in the sensed sequence of signals over time further using the position and sensing direction; and/or
wherein the processing means is configured to determine a type of each of the determined at least one lane optionally using the external data; and/or
wherein the processing means is configured to associate each of the detected vehicles with a corresponding lane of the determined at least one lane, wherein preferably the processing means is configured to classify portions of the signals belonging to each of the detected vehicles in the sensed sequence of signals over time into respective lanes of the determined at least one lane; and/or
wherein the processing means is configured to determine a movement direction of the detected vehicles within the determined at least one lane.
The combination of Bernhardt and Bulan does not explicitly teach receive a position and sensing direction of the sensing means; and
determine the at least one lane of the road surface in the sensed sequence of signals over time further using the position and sensing direction; and/or
wherein the processing means is configured to determine a type of each of the determined at least one lane optionally using the external data; and/or
wherein the processing means is configured to associate each of the detected vehicles with a corresponding lane of the determined at least one lane, wherein preferably the processing means is configured to classify portions of the signals belonging to each of the detected vehicles in the sensed sequence of signals over time into respective lanes of the determined at least one lane; and/or
wherein the processing means is configured to determine a movement direction of the detected vehicles within the determined at least one lane.
HEILBRON, in the same field of endeavor (HEILBRON, at least one para. 0002; “The present disclosure relates generally to vehicle navigation and, more specifically, to systems and methods for generating a common speed profile for use in navigation by a host vehicle.”) teaches receive a position and sensing direction of the sensing means (HEILBRON, at least one para. 0145; “monocular image analysis module 402 may include instructions for detecting a set of features within the set of images, such as lane markings, vehicles, pedestrians, road signs, highway exit ramps, traffic lights, hazardous objects, and any other feature associated with an environment of a vehicle. Based on the analysis, system 100 (e.g., via processing unit 110) may cause one or more navigational responses in vehicle 200, such as a turn, a lane shift, a change in acceleration, and the like, as discussed below in connection with navigational response module 408.”); and
determine the at least one lane of the road surface in the sensed sequence of signals over time further using the position and sensing direction (HEILBRON, at least one para. 0161; “processing unit 110 may consider the position and motion of other vehicles, the detected road edges and barriers, and/or general road shape descriptions extracted from map data (such as data from map database 160).”); and/or
The combination of Bernhardt, Bulan, and HEILBRON is considered to be analogous to the claimed invention because all of them are in the same field of determining traffic flow information as the claimed invention. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the processing means of Bernhardt with the teaching of HEILBRON. One of ordinary skill in the art would have been motivated to make this modification so that the processing unit is able to provide redundancy for detecting road marks and lane geometry and increase reliability of the system (HEILBRON, at least one para. 0161).
Regarding claim 27, The combination of Bernhardt, Bulan, and HEILBRON teaches the limitations of claim 23, upon which the instant claim depends, as discussed supra. Further, HEILBRON teaches (Currently Amended) The system of claim 23, wherein the external data comprises a map comprising the at least one lane, and the processing means is configured to determine the at least one lane by adjusting the map to the sensed sequence of signals over time using the position and sensing direction of the sensing means (HEILBRON, at least one para. 0161; “processing unit 110 may consider the position and motion of other vehicles, the detected road edges and barriers, and/or general road shape descriptions extracted from map data (such as data from map database 160).”).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to UPUL P CHANDRASIRI whose telephone number is (703) 756-5823. The examiner can normally be reached M-F, 8:30 am to 5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Christian Chace can be reached at 571-272-4190. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/U.P.C./Examiner, Art Unit 3665 /CHRISTIAN CHACE/Supervisory Patent Examiner, Art Unit 3665