DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 3/5/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(d):
(d) REFERENCE IN DEPENDENT FORMS.—Subject to subsection (e), a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.
The following is a quotation of pre-AIA 35 U.S.C. 112, fourth paragraph:
Subject to the following paragraph [i.e., the fifth paragraph of pre-AIA 35 U.S.C. 112], a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.
Claims 10 and 19 are rejected under 35 U.S.C. 112(d) or pre-AIA 35 U.S.C. 112, 4th paragraph, as being of improper dependent form for failing to further limit the subject matter of the claim upon which they depend, or for failing to include all the limitations of the claim upon which they depend. The limitation “wherein the vehicle is an autonomous vehicle, a semi-autonomous vehicle, or a non-autonomous vehicle” encompasses every level of vehicle automation, from fully autonomous to non-autonomous, and therefore does not further limit the parent claim. Applicant may cancel the claim(s), amend the claim(s) to place the claim(s) in proper dependent form, rewrite the claim(s) in independent form, or present a sufficient showing that the dependent claim(s) complies with the statutory requirements.
Conformance with 35 USC § 101
The presently examined claims were evaluated for subject matter eligibility under 35 U.S.C. 101 and the Alice/Mayo framework. The conclusion of the Alice/Mayo analysis is that the independent claims integrate any recited judicial exception into a practical application (and cannot be performed merely in the human mind) and are therefore patent eligible under 35 U.S.C. 101. See MPEP §2106, subsection III and MPEP §2106.04, subsection II(A).
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-20 are rejected under 35 U.S.C. 102(a)(1)/102(a)(2) as being anticipated by Naderi (US 2020/0336541 A1).
As to claim 1, corresponding method claim 11, and similar claim 20, Naderi discloses a vehicle, comprising:
a vehicle wireless communication equipment [Naderi: 0082, 0032-0033 “As another example, the sensor fusion and RWM management layer 212 may receive information from vehicle-to-vehicle (V2V) communications (such as via the CAN bus) regarding other vehicle positions and directions of travel, and combine that information with information from the radar perception layer 202 and the camera perception layer 204 to refine the locations and motions of other vehicles.”];
a perception system comprising at least one processor, at least one memory configured to store instructions [Naderi: 0024 #200], and a plurality of sensors positioned, or mounted, on a body of the vehicle [Naderi: #219 sensors, #202 Radar, #204 camera]; and
an indicium for connecting to the vehicle wireless communication equipment, the indicium displayed on the body of the vehicle and the indicium including information to connect with the vehicle wireless communication equipment [Naderi: 0119 “Similarly, when the owner/operator chooses to not make his or her vehicle available for gathering sensor data, the owner/operator may touch or press a button in the graphical user interface to turn sensor data acquisition and transmission “off” The status of the user selection, which may be stored as a state or flag in a register of memory, may indicate whether sensor data acquisition and transmission is or is not authorized at a given time.”], wherein:
the vehicle wireless communication equipment is configured to:
broadcast a network identifier of the vehicle wireless communication equipment [Naderi: 0054 “In some embodiments, the gathered sensor data may be sent to the data agency server in a message including gathered data, an identifier of the vehicle sending gathered data, and a location indication of the vehicle.”];
receive a request [Naderi: #902, 0157-0158], from a third-party client device [Naderi: 0158 “In block 902, the processor of the vehicle may receive a data gathering request. In various embodiments, a processor of a vehicle may receive a data gathering request, such as a data gathering request from a data agency server.”], to connect with the vehicle wireless communication equipment based upon the broadcasted network identifier [Naderi: 0054 “In some embodiments, the gathered sensor data may be sent to the data agency server in a message including gathered data, an identifier of the vehicle sending gathered data, and a location indication of the vehicle.” 0086 “preference to provide sensor data to the data agency server 410, and/or owner/operator identification information.”] or the indicium for connecting to the vehicle wireless communication equipment [Naderi: 0126 “Attributes of collection may enable third party client devices, such as data client servers, and the entities that control those devices to tailor requests for vehicle data.”];
in response to the received request, establish a session with the third-party client device [Naderi: 0135 “Processing may be optional as some data may be sent directly to an address of the data agency server that may make the vehicle sensor data directly available to a third-party entity, such as a data client server. In such a manner, the data agency server may support live streaming of vehicle sensor data to third party entities, such as data client servers.”]; and
receive a request from the third-party client device to receive personalized or custom traffic condition information based upon a specific position identified of a second vehicle associated with the third-party client device [Naderi: 0027 “Data from sensors uses to support autonomous and semi-autonomous vehicle navigation may be useful for other purposes, such as determining traffic volume, identifying road hazards, detecting and imaging potholes in the roadway, imaging scenes adjacent to the roadway, etc.”];
the at least one processor of the perception system is configured to [Naderi: Fig. 3 0093 system-on-chip SOC]:
based upon sensor data of the plurality of sensors of the perception system, generate the personalized or custom traffic condition information corresponding to the specific position of the second vehicle [Naderi: directs guidance to change lane based on traffic. 0084 “As a further example, the sensor fusion and RWM management layer 212 may use dynamic traffic control instructions directing the vehicle 100 to change speed, lane, direction of travel, or other navigational element(s), and combine that information with other received information to determine refined location and state information.” 0085 may be based on C-v2X communications.]; and
initiate transmission of the personalized or custom traffic condition information corresponding the specific position of the second vehicle to the third-party client device via the vehicle wireless communication equipment over the established session [Naderi: directs guidance to change lane based on traffic. 0084 “As a further example, the sensor fusion and RWM management layer 212 may use dynamic traffic control instructions directing the vehicle 100 to change speed, lane, direction of travel, or other navigational element(s), and combine that information with other received information to determine refined location and state information.” 0085 may be based on C-v2X communications.].
As to claim 2 and claim 12, Naderi discloses wherein the specific position of the second vehicle associated with the third-party client device is identified based upon an image transmitted to the third-party client device showing a plurality of vehicles and a respective position of each vehicle of the plurality of vehicles in proximity of the vehicle, the plurality of vehicles and the respective position of each vehicle of the plurality of vehicles being determined by the at least one processor of the perception system based upon the sensor data of the plurality of sensors of the perception system [Naderi: 0069 “if no capable vehicles are currently be at the certain location, the data agency server may select a vehicle with the appropriate camera and ability to move the camera to the certain height that is not currently parked at the location and send a data gathering request to that vehicle. The data gathering request may identify the certain location, the certain direction to park, the camera to use, and the certain height at which to record video. The vehicle may drive to the certain location, park in the certain direction, adjust the height of the camera, gather the video at the certain height, and send the video. The data agency server may make the video available to the data client server.”]
As to claim 3 and claim 13, Naderi discloses wherein the specific position of the second vehicle associated with the third-party client device is identified based upon a confirmation provided by a user of the third-party client device, the confirmation including a user input of a current speed of the second vehicle [Naderi: Fig. 4B Confirmation 460 provides data 462 and 464].
As to claim 4 and claim 14, Naderi discloses wherein the personalized or custom traffic condition information includes at least one of: (i) a number of vehicles in a specific distance in front of, or behind, the second vehicle in a driving lane of the second vehicle; (ii) an average speed of one or more vehicles in the specific distance in front of, or behind, the second vehicle in the driving lane of the second vehicle; (iii) an average speed of vehicles in one or more driving lanes different from the driving lane of the second vehicle; (iv) a lane switching movement of vehicles in proximity of the second vehicle; or (v) a current speed of each vehicle in proximity of the second vehicle [Naderi: 0036].
As to claim 5 and claim 15, Naderi discloses wherein the personalized or custom traffic condition information is transmitted as a text, an image [Naderi: 0125], a video [Naderi: 0125], or a graphics interchange format (GIF) file.
As to claim 6, Naderi discloses wherein the personalized or custom traffic condition information is transmitted periodically or in response to an event requiring an update to the personalized or custom traffic condition information [Naderi: Fig. 4B #462, #464, #466, #468].
As to claim 7, Naderi discloses wherein the event comprises at least one vehicle changing its respective driving lane or a change in an average speed of vehicles in any driving lane [Naderi: 0084 “As a further example, the sensor fusion and RWM management layer 212 may use dynamic traffic control instructions directing the vehicle 100 to change speed, lane, direction of travel, or other navigational element(s), and combine that information with other received information to determine refined location and state information.”].
As to claim 8 and claim 17, Naderi discloses wherein the plurality of sensors includes one or more cameras, one or more radio detection and ranging (RADAR) sensors [Naderi: #202], one or more light detection and ranging (LiDAR) sensors [Naderi: 0027], one or more microphones [Naderi: 0027], or one or more ultrasound sensors [Naderi: 0027].
As to claim 9 and claim 18, Naderi discloses wherein the indicium for connecting to the wireless communication equipment displayed on the body of the vehicle includes a quick response (QR) code or network identifier [Naderi: 0054, 0086, 0165].
As to claim 10 and claim 19, Naderi discloses wherein the vehicle is an autonomous vehicle, a semi-autonomous vehicle, or a non-autonomous vehicle [Naderi: 0027].
As to claim 16, Naderi discloses wherein the personalized or custom traffic condition information is transmitted periodically or in response to an event requiring an update to the personalized or custom traffic condition information [Naderi: Fig. 4B #462, #464, #466, #468], wherein the event comprises at least one vehicle changing its respective driving lane or a change in an average speed of vehicles in any driving lane [Naderi: 0084 “As a further example, the sensor fusion and RWM management layer 212 may use dynamic traffic control instructions directing the vehicle 100 to change speed, lane, direction of travel, or other navigational element(s), and combine that information with other received information to determine refined location and state information.”].
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US 2023/0102760 A1 discloses an apparatus and a method for detecting a traffic flow obstruction target in an autonomous vehicle. The apparatus detects information about at least one of a speed of another vehicle, a driving path of the other vehicle, or a position of the other vehicle. The apparatus calculates a degree to which the other vehicle interferes with traffic flow, based on the detected information and based on high definition map information stored in a memory. The apparatus selects a traffic flow obstruction target based on the degree to which the other vehicle interferes with the traffic flow, and detects a target causing bypass driving, which is present on a driving path, to enhance the continued operation of autonomous driving.
The examiner has pointed out particular references contained in the prior art of record in the body of this action for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. Applicant should consider the entire prior art as applicable as to the limitations of the claims. It is respectfully requested from the applicant, in preparing the response, to consider fully the entire references as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner.
Inquiry
Any inquiry concerning this communication or earlier communications from the examiner should be directed to FREDERICK M BRUSHABER whose telephone number is (313)446-4839. The examiner can normally be reached Monday-Friday 8am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Hunter Lonsberry can be reached at (571) 272-7298. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/FREDERICK M BRUSHABER/Primary Examiner, Art Unit 3665