Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). A certified copy of EP Application No. 23 218 534.8, filed on 12/20/2023, has been received.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 12/03/2024 has been considered by the examiner.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea (i.e., a mental process) without significantly more.
Claim 9. A method for a driver assistance system, comprising:
receiving a first user input from an occupant of a vehicle [insignificant extra-solution activity, i.e., data collection];
in response to the first user input, obtaining one or more images of a surrounding environment of the vehicle captured by an external camera unit [insignificant extra-solution activity, i.e., data collection];
based on the first user input, identifying one or more objects in the one or more images [mental process/step];
annotating the one or more images to highlight the one or more objects [mental process/step];
displaying at least one of the one or more annotated images by means of a display unit [insignificant extra-solution activity, i.e., displaying information];
receiving a second user input from the occupant of the vehicle, the second user input including a selection of at least one of the one or more highlighted objects [insignificant extra-solution activity, i.e., data collection]; and
causing an action to be performed in response to the second user input [insignificant extra-solution activity, i.e., displaying information].
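For illustration only, the recited steps, at the high level of generality characterized above, can be sketched as follows. This is a minimal sketch: every function name, data structure, and value below is a hypothetical placeholder and is not drawn from applicant’s disclosure or the cited references.

```python
# Hypothetical sketch of the claim 9 data flow; all names are invented.

def identify_objects(user_input, images):
    # "based on the first user input, identifying one or more objects"
    return [obj for img in images for obj in img["objects"]
            if user_input["query"] in obj]

def annotate(images, objects):
    # "annotating the one or more images to highlight the one or more objects"
    return [{"image": img,
             "highlighted": [o for o in img["objects"] if o in objects]}
            for img in images]

def assist(first_input, camera_images):
    # receiving input / obtaining images: data gathering (extra-solution)
    objects = identify_objects(first_input, camera_images)   # mental step
    annotated = annotate(camera_images, objects)             # mental step
    # displaying `annotated` on a display unit: extra-solution activity
    second_input = {"selection": annotated[0]["highlighted"][0]}  # data gathering
    return f"action on {second_input['selection']}"          # extra-solution

images = [{"objects": ["bridge", "tree"]}, {"objects": ["car"]}]
print(assist({"query": "bridge"}, images))
```

The two steps marked as mental processes are the ones a person could perform by looking at the image and drawing on it, as in the examiner’s bridge example below.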
101 Analysis – Step 1: Statutory Category – Yes
Claim 9 is directed to a method. Therefore, claim 9 is within at least one of the four statutory categories.
Step 2A, Prong one evaluation: Judicial Exception – Yes – Mental Processes
In Step 2A, Prong one of the 2019 Patent Eligibility Guidance (PEG), a claim is to be analyzed to determine whether it recites subject matter that falls within one of the following groups of abstract ideas: a) mathematical concepts, b) mental processes, and/or c) certain methods of organizing human activity.
The examiner submits that the limitations
based on the first user input, identifying one or more objects in the one or more images;
annotating the one or more images to highlight the one or more objects;
constitute judicial exceptions in terms of “mental processes” because, under their broadest reasonable interpretation, the limitations can be “performed in the human mind, or by a human using a pen and paper”. For example:
A person can visually identify a bridge in an image based on the user saying “Do you see the bridge in front of us?”.
The person can then use a pen to draw a box around the bridge on the image to highlight it.
Step 2A Prong two evaluation: Practical Application – No
In Step 2A, Prong two of the 2019 PEG, a claim is to be evaluated whether, as a whole, it integrates the recited judicial exception into a practical application. As noted in the 2019 PEG, it must be determined whether any additional elements in the claim beyond the abstract idea integrate the exception into a practical application in a manner that imposes a meaningful limit on the judicial exception. The courts have indicated that additional elements such as: merely using a computer to implement an abstract idea, adding insignificant extra-solution activity, or generally linking use of a judicial exception to a particular technological environment or field of use do not integrate a judicial exception into a “practical application”.
In the present case, the examiner submits that the limitations identified in brackets above recite additional elements that do not integrate the recited judicial exception into a practical application.
Regarding the additional limitations of “receiving a first user input from an occupant of a vehicle”, “in response to the first user input, obtaining one or more images of a surrounding environment of the vehicle captured by an external camera unit”, and “receiving a second user input from the occupant of the vehicle, the second user input including a selection of at least one of the one or more highlighted objects”, the examiner submits that these limitations are mere data gathering. The receiving and obtaining steps are recited at a high level of generality (i.e., as a general means of gathering user input or images for use in the annotating and identifying steps) and amount to mere data gathering, which is a form of insignificant extra-solution activity.
Regarding the additional limitations of “causing an action to be performed in response to the second user input” and “displaying at least one of the one or more annotated images by means of a display unit”, the examiner submits that these limitations are recited at a high level of generality and merely display information, which is a form of insignificant extra-solution activity.
Accordingly, even in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limitation on practicing the abstract idea.
Step 2B evaluation: Inventive concept – No
In Step 2B of the 2019 PEG, a claim is to be evaluated as to whether the claim, as a whole, amounts to significantly more than the recited exception, i.e., whether any additional element, or combination of additional elements, adds an inventive concept to the claim.
As discussed with respect to Step 2A, Prong Two, the additional elements in the claim amount to no more than mere data gathering and displaying information, which are considered insignificant extra-solution activities. The same analysis applies here in Step 2B: insignificant extra-solution activities can neither integrate a judicial exception into a practical application at Step 2A nor provide an inventive concept at Step 2B. Thus, claim 9 is ineligible.
Independent apparatus claims 1 and 18 recite limitations similar to those performed in the method of claim 9. Therefore, claims 1 and 18 are rejected under the same rationale used in the rejection of claim 9 as outlined above.
Dependent claims 2-8, 10-17, and 19-20 do not recite any further limitations that cause the claims to be patent eligible. Rather, the limitations of the dependent claims are directed toward additional aspects of the judicial exception and/or additional elements that merely use a computer to implement an abstract idea, add insignificant extra-solution activity, or generally link the use of the judicial exception to a particular technological environment or field of use, none of which integrates the judicial exception into a “practical application”.
Therefore, dependent claims 2-8, 10-17, and 19-20 are not patent eligible under the same rationale as provided for the rejection of independent claim 9. Accordingly, claims 1-20 are ineligible under 35 U.S.C. § 101.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-3 and 6-8 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Mohan (US PGPub 2020/0073520).
Regarding claim 1, Mohan discloses A driver assistance system of a vehicle, comprising:
a processing unit [Mohan ¶ 0038 "The microprocessor 202a may comprise suitable logic, circuitry, interfaces, and/or code that may be configured to execute a set of instructions stored in the memory 204."]; and
non-transitory memory storing instructions that when executed cause the processing unit to [Mohan ¶ 0038 "The microprocessor 202a may comprise suitable logic, circuitry, interfaces, and/or code that may be configured to execute a set of instructions stored in the memory 204."]:
receive a first user input from an occupant of the vehicle [Mohan ¶ 0017 "The disclosed media display system may include a first image sensor, inside the vehicle, to determine user information, such as a head position/an eye gaze of an occupant of the vehicle, to determine a direction-of-view of the occupant."];
in response to the first user input, obtain one or more images captured by one or more cameras of an external camera unit [Mohan ¶ 0017 "The media display system may further include a second image sensor, outside the vehicle, to capture a portion of a view surrounding the vehicle based on the determined direction-of-view of the occupant."];
identify, within the one or more images, one or more objects corresponding to the first user input [Mohan ¶ 0017 "The media display system further controls display of the captured portion of the view surrounding the vehicle on a display medium. Each image frame of a set of image frames in the displayed portion of the view surrounding the vehicle may include one or more objects-of-interest of the occupant."]; and
take one or more actions based on the one or more objects and the first user input [Mohan ¶ 0017 "The disclosed media display system may further receive a real-time or near-real time video feed from the content source to provide the AR/VR shopping experience to the occupant while travelling."].
Regarding claim 2, Mohan discloses The driver assistance system of claim 1, wherein the memory stores further instructions that when executed cause the processing unit to:
display the one or more images on a display device [Mohan ¶ 0017 "The media display system further controls display of the captured portion of the view surrounding the vehicle on a display medium. Each image frame of a set of image frames in the displayed portion of the view surrounding the vehicle may include one or more objects-of-interest of the occupant."]; and
receive second user input selecting at least one object of the one or more objects [Mohan ¶ 0017 "The disclosed media display system may receive a selection of desired object-of-interest from the occupant and may further recognize the selected object-of-interest and communicate with a content source related to the selected object-of-interest."], wherein the one or more actions are taken based on the selected at least one object of the one or more objects [Mohan ¶ 0017 "The disclosed media display system may further receive a real-time or near-real time video feed from the content source to provide the AR/VR shopping experience to the occupant while travelling."].
Regarding claim 3, Mohan discloses The driver assistance system of claim 2, wherein the selected at least one object is a place or object of interest, and wherein the action comprises at least one of:
providing information about the selected at least one object to the occupant of the vehicle [Mohan ¶ 0017 "The disclosed media display system may further receive a real-time or near-real time video feed from the content source to provide the AR/VR shopping experience to the occupant while travelling."], and
adding a location of the selected at least one object as a destination in a navigation system of the vehicle.
Regarding claim 7, Mohan discloses The driver assistance system of claim 1, wherein the non-transitory memory stores further instructions that when executed cause the processing unit to:
identify the one or more images from a larger set of images based on one or more secondary inputs relating to the first user input, wherein the one or more secondary inputs include at least one of:
eye-gaze tracking information obtained by interior vehicle cameras, the eye-gaze tracking information indicating a region in which the one or more objects are located; and
region information included in the first user input [Mohan ¶ 0075 "The first set of image frames 502 may include the second view surrounding the vehicle 102 which may be captured by the second image sensor 112 in the determined direction-of-view of the occupant 104."].
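For illustration only, the gaze-region filtering recited in claim 7 can be sketched as follows. This is a minimal sketch under stated assumptions: the heading angles, region bounds, and all names are invented for illustration and are not drawn from applicant’s disclosure or from Mohan.

```python
# Hypothetical sketch of selecting images by a gaze-derived region, as in
# the secondary-input limitation of claim 7; values are invented.

def filter_by_region(images, gaze_region):
    # Keep only frames whose camera heading falls inside the region the
    # occupant is looking at (e.g., from interior eye-gaze tracking).
    lo, hi = gaze_region
    return [img for img in images if lo <= img["heading_deg"] <= hi]

frames = [{"id": 1, "heading_deg": 10},
          {"id": 2, "heading_deg": 95},
          {"id": 3, "heading_deg": 100}]
print([f["id"] for f in filter_by_region(frames, (90, 180))])
```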
Regarding claim 8, Mohan discloses The driver assistance system of claim 1, wherein to obtain the one or more images, the processing unit is configured to cause the one or more cameras of the external camera unit to capture new images specific to the first user input [Mohan ¶ 0032 "the media display system 108 may be further configured to control the second image sensor 112 (disposed outside the vehicle 102) to capture a first portion of a second view surrounding the vehicle 102 in the determined direction-of-view of the occupant 104."].
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 4-5 and 9-17 are rejected under 35 U.S.C. 103 as being unpatentable over Mohan in view of Boykin (US PGPub 2018/0025636).
Regarding claim 4, Mohan as modified by Boykin teaches claim 2. Boykin further teaches wherein the selected at least one object is a vehicle or person, and the action comprises transmitting a message to the selected at least one object [Boykin ¶ 0067 "(v) Activating a hardware function—In addition to or instead of the activation of data recording (iii, above), equipment or systems may be activated depending on the detected content (e.g., automatic activation of the vehicle's light bar if a crash scene is detected)." The activated light bar conveys a message, namely that someone is there to help, to the vehicles and people involved in the crash.].
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention with a reasonable expectation of success to combine the object identification system using a first and second user input as taught by Mohan with transmitting a message to said object as taught by Boykin in order to create a more effective and usable system.
Regarding claim 5, Mohan as modified by Boykin teaches claim 2. Boykin further teaches wherein the selected at least one object is a vehicle or person, and the action comprises at least one of:
storing one or more images captured by the external camera unit in the non-transitory memory [Boykin ¶ 0065 " (iii) Activating data recording—An onboard storage device may start saving the information being captured by the camera device. Other camera devices in the vehicle can be triggered to start recording. The captured information may also be transmitted concurrently via the communication network to be stored at the police station or another location."];
adjusting one or more vehicle operation states; and
transmitting a message to an appropriate authority [Boykin ¶ 0063 "(i) Providing an alert—The officer may be given a visual and/or audible alert on his vehicle display that a positive match has been detected. An alert may also be sent via the communication network to the police station or other locations"].
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention with a reasonable expectation of success to combine the object identification system using a first and second user input as taught by Mohan with transmitting a message to an authority or storing images as taught by Boykin in order to create a more effective and usable system.
Regarding claim 9, Mohan teaches A method for a driver assistance system, comprising:
receiving a first user input from an occupant of a vehicle [Mohan ¶ 0017 "The disclosed media display system may include a first image sensor, inside the vehicle, to determine user information, such as a head position/an eye gaze of an occupant of the vehicle, to determine a direction-of-view of the occupant."];
in response to the first user input, obtaining one or more images of a surrounding environment of the vehicle captured by an external camera unit [Mohan ¶ 0017 "The media display system may further include a second image sensor, outside the vehicle, to capture a portion of a view surrounding the vehicle based on the determined direction-of-view of the occupant."];
receiving a second user input from the occupant of the vehicle, the second user input including a selection of at least one of the one or more highlighted objects [Mohan ¶ 0017 "The disclosed media display system may receive a selection of desired object-of-interest from the occupant and may further recognize the selected object-of-interest and communicate with a content source related to the selected object-of-interest."]; and
causing an action to be performed in response to the second user input [Mohan ¶ 0017 "The disclosed media display system may further receive a real-time or near-real time video feed from the content source to provide the AR/VR shopping experience to the occupant while travelling."].
Mohan does not teach based on the first user input, identifying one or more objects in the one or more images, annotating the one or more images to highlight the one or more objects, and displaying at least one of the one or more annotated images by means of a display unit.
However, in a related field of invention, Boykin does teach
based on the first user input, identifying one or more objects in the one or more images [Boykin ¶ 0071 "The collection and processing of image data may be stopped and started as desired by a user (e.g. an officer in the vehicle) entering a command (e.g., a voice command or a command that is typed/keyed/entered by touchpad) or pushing a button on the computer 12 or on a BWC 29, or by another incoming communication (e.g. from the alert dispatching source) instructing the computer 12 to cancel or start/resume the particular search/analysis."];
annotating the one or more images to highlight the one or more objects [Boykin ¶ 0064 "(ii) Displaying an image—A video or still image of the detected content may be displayed on the vehicle display. A snapshot (such as video frame 20, 30 or 40) can be displayed, highlighting or putting a bounding box (such as 21, 31, or 41) around the object detected"]; and
displaying at least one of the one or more annotated images by means of a display unit [Boykin ¶ 0064 "(ii) Displaying an image—A video or still image of the detected content may be displayed on the vehicle display. A snapshot (such as video frame 20, 30 or 40) can be displayed, highlighting or putting a bounding box (such as 21, 31, or 41) around the object detected"].
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention with a reasonable expectation of success to combine the object identification system using a first and second user input as taught by Mohan with displaying annotated images as taught by Boykin in order to create a more effective and usable system for identifying objects of interest.
Regarding claim 10, Mohan as modified by Boykin teaches claim 9. Mohan further teaches wherein the selected at least one object is a place or object of interest, and wherein the action comprises at least one of:
providing information about the selected at least one object to the occupant of the vehicle [Mohan ¶ 0017 "The disclosed media display system may further receive a real-time or near-real time video feed from the content source to provide the AR/VR shopping experience to the occupant while travelling."], and
adding a location of the selected at least one object as a destination in a navigation system of the vehicle.
Regarding claim 11, Mohan as modified by Boykin teaches claim 9. Boykin further teaches wherein the selected at least one object is a vehicle or person, and the action comprises transmitting a message to the selected at least one object [Boykin ¶ 0067 "(v) Activating a hardware function—In addition to or instead of the activation of data recording (iii, above), equipment or systems may be activated depending on the detected content (e.g., automatic activation of the vehicle's light bar if a crash scene is detected)." The activated light bar conveys a message, namely that someone is there to help, to the vehicles and people involved in the crash.].
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention with a reasonable expectation of success to combine the object identification system using a first and second user input as taught by Mohan with transmitting a message to said object as taught by Boykin in order to create a more effective and usable system for identifying objects of interest.
Regarding claim 12, Mohan as modified by Boykin teaches claim 9. Boykin further teaches wherein the selected at least one object is a vehicle or person, and the action comprises at least one of:
storing one or more images captured by the external camera unit in non-transitory memory [Boykin ¶ 0065 " (iii) Activating data recording—An onboard storage device may start saving the information being captured by the camera device. Other camera devices in the vehicle can be triggered to start recording. The captured information may also be transmitted concurrently via the communication network to be stored at the police station or another location."]; and
transmitting a message to an appropriate authority [Boykin ¶ 0063 "(i) Providing an alert—The officer may be given a visual and/or audible alert on his vehicle display that a positive match has been detected. An alert may also be sent via the communication network to the police station or other locations"].
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention with a reasonable expectation of success to combine the object identification system using a first and second user input as taught by Mohan with transmitting a message or storing images as taught by Boykin in order to create a more effective and usable system for identifying objects of interest.
Regarding claim 13, Mohan as modified by Boykin teaches claim 9. Boykin further teaches wherein annotating the one or more images to highlight the one or more identified objects in the displayed one or more images comprises at least one of:
framing each object of the one or more identified objects by means of a frame [Boykin ¶ 0064 "(ii) Displaying an image—A video or still image of the detected content may be displayed on the vehicle display. A snapshot (such as video frame 20, 30 or 40) can be displayed, highlighting or putting a bounding box (such as 21, 31, or 41) around the object detected"];
marking each object of the one or more identified objects by means of a specific color; and
marking each object of the one or more identified objects by means of a number, letter or symbol.
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention with a reasonable expectation of success to combine the object identification system using a first and second user input as taught by Mohan with displaying an annotated image as taught by Boykin in order to create a more effective and usable system for identifying objects of interest.
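For illustration only, the three annotation alternatives recited in claim 13 (frame, specific color, number/letter/symbol) can be sketched as follows. This is a minimal sketch: the coordinates, color, and label values are invented for illustration and are not drawn from applicant’s disclosure or from Boykin.

```python
# Hypothetical sketch of the claim 13 annotation alternatives.

def annotate_object(obj, mode):
    if mode == "frame":
        # draw a bounding frame around the object (cf. Boykin's bounding box)
        x, y, w, h = obj["bbox"]
        return {"type": "frame", "corners": [(x, y), (x + w, y + h)]}
    if mode == "color":
        # mark the object with a specific color
        return {"type": "color", "value": "yellow"}
    if mode == "symbol":
        # mark the object with a number, letter, or symbol
        return {"type": "symbol", "value": "1"}
    raise ValueError(mode)

print(annotate_object({"bbox": (10, 20, 100, 50)}, "frame"))
```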
Regarding claim 14, Mohan as modified by Boykin teaches claim 9. Mohan further teaches wherein obtaining the one or more images captured by the external camera unit comprises capturing new images with one or more cameras of the external camera unit based on the first user input [Mohan ¶ 0032 "the media display system 108 may be further configured to control the second image sensor 112 (disposed outside the vehicle 102) to capture a first portion of a second view surrounding the vehicle 102 in the determined direction-of-view of the occupant 104."].
Regarding claim 15, Mohan as modified by Boykin teaches claim 9. Boykin further teaches wherein obtaining the one or more images captured by the external camera unit comprises obtaining, from transitory memory, the one or more images captured by one or more cameras of the external camera unit based on the first user input [Boykin ¶ 0078 " The server 26 may also be configured to analyze the data stored in memory and/or captured by the camera device(s) 16 according to criteria established by an authorized user 27 having access to the server 26 via the communication network 18 (e.g. in response to a BOLO alert)."].
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention with a reasonable expectation of success to combine the object identification system using a first and second user input as taught by Mohan with obtaining an image from memory as taught by Boykin in order to create a more effective and usable system for identifying objects of interest.
Regarding claim 16, Mohan as modified by Boykin teaches claim 15. Mohan further teaches wherein the transitory memory is configured to store images obtained by the external camera unit for a buffer period of time, wherein the buffer period of time is adjustable based on a speed of the vehicle [Mohan ¶ 0076 "The sliding window buffer 204a may be configured to store a predetermined number of image frames. In accordance with an embodiment, the microprocessor 202a may be configured to determine the predetermined number based on the speed of the vehicle 102 and a storage capacity of the sliding window buffer 204a."].
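For illustration only, a sliding-window buffer whose length is adjusted based on vehicle speed, as characterized in the cited Mohan ¶ 0076, can be sketched as follows. The sizing formula below is invented for illustration; Mohan does not specify one.

```python
# Hypothetical sketch of a speed-adjustable sliding-window image buffer.
from collections import deque

def buffer_size(speed_kmh, capacity=600, base=100):
    # One plausible policy: faster travel means the scenery leaves the
    # occupant's view sooner, so keep a shorter window (invented formula).
    return max(10, min(capacity, base - int(speed_kmh)))

def resize_buffer(buf, speed_kmh):
    # deque(iterable, maxlen=n) retains only the most recent n frames
    return deque(buf, maxlen=buffer_size(speed_kmh))

buf = deque(range(50), maxlen=buffer_size(0))  # 100-frame window at standstill
buf = resize_buffer(buf, 80)                   # shrink the window at 80 km/h
print(buf.maxlen)
```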
Regarding claim 17, Mohan as modified by Boykin teaches claim 9. Mohan further teaches wherein obtaining the one or more images captured by the external camera unit comprises filtering images obtained by the external camera unit to identify a subset of images relevant to the first user input [Mohan ¶ 0056 "In accordance with an embodiment, the microprocessor 202a may be further configured to generate a timeline (for example a sliding window) of the captured first portion of the first surrounding view 302a outside the vehicle 102 in the direction-of-view of the occupant 104."].
Claims 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Boykin in view of Mohan.
Regarding claim 18, Boykin teaches A system in a vehicle, comprising:
a processing unit [Boykin ¶ 0038 "As used throughout this disclosure the term “computer” encompasses special purpose microprocessor-based devices such as a digital video surveillance system"]; and
non-transitory memory storing instructions that when executed cause the processing unit to [Boykin ¶ 0038 "In general, memory or a storage device/drive represents a medium accessible by a computer (via wired or wireless connection) to store data and computer program instructions."]:
receive a voice prompt from an occupant of the vehicle via a voice processing unit [Boykin ¶ 0071 "The collection and processing of image data may be stopped and started as desired by a user (e.g. an officer in the vehicle) entering a command (e.g., a voice command or a command that is typed/keyed/entered by touchpad) or pushing a button on the computer 12 or on a BWC 29, or by another incoming communication (e.g. from the alert dispatching source) instructing the computer 12 to cancel or start/resume the particular search/analysis."];
in response to the voice prompt, obtain one or more images captured by one or more external vehicle cameras [Boykin ¶ 0050 "The vehicle 10 is equipped with one or more camera devices 16 to capture image data from the real world. " and ¶ 0071 "At module 54, the analysis entails a data feed 56 of the image data from the linked camera device(s) 16 to the computer 12 microprocessor. The data feed 56 from the camera device(s) 16 to the computer 12 may be wireless or via cabling (e.g. using a wired onboard vehicle camera)."];
identify one or more objects relating to the voice prompt within the one or more images [Boykin ¶ 0053 "The computer 12 microprocessor is configured to search the captured image data for the presence of the designated content according to the received alert or communication." and 0071 "At module 60, the analysis continues with a scan of the image data captured by the camera device(s) 16 to detect for the presence of the designated content."];
display at least one of the one or more images on a display device [Boykin ¶ 0064 "(ii) Displaying an image—A video or still image of the detected content may be displayed on the vehicle display. A snapshot (such as video frame 20, 30 or 40) can be displayed, highlighting or putting a bounding box (such as 21, 31, or 41) around the object detected, and its movements can be tracked on the display in real-time or after the fact."];
take an action in response to the user selection based on the voice prompt and the selected at least one of the one or more objects [Boykin ¶ 0077 "For example, a user can select or draw an area on a map to display vehicles in a given region, along with their associated data such as specific location data/time/number of recorded events/event type/duration, license plate data, vehicle type, shape, color etc. If an event or specific data is of interest, the user can select an option to send a request to any or all vehicle computers 12 to scan their storage drives, that are continuously recording, for the desired information and send back a response with the search results or to retrieve the designated data with time markers of start and stop points to export video, snapshots, or metadata."].
Boykin does not teach receive user selection of at least one of the one or more objects.
However, in a related field of invention, Mohan does teach receive user selection of at least one of the one or more objects [Mohan ¶ 0017 "The disclosed media display system may receive a selection of desired object-of-interest from the occupant and may further recognize the selected object-of-interest and communicate with a content source related to the selected object-of-interest."].
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention with a reasonable expectation of success to combine the object identification system using artificial intelligence as taught by Boykin with using a second user input as a human-in-the-loop system as taught by Mohan in order to create a more effective and usable system for identifying objects of interest.
Regarding claim 19, Boykin as modified by Mohan teaches claim 18. Boykin further teaches wherein the action includes at least one of adjusting one or more vehicle operation states; transmitting one or more messages; and obtaining and displaying information regarding the selected at least one of the one or more objects [Boykin ¶ 0077 "For example, a user can select or draw an area on a map to display vehicles in a given region, along with their associated data such as specific location data/time/number of recorded events/event type/duration, license plate data, vehicle type, shape, color etc. If an event or specific data is of interest, the user can select an option to send a request to any or all vehicle computers 12 to scan their storage drives, that are continuously recording, for the desired information and send back a response with the search results or to retrieve the designated data with time markers of start and stop points to export video, snapshots, or metadata."].
Regarding claim 20, Boykin as modified by Mohan teaches claim 19. Boykin further teaches wherein:
adjusting the one or more vehicle operation states comprises adjusting one or more settings of an advanced driver assistance system (ADAS);
transmitting one or more messages comprises transmitting messages to at least one of the selected at least one of the one or more objects, other vehicles in a vicinity of the vehicle, and emergency services [Boykin ¶ 0077 "For example, a user can select or draw an area on a map to display vehicles in a given region, along with their associated data such as specific location data/time/number of recorded events/event type/duration, license plate data, vehicle type, shape, color etc. If an event or specific data is of interest, the user can select an option to send a request to any or all vehicle computers 12 to scan their storage drives, that are continuously recording, for the desired information and send back a response with the search results or to retrieve the designated data with time markers of start and stop points to export video, snapshots, or metadata."]; and
obtaining the information regarding the selected at least one of the one or more objects comprises accessing one of a database and a cloud.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOSEPHINE RICH whose telephone number is (571) 272-6384. The examiner can normally be reached Monday through Friday, 8:00 am to 5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Scott Browne can be reached at (571) 270-0151. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/J.E.R./Examiner, Art Unit 3666
/SCOTT A BROWNE/Supervisory Patent Examiner, Art Unit 3666