DETAILED ACTION
This action is in response to the original filing on 02/09/2023. Claims 1-15 are pending and have been considered below.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Specification
The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.
Claim Objections
Claims 2, 5, 6, 8, 11, and 15 are objected to because of the following informalities:
Claim 2 recites ‘.-controlling the machine learning unit’; however, it should recite - - controlling the machine learning unit - -.
Claim 2 recites ‘the display of the user interface’; however, it should recite - - a display of the user interface - -.
Claims 2 and 11 recite ‘a user interface’; however, they should recite - - the user interface - -.
Claims 2, 5, and 6 recite ‘the user interface unit’; however, they should recite - - a user interface unit - -.
Claims 6 and 8 do not end with a period.
Claim 11 has a space before the period.
Claim 15 recites ‘the method for correct item identification’; however, it should recite - - a method for correct item identification - -.
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-15 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Regarding claim 1, claim 1 recites “transmit the data and information related to the data”. It is unclear how this limitation is intended to relate to the previously recited sensor data. For the purposes of examination, limitations related to “the data” in the claims are interpreted as: first data
Claim 1 further recites “optionally controlling the machine learning unit to transmit the data and information related to the data to a user interface configured to provide a representation of the data and instructions for a user to label the data, based on the information related to the data”. It is unclear how “optionally” is intended to be interpreted. It is unclear whether only the “controlling” is optional or if all limitations are optional. It is unclear whether “instructions for a user to label the data” is provided by the user interface. It is unclear whether “based on the information related to the data” is intended to modify the transmitting, the providing, the representation, the instructions, or the data. For the purposes of examination, this limitation is interpreted as: controlling the machine learning unit to transmit the data and information related to the data to a user interface, wherein the user interface is configured to provide a representation of the data and the user interface is further configured to provide instructions for a user to label the data, wherein the controlling is based on the information related to the data
Claim 1 further recites “controlling the machine learning unit to re-train a machine learning algorithm stored in the memory based on the data and optionally labelling information”. It is unclear whether “based on the data” is intended to modify the controlling, the re-training, or the memory. It is unclear how the “optionally labelling information” is intended to refer to the previously recited labelling information. It is unclear how the labelling information is optional. It is further unclear whether “optionally labelling information” is intended to modify the controlling, the re-training, or the memory. For the purposes of examination, this limitation is interpreted as: controlling the machine learning unit to re-train a machine learning algorithm stored in the memory, wherein the controlling is based on the data or the labelling information
Regarding claims 14 and 15, claims 14 and 15 contain substantially similar limitations to those found in claim 1. Consequently, claims 14 and 15 are rejected for the same reasons.
Regarding claims 2-13, claims 2-13 are also rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for depending on an indefinite parent claim.
Regarding claim 2, claim 2 recites “controlling the user interface unit to display, on the display of the user interface the data and instructions to label the data, based on the information related to the data”. It is unclear which previously recited limitation “based on the information related to the data” is intended to modify. For the purposes of examination, this limitation is interpreted as: controlling the user interface unit to display, on the display of the user interface the data and instructions to label the data, wherein the controlling is based on the information related to the data
Regarding claim 4, the term “frequently” in claim 4 is a relative term which renders the claim indefinite. The term “frequently” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. For the purposes of examination, “a frequently visited location” is interpreted as: a visited location
Regarding claims 4 and 14, the phrase "preferably" renders the claims indefinite because it is unclear whether the limitations following the phrase are part of the claimed invention. See MPEP § 2173.05(d).
Regarding claim 7, claim 7 recites “based on one or more of the following: a present date or on a time period passed from a last date on which training of the machine learning algorithm has been performed; a regular time interval; an identity of the user and a triggering event”. The relationship between the alternative limitations is unclear. It is unclear how “a present date or on a time” and “an identity of the user and a triggering event” are grouped with the other alternative limitations. For the purposes of examination, this limitation is interpreted as: based on one or more of the following: a present date, a time period passed from a last date on which training of the machine learning algorithm has been performed, a regular time interval, an identity of the user, or a triggering event
Regarding claim 11, claim 11 recites “the user interface, to which the pre-processed data and information related to the pre-processed data are transmitted and to which the first user input is entered”. The claim does not previously recite a user interface to which the pre-processed data and information related to the pre-processed data are transmitted and to which the first user input is entered. The claim does not previously recite pre-processed data. The claim does not previously recite first user input. For the purposes of examination, this limitation is interpreted as: a second user interface, wherein pre-processed data and information related to the pre-processed data are transmitted to the second user interface and first user input is entered in the second user interface
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim 15 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter.
Regarding claim 15, claim 15 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an algorithm and appears to be comprised of software alone. For example, claim 15 recites a machine learning algorithm trained in accordance with the method for correct item identification. According to MPEP 2111, the examiner is obliged to give the terms or phrases their broadest reasonable interpretation as understood by one of ordinary skill in the art unless applicant has provided some indication of the definition of the claimed terms or phrases. Therefore, the examiner interprets these sections as any entity that is capable of performing the functions as recited, which includes software modules. Thus, claim 15 is directed to a software system, which is directed to non-statutory subject matter. See MPEP § 2106.01.
Claims 1-15 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Claims 1, 14, and 15
Step 1: Claims 1, 14, and 15 recite a method, a system, and an algorithm. Claims 1 and 14 are directed to the statutory categories of a method and a machine. As discussed above, claim 15 is directed to non-statutory subject matter; however, it will be addressed here in the interest of compact prosecution.
Step 2A Prong 1: The claims recite, inter alia:
optionally controlling the machine learning unit to transmit the data and information related to the data to a user interface configured to provide a representation of the data and instructions for a user to label the data, based on the information related to the data; assigning labelling information to the data. Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of determining, based on information, to request a label and assigning the label to the data, which is an evaluation or observation that is practically capable of being performed in the human mind with the assistance of pen and paper.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. The additional elements of “A method for training a machine learning algorithm for correct item identification performed by a computing unit of a vehicle, wherein the computing unit comprises a processor, a machine learning unit, a communication unit, and a memory, and wherein the vehicle comprises a sensor, the method comprising the steps of”, “A computing unit, wherein the computing unit is preferably part of a vehicle, and the computing unit performs a method for training a machine learning algorithm for correct item identification performed by a computing unit of a vehicle, wherein the computing unit comprises a processor, a machine learning unit, a communication unit, and a memory, and wherein the vehicle comprises a sensor, the method comprising the steps of”, “A machine learning algorithm trained in accordance with the method for correct item identification performed by a computing unit of a vehicle, wherein the computing unit comprises a processor, a machine learning unit, a communication unit, and a memory, and wherein the vehicle comprises a sensor, the method comprising the steps of”, and “a user interface” amount to no more than generally linking the use of a judicial exception to a particular technological environment or field of use (see MPEP § 2106.05(h)). The claimed computer components are recited at a high level of generality and are merely invoked as a tool to perform the abstract idea. The additional elements of “triggering sensor data collection and controlling the sensor to acquire sensor data” and “transmitting the sensor data to the machine learning unit” amount to insignificant extra-solution activity in the form of mere data gathering and output (see MPEP § 2106.05(g)).
The additional element of “controlling the machine learning unit to re-train a machine learning algorithm stored in the memory based on the data and optionally labelling information” amounts to no more than a recitation of the words "apply it" (or an equivalent) or is no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP 2106.05(f)). Even when viewed in combination, these additional elements do not integrate the abstract idea into a practical application, and the claims are thus directed to the abstract idea.
Step 2B: The claims do not contain significantly more than the judicial exception. The additional elements of “A method for training a machine learning algorithm for correct item identification performed by a computing unit of a vehicle, wherein the computing unit comprises a processor, a machine learning unit, a communication unit, and a memory, and wherein the vehicle comprises a sensor, the method comprising the steps of”, “A computing unit, wherein the computing unit is preferably part of a vehicle, and the computing unit performs a method for training a machine learning algorithm for correct item identification performed by a computing unit of a vehicle, wherein the computing unit comprises a processor, a machine learning unit, a communication unit, and a memory, and wherein the vehicle comprises a sensor, the method comprising the steps of”, “A machine learning algorithm trained in accordance with the method for correct item identification performed by a computing unit of a vehicle, wherein the computing unit comprises a processor, a machine learning unit, a communication unit, and a memory, and wherein the vehicle comprises a sensor, the method comprising the steps of”, and “a user interface” amount to no more than generally linking the use of a judicial exception to a particular technological environment or field of use (see MPEP § 2106.05(h)). Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The additional element of “triggering sensor data collection and controlling the sensor to acquire sensor data” amounts to insignificant extra-solution activity in the form of mere data gathering and output (see MPEP § 2106.05(g)) and is a well-understood, routine, conventional activity (see MPEP § 2106.05(d); “Receiving or transmitting data over a network”).
The additional element of “controlling the machine learning unit to re-train a machine learning algorithm stored in the memory based on the data and optionally labelling information” amounts to no more than a recitation of the words "apply it" (or an equivalent) or is no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP 2106.05(f)). Nothing in the claims provides significantly more than that abstract idea. As such, the claims are ineligible.
Claims 2-13
Step 1: Claims 2-13 recite methods; therefore, they are directed to the statutory category of a method.
Step 2: Claims 2-13 merely narrow the previously recited abstract idea limitations. For the reasons described above with respect to claims 1, 14, and 15, this judicial exception is not meaningfully integrated into a practical application, nor do the claims recite significantly more than the abstract idea. The claims disclose limitations similar to those described for the independent claims above and do not provide anything more than the mental processes that are practically capable of being performed in the human mind with the assistance of pen and paper and mathematical concepts that are achievable through mathematical computation.
Claim 2 further recites the additional element of “controlling the user interface unit to display, on the display of the user interface the data and instructions to label the data, based on the information related to the data”. Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of determining, based on information, to request a label and assigning the label to the data, which is an evaluation or observation that is practically capable of being performed in the human mind with the assistance of pen and paper. The additional elements of “controlling the machine learning unit to transmit the data and information related to the data to a user interface” and “wherein the step of assigning labelling information of the data comprises receiving from the user interface labelling information obtained by a first user input entered through the user interface and transferring the labelling information to the machine learning unit” amount to insignificant extra-solution activity in the form of mere data gathering and output (see MPEP § 2106.05(g)). The additional element of “the user interface unit” amounts to no more than generally linking the use of a judicial exception to a particular technological environment or field of use (see MPEP § 2106.05(h)).
Claim 3 further recites the additional element of “wherein controlling the sensor to transmit sensor data to the machine learning unit comprises controlling the machine learning unit to pre-process the sensor data”. Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of pre-processing data, which is an evaluation or observation that is practically capable of being performed in the human mind with the assistance of pen and paper. The additional element of “transmit sensor data to the machine learning unit” amounts to insignificant extra-solution activity in the form of mere data gathering and output (see MPEP § 2106.05(g)). The additional elements of “sensor” and “machine learning unit” amount to no more than generally linking the use of a judicial exception to a particular technological environment or field of use (see MPEP § 2106.05(h)).
Claim 4 further recites the additional element of “wherein triggering sensor data collection comprises triggering sensor data collection based on location information of the vehicle”. Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of making a determination based on a location, which is an evaluation or observation that is practically capable of being performed in the human mind with the assistance of pen and paper. The additional element of “preferably location information indicating a frequently visited location or a location associated with item identification uncertainty above a specific threshold” amounts to insignificant extra-solution activity in the form of mere data gathering and output (see MPEP § 2106.05(g)). The additional element of “vehicle” amounts to no more than generally linking the use of a judicial exception to a particular technological environment or field of use (see MPEP § 2106.05(h)).
Claim 5 further recites the additional elements of “wherein the location information is obtained from a geotracking device included in the vehicle, or from a second user input entered in the user interface unit indicating the location of the vehicle”. These elements amount to insignificant extra-solution activity in the form of mere data gathering and output (see MPEP § 2106.05(g)) and constitute well-understood, routine, conventional activity (see MPEP § 2106.05(d); “Receiving or transmitting data over a network”).
Claim 6 further recites the additional elements of “wherein triggering sensor data collection comprises triggering sensor data collection based on a command input by the user in the user interface unit”. These elements amount to insignificant extra-solution activity in the form of mere data gathering and output (see MPEP § 2106.05(g)) and constitute well-understood, routine, conventional activity (see MPEP § 2106.05(d); “Receiving or transmitting data over a network”).
Claim 7 further recites the additional elements of “wherein triggering sensor data collection comprises triggering sensor data collection based on one or more of the following: a present date or on a time period passed from a last date on which training of the machine learning algorithm has been performed; a regular time interval; an identity of the user and a triggering event”. These elements amount to insignificant extra-solution activity in the form of mere data gathering and output (see MPEP § 2106.05(g)) and constitute well-understood, routine, conventional activity (see MPEP § 2106.05(d); “Receiving or transmitting data over a network”).
Claim 8 further recites the additional element of “wherein the sensor is a camera”. This element amounts to no more than generally linking the use of a judicial exception to a particular technological environment or field of use (see MPEP § 2106.05(h)). The additional element of “the sensor data is an image or a video” amounts to insignificant extra-solution activity in the form of mere data gathering and output (see MPEP § 2106.05(g)).
Claim 9 further recites the additional elements of “wherein transmitting the sensor data comprises storing the sensor data in the memory”. These elements amount to insignificant extra-solution activity in the form of mere data gathering and output (see MPEP § 2106.05(g)) and constitute well-understood, routine, conventional activity (see MPEP § 2106.05(d); “Receiving or transmitting data over a network”).
Claim 10 further recites the additional elements of “wherein the user interface is in or a part of the vehicle”. These elements amount to no more than generally linking the use of a judicial exception to a particular technological environment or field of use (see MPEP § 2106.05(h)).
Claim 11 further recites the additional elements of “wherein the computing unit comprises a user interface unit and wherein the user interface, to which the pre-processed data and information related to the pre-processed data are transmitted and to which the first user input is entered, is the user interface unit included in the computing unit”. These elements amount to no more than generally linking the use of a judicial exception to a particular technological environment or field of use (see MPEP § 2106.05(h)).
Claim 12 further recites the additional element of “wherein pre-processing the data comprises executing the machine learning algorithm stored in the memory to pre-label the sensor data”. Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of processing data and labeling data. The additional elements of “the machine learning algorithm stored in the memory” amount to no more than generally linking the use of a judicial exception to a particular technological environment or field of use (see MPEP § 2106.05(h)).
Claim 13 further recites the additional elements of “wherein information related to the data comprises an indication of at least one of a location, a time, an identity of the user using the vehicle, or a type of a format of the pre-processed data”. These elements amount to insignificant extra-solution activity in the form of mere data gathering and output (see MPEP § 2106.05(g)) and constitute well-understood, routine, conventional activity (see MPEP § 2106.05(d); “Receiving or transmitting data over a network”).
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-15 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Rasmusson et al. (US 20180136000 A1, published 05/17/2018), hereinafter Rasmusson.
Regarding claim 14, Rasmusson teaches the claim comprising:
A computing unit, wherein the computing unit is preferably part of a vehicle, and the computing unit performs a method for training a machine learning algorithm for correct item identification performed by a computing unit of a vehicle, wherein the computing unit comprises a processor, a machine learning unit, a communication unit, and a memory, and wherein the vehicle comprises a sensor, the method comprising the steps of (Rasmusson Figs. 1-9; [0018], the computing device may learn to improve classifications based on crowd-sourced feedback; [0027], the computing device (e.g., autonomous-vehicle UI device or another computing device or combination of computing devices associated with autonomous vehicle 140) may re-classify the object based on secondary data and/or calculate another confidence score for the classification based on secondary data and/or by using a machine-learning model; [0040], the computing device may be autonomous-vehicle UI device 148, may be navigation system 146, or may be any other suitable computing device associated with autonomous vehicle 140; [0058], FIG. 6 illustrates an example method 600 for using secondary data to classify and render an identified object. The method may begin at step 610, where a computing device receives autonomous-vehicle sensor data representing an external environment within a threshold distance of an autonomous vehicle. The computing device may be autonomous-vehicle UI device 148; [0063], the computing device may input the similarity score, the corresponding subset of data points, the corresponding predetermined pattern (or, if autonomous-vehicle sensor data is pre-classified, the classification), and secondary data into a machine-learning model. 
The machine-learning model may be trained using a training set that includes secondary data as sample data and independently classified objects as desired outputs; [0065], The algorithm used by the machine-learning model may be any suitable algorithm, including a linear regression model, a neural network, Bayesian-based model, or any other suitable type of model; [0072], a computer system may encompass a computing device; software running on one or more computer systems 900 performs one or more steps of one or more methods described or illustrated herein or provides functionality described or illustrated herein; [0072], FIG. 9 illustrates an example computer system 900. In particular embodiments, one or more computer systems 900 perform one or more steps of one or more methods described or illustrated herein; [0073], computer system 900 may include one or more computer systems 900; be unitary; [0074], computer system 900 includes a processor 902, memory 904, storage 906, an input/output (I/O) interface 908, a communication interface 910, and a bus 912):
- triggering sensor data collection and controlling the sensor to acquire sensor data, - transmitting the sensor data to the machine learning unit (Rasmusson Figs. 1-9; [0058], FIG. 6 illustrates an example method 600 for using secondary data to classify and render an identified object. The method may begin at step 610, where a computing device receives autonomous-vehicle sensor data representing an external environment within a threshold distance of an autonomous vehicle. The computing device may be autonomous-vehicle UI device 148; [0060], the received autonomous-vehicle sensor data may contain pre-classified subsets of data points. The pre-classification may have been performed by a processor associated with sensor array 144; [0063], the computing device may input the similarity score, the corresponding subset of data points, the corresponding predetermined pattern (or, if autonomous-vehicle sensor data is pre-classified, the classification), and secondary data into a machine-learning model. 
The machine-learning model may be trained using a training set that includes secondary data as sample data and independently classified objects as desired outputs; The machine-learning model may output a confidence score CS.sub.i that represents the probability that the predetermined pattern (or, if autonomous-vehicle sensor data is pre-classified, the classification) corresponds to the correct classification for the subset of data points; [0065], The algorithm used by the machine-learning model may be any suitable algorithm, including a linear regression model, a neural network, Bayesian-based model, or any other suitable type of model; [0066], At step 660, the computing device may determine whether the confidence score meets a first condition; [0067], If the above two criteria are true, the method may proceed to step 690, where the computing device classifies the subset with the same classification as the classification for the predetermined pattern; if both conditions are met, the method may then proceed to step 695, where the computing device provides instructions to render an object graphic corresponding to the classification in the situational-awareness view),
- optionally controlling the machine learning unit to transmit the data and information related to the data to a user interface configured to provide a representation of the data and instructions for a user to label the data, based on the information related to the data (Rasmusson Figs. 1-9; [0034], FIGS. 3 and 5 show example situational-awareness views. In a situational-awareness view, graphical representations of objects that exist in the external environment of the autonomous vehicle may be displayed on the display screen of autonomous-vehicle UI device 148; [0057], FIG. 5 illustrates an example user interface for gathering secondary information. In particular embodiments, the secondary data may be user-generated data. If the confidence score for a particular classified object is below the threshold, the autonomous-vehicle UI device may send instructions to present a interaction element in the situational-awareness view on the display screen. The interaction element may allow the user to interact with the situational-awareness view by providing information related to the classified object with a below-threshold confidence score. Additionally, the autonomous-vehicle UI device may render the classified object as a block in the situational-awareness view, or alternatively, as a generic object type (e.g., a generic car) or as a generic blob associated with the object coordinate points that have been received. As an example and not by way of limitation, the autonomous-vehicle UI device may classify an identified object as Car Model A (e.g., HONDA CIVIC), but the data associated with the identified object may also resemble Car Model B (e.g., TOYOTA COROLLA). Instead of making a determination based off of moderately unreliable data, the autonomous-vehicle UI device may provide instructions to render the identified object as block 510 and may also provide instructions to present a text module 520 that requests the user to classify the object. 
For example, the text module 520 may state: “What is this object?” and may provide two or more options for the user to select.; As more and more users input object classifications for identified objects, the computing device may learn (e.g., via machine-learning techniques) the subtle differences between similar-looking objects. For example, the corpus of user identified objects may be used to train a machine learning model to identify objects based on the received sensor information; [0067], If either of the above two conditions are not true for any of the confidence scores, the method may proceed to step 680. At this step, the computing device may provide instructions to display a prompt on the autonomous-vehicle UI device 148 or client device 130 requesting the passenger to input a classification for the object. As an example and not by way of limitation, the prompt may state, “Jenny, help us learn about our environment. The highlighted object is a (A) a car, (B) a dumpster, or (C) other.”)
- assigning labelling information to the data, and - controlling the machine learning unit to re-train a machine learning algorithm stored in the memory based on the data and optionally labelling information (Rasmusson Figs. 1-9; [0018], the computing device may learn to improve classifications based on crowd-sourced feedback; The computing device may accept the user's input as the classification for the object and provide a corresponding object graphic in the appropriate location in the situational-awareness view. The object graphic may be configured to move as the object moves in a natural manner so that the object appears as natural and life-like as possible. Further, this user-generated classification may be used in the future by the computing device to make more accurate automated classifications; [0027], This disclosure contemplates any suitable computing device to carry out the embodiments described herein. The computing device may be autonomous-vehicle UI device 148; [0057], FIG. 5 illustrates an example user interface for gathering secondary information. In particular embodiments, the secondary data may be user-generated data; As more and more users input object classifications for identified objects, the computing device may learn (e.g., via machine-learning techniques) the subtle differences between similar-looking objects. For example, the corpus of user identified objects may be used to train a machine learning model to identify objects based on the received sensor information; [0058], The method may begin at step 610, where a computing device receives autonomous-vehicle sensor data representing an external environment within a threshold distance of an autonomous vehicle. 
The computing device may be autonomous-vehicle UI device 148; [0063], The machine-learning model may be trained using a training set that includes secondary data as sample data and independently classified objects as desired outputs; [0065], The algorithm used by the machine-learning model may be any suitable algorithm, including a linear regression model, a neural network, Bayesian-based model, or any other suitable type of model; [0074], In particular embodiments, computer system 900 includes a processor 902, memory 904, storage 906)
Regarding claims 1 and 15, these claims contain substantially similar limitations to those found in claim 14. Consequently, claims 1 and 15 are rejected for the same reasons.
Regarding claim 2, Rasmusson teaches all the limitations of claim 1, further comprising:
- controlling the machine learning unit to transmit the data and information related to the data to a user interface, and - controlling the user interface unit to display, on the display of the user interface the data and instructions to label the data, based on the information related to the data, wherein the step of assigning labelling information of the data comprises receiving from the user interface labelling information obtained by a first user input entered through the user interface and transferring the labelling information to the machine learning unit (Rasmusson Figs. 1-9; [0018], the computing device may learn to improve classifications based on crowd-sourced feedback; The computing device may accept the user's input as the classification for the object and provide a corresponding object graphic in the appropriate location in the situational-awareness view. The object graphic may be configured to move as the object moves in a natural manner so that the object appears as natural and life-like as possible. Further, this user-generated classification may be used in the future by the computing device to make more accurate automated classifications; [0057], FIG. 5 illustrates an example user interface for gathering secondary information. In particular embodiments, the secondary data may be user-generated data. If the confidence score for a particular classified object is below the threshold, the autonomous-vehicle UI device may send instructions to present a interaction element in the situational-awareness view on the display screen. The interaction element may allow the user to interact with the situational-awareness view by providing information related to the classified object with a below-threshold confidence score. 
Additionally, the autonomous-vehicle UI device may render the classified object as a block in the situational-awareness view, or alternatively, as a generic object type (e.g., a generic car) or as a generic blob associated with the object coordinate points that have been received. As an example and not by way of limitation, the autonomous-vehicle UI device may classify an identified object as Car Model A (e.g., HONDA CIVIC), but the data associated with the identified object may also resemble Car Model B (e.g., TOYOTA COROLLA). Instead of making a determination based off of moderately unreliable data, the autonomous-vehicle UI device may provide instructions to render the identified object as block 510 and may also provide instructions to present a text module 520 that requests the user to classify the object. For example, the text module 520 may state: “What is this object?” and may provide two or more options for the user to select.; As more and more users input object classifications for identified objects, the computing device may learn (e.g., via machine-learning techniques) the subtle differences between similar-looking objects. For example, the corpus of user identified objects may be used to train a machine learning model to identify objects based on the received sensor information; [0063], The machine-learning model may be trained using a training set that includes secondary data as sample data and independently classified objects as desired outputs; [0067], the prompt may state, “Jenny, help us learn about our environment. The highlighted object is a (A) a car, (B) a dumpster, or (C) other.”; see also [0034])
Regarding claim 3, Rasmusson teaches all the limitations of claim 1, further comprising:
wherein controlling the sensor to transmit sensor data to the machine learning unit comprises controlling the machine learning unit to pre-process the sensor data (Rasmusson Figs. 1-9; [0058], FIG. 6 illustrates an example method 600 for using secondary data to classify and render an identified object. The method may begin at step 610, where a computing device receives autonomous-vehicle sensor data representing an external environment within a threshold distance of an autonomous vehicle. The computing device may be autonomous-vehicle UI device 148; [0059], At step 620, the computing device may identify, from the autonomous-vehicle sensor data, one or more subsets of data points that each correspond to one or more objects surrounding the vehicle; [0060], the received autonomous-vehicle sensor data may contain pre-classified subsets of data points. The pre-classification may have been performed by a processor associated with sensor array 144; [0062], At step 640, the computing device may calculate a similarity score SS_i for each predetermined pattern based on how similar the respective predetermined pattern is to the subset of data points; [0063], the computing device may input the similarity score, the corresponding subset of data points, the corresponding predetermined pattern (or, if autonomous-vehicle sensor data is pre-classified, the classification), and secondary data into a machine-learning model. The machine-learning model may be trained using a training set that includes secondary data as sample data and independently classified objects as desired outputs; The machine-learning model may output a confidence score CS_i that represents the probability that the predetermined pattern (or, if autonomous-vehicle sensor data is pre-classified, the classification) corresponds to the correct classification for the subset of data points; see also [0065])
Regarding claim 4, Rasmusson teaches all the limitations of claim 1, further comprising:
wherein triggering sensor data collection comprises triggering sensor data collection based on location information of the vehicle, preferably location information indicating a frequently visited location or a location associated with item identification uncertainty above a specific threshold (Rasmusson Figs. 1-9; [0022], data stores may be used to store various types of information, including secondary data such as map data, historical data (e.g., data gathered from past rides); The next time the autonomous vehicle approaches this particular intersection, instead of re-processing the autonomous-vehicle sensor data and re-classifying the objects in the intersection, the computing device may access the historical data from the data stores; [0025], secondary data may include, for example, IMU data, GPS data, historical data (e.g., past classifications made during previous rides along the same or similar roads); [0035], In particular embodiments, autonomous-vehicle UI device 148 may have an interactive touchscreen display; [0044], To be able to make a better estimate of what the subset of data points represents in the real world, the computing device may take secondary data, along with the subset of data points, as input to a machine-learning model. To continue the above example of the subset of data points that resembles both a motorcycle and a car, the secondary data may include two categories of data: GPS data, and weather data. The GPS data may indicate that the subset of data points was captured in Buffalo, N.Y.; [0054], In particular embodiments, the secondary data may be historical data. Historical data may include previous identifications and classifications of objects along a particular route. When providing rides to requestors 101, dynamic transportation matching system 160 may store the identified and classified objects along a route. 
As an example and not by way of limitation, the computing device may access information related to rides traveling from San Francisco International Airport (SFO) to Palo Alto, Calif. This information may include the objects that have been identified and classified in previous rides by autonomous vehicles from SFO to Palo Alto. The computing device (e.g., autonomous-vehicle UI device or any other suitable computing device or combination of computing devices) may load at least some of the object graphics that correspond to previously identified and classified objects along the route from SFO to Palo Alto. As the autonomous vehicle 140 navigates along the route, the autonomous-vehicle UI device may display the object graphics in the situational-awareness view; Computing resources can be devoted to identifying and classifying moving objects on the road rather than stationary objects like billboards and buildings; [0057], FIG. 5 illustrates an example user interface for gathering secondary information. In particular embodiments, the secondary data may be user-generated data. If the confidence score for a particular classified object is below the threshold, the autonomous-vehicle UI device may send instructions to present a interaction element in the situational-awareness view on the display screen; [0062], At step 640, the computing device may calculate a similarity score SS_i for each predetermined pattern based on how similar the respective predetermined pattern is to the subset of data points; see also [0058-0059], [0065])
Regarding claim 5, Rasmusson teaches all the limitations of claim 4, further comprising:
wherein the location information is obtained from a geotracking device included in the vehicle, or from a second user input entered in the user interface unit indicating the location of the vehicle (Rasmusson Figs. 1-9; [0022], data stores may be used to store various types of information, including secondary data such as map data, historical data (e.g., data gathered from past rides); [0025], secondary data may include, for example, IMU data, GPS data, historical data (e.g., past classifications made during previous rides along the same or similar roads); [0035], In particular embodiments, autonomous-vehicle UI device 148 may have an interactive touchscreen display; [0044], To be able to make a better estimate of what the subset of data points represents in the real world, the computing device may take secondary data, along with the subset of data points, as input to a machine-learning model. To continue the above example of the subset of data points that resembles both a motorcycle and a car, the secondary data may include two categories of data: GPS data, and weather data. The GPS data may indicate that the subset of data points was captured in Buffalo, N.Y.; [0054], In particular embodiments, the secondary data may be historical data. Historical data may include previous identifications and classifications of objects along a particular route. When providing rides to requestors 101, dynamic transportation matching system 160 may store the identified and classified objects along a route. As an example and not by way of limitation, the computing device may access information related to rides traveling from San Francisco International Airport (SFO) to Palo Alto, Calif. This information may include the objects that have been identified and classified in previous rides by autonomous vehicles from SFO to Palo Alto. 
The computing device (e.g., autonomous-vehicle UI device or any other suitable computing device or combination of computing devices) may load at least some of the object graphics that correspond to previously identified and classified objects along the route from SFO to Palo Alto. As the autonomous vehicle 140 navigates along the route, the autonomous-vehicle UI device may display the object graphics in the situational-awareness view; Computing resources can be devoted to identifying and classifying moving objects on the road rather than stationary objects like billboards and buildings; [0057], FIG. 5 illustrates an example user interface for gathering secondary information. In particular embodiments, the secondary data may be user-generated data. If the confidence score for a particular classified object is below the threshold, the autonomous-vehicle UI device may send instructions to present a interaction element in the situational-awareness view on the display screen; [0062], At step 640, the computing device may calculate a similarity score SS_i for each predetermined pattern based on how similar the respective predetermined pattern is to the subset of data points; see also [0058-0059], [0065])
Regarding claim 6, Rasmusson teaches all the limitations of claim 1, further comprising:
wherein triggering sensor data collection comprises triggering sensor data collection based on a command input by the user in the user interface unit (Rasmusson Figs. 1-9; [0035], Users 101 of the ride service may interface with autonomous-vehicle 140 by interfacing with autonomous-vehicle UI device 148 to obtain information (e.g. ETA, ride length, current location, nearby attractions), input commands to the autonomous vehicle (e.g. set a new destination, end the current ride, pick up another passenger, view information related to nearby attractions, view payment information), or perform any other suitable interaction; [0050], The user may then be able to interact with the map in any suitable manner (e.g., change a destination, route to the destination, etc.). As another example and not by way of limitation, if the user taps on destination indicator interface element 360, information about the destination may be displayed, such as miles remaining until destination is reached, or estimated time of arrival. The user may be able to set a new destination, see information related to the destination, or view any other suitable information; [0058], FIG. 6 illustrates an example method 600 for using secondary data to classify and render an identified object. The method may begin at step 610, where a computing device receives autonomous-vehicle sensor data representing an external environment within a threshold distance of an autonomous vehicle. The computing device may be autonomous-vehicle UI device 148; see also [0060-0067])
Regarding claim 7, Rasmusson teaches all the limitations of claim 1, further comprising:
wherein triggering sensor data collection comprises triggering sensor data collection based on one or more of the following: a present date or on a time period passed from a last date on which training of the machine learning algorithm has been performed; a regular time interval; an identity of the user and a triggering event (Rasmusson Figs. 1-9; [0035], Users 101 of the ride service may interface with autonomous-vehicle 140 by interfacing with autonomous-vehicle UI device 148 to obtain information (e.g. ETA, ride length, current location, nearby attractions), input commands to the autonomous vehicle (e.g. set a new destination, end the current ride, pick up another passenger, view information related to nearby attractions, view payment information), or perform any other suitable interaction; [0036], A privacy setting of a user may determine what information associated with the user may be logged, how information associated with the user may be logged, when information associated with the user may be logged, who may log information associated with the user, whom information associated with the user may be shared with, and for what purposes information associated with the user may be logged or shared. Authorization servers may be used to enforce one