DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
The amendments received 08-29-2025 have been considered by the examiner.
Claims 1 and 14 have been amended.
Claims 11 and 18 were previously canceled.
No new claims have been introduced.
Claims 1-10, 12-17, and 19-20 are currently pending.
This Office action is responsive to the amendment filed after the non-final Office action.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-3, 6-8, 10, 12-14, 16-17, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Kim (US 20200042832 A1) in view of Ma (US 20210081724 A1) and further in view of Zadeh (US 20220121884 A1).
REGARDING CLAIM 1, Kim discloses, a sensor included in an autonomous vehicle (Kim: [ABS]; [0040]; [0047]) and configured to acquire input data of recognition logic for autonomous driving (Kim: [0093]; [0101]; [0104]); a processor configured to determine whether the acquired input data is necessary for the learning of the recognition logic (Kim: [0183]; [0204]), through a pre-learned artificial neural network (ANN)-based learning model (Kim: [0179]; [0183]; [0191]; [0204]); and a storage configured to store input data, which is determined to be necessary for learning of the recognition logic, from among the acquired input data (Kim: [0179]; [0197]; [0204]).
Kim does not explicitly recite the terminology "which is determined to be necessary ...". However, Kim discloses updating an AI model while leveraging or considering stored training and test data, selecting a small amount of data for updating the model rather than using all of the data, which is interpreted as determining what is "necessary" because not all data is considered.
The examiner respectfully submits that Kim discloses, determine whether the acquired input data is necessary for the learning of the recognition logic (Kim: [0183]; [0204]).
Kim does not explicitly disclose, calculate a vector value through the learning model; calculate one vector in an intermediate stage of a process of calculating a result value through the learning model; evaluate the calculated one vector through a predetermined hyperplane that is a criterion for determination; determine whether the acquired input data is necessary for the learning of the recognition logic, based on the calculated vector value and the predetermined hyperplane in a vector space including the vector value; determine whether the input data is necessary for the learning of the recognition logic, depending on a location of the vector in the intermediate stage based on the hyperplane in the vector space; determine that the input data is necessary for the learning of the recognition logic and determine that the input data is not necessary for the learning of the recognition logic.
However, in the same field of endeavor, Ma discloses, calculate a vector value through the learning model (Ma: [0047] the neural network 200 predicts a drivable portion and/or non-drivable portion in the image data of the environment and the support vector machine determines whether the predicted drivable portion of the environment output by the neural network is classified as drivable based on a hyperplane of the support vector machine; [FIG. 2(230)(235)]); calculate one vector in an intermediate stage of a process of calculating a result value through the learning model (Ma: [0015] implement a neural network and a support vector machine (“SVM”) in a unique configuration that enables online updating … to generate an initial prediction; [0017] the support vector machine having a defined and adjustable hyperplane provides a convenient update ability to the classifier system ... The hyperplane confirms and/or updates predictions of drivable and non-drivable portions of image data of an environment generated by the neural network. Therefore, by adjusting the hyperplane of the support vector machine); evaluate the calculated one vector through a predetermined hyperplane that is a criterion for determination (Ma: [ABS] The support vector machine determines whether the predicted drivable portion of the environment output by the neural network is classified as drivable based on a hyperplane of the support vector machine and output an indication of the drivable portion of the environment; [0017] The hyperplane confirms and/or updates predictions of drivable and non-drivable portions of image data of an environment generated by the neural network. Therefore, by adjusting the hyperplane of the support vector machine; [0019] The gamma parameter of the support vector machine defines how far the influence of a single training example reaches, where low values mean ‘far’ and high values mean ‘close’. 
In other words, with low gamma, points far away from the plausible hyperplane are considered in calculation for the hyperplane, whereas high gamma means the points close to the plausible hyperplane are considered in calculation; [0033]; [0040] A support vector machine 230 can perform complex data transformations to determine how to separate data based on the labels or outputs defined. The hyperplane 235 defines the separation in the dataset); determine whether the acquired input data is necessary for the learning of the recognition logic (Ma: [0050] In instances where an update is not determined to be needed, for example a “NO” determination is made at block 550, the method may return to block 510 to retrieve or receive a new image data 144. However, in instances wherein an update is determined, for example a “YES” determination is made at block 550, the process advances from block 550 to block 560 ... The annotated environment image data 250 (FIG. 2) generated at block 560 is input into the support vector machine to update the hyperplane (e.g., hyperplane 235, FIG. 2) at block 570. Processing of the annotated environment image data 250 may cause the support vector machine 230 to re-optimize the hyperplane 235. In some embodiments, at block 570 the electronic controller may receive annotated image data of a new environment thereby causing an automatic update of the hyperplane of the support vector machine based on the predicted drivable portions of the environment output by the neural network. That is the annotated image data of a new environment is input to the support vector machine to update the support vector machine online without retraining the neural network; [FIG. 
2(230)(235)]), based on the calculated vector value and the predetermined hyperplane in a vector space including the vector value (Ma: [0047] the support vector machine determines whether the predicted drivable portion of the environment output by the neural network is classified as drivable based on a hyperplane of the support vector machine; [0050] The annotated environment image data 250 (FIG. 2) generated at block 560 is input into the support vector machine to update the hyperplane (e.g., hyperplane 235, FIG. 2) at block 570. Processing of the annotated environment image data 250 may cause the support vector machine 230 to re-optimize the hyperplane 235. In some embodiments, at block 570 the electronic controller may receive annotated image data of a new environment thereby causing an automatic update of the hyperplane of the support vector machine based on the predicted drivable portions of the environment output by the neural network. That is the annotated image data of a new environment is input to the support vector machine to update the support vector machine online without retraining the neural network; [FIG. 2(230)(235)]); determine whether the input data is necessary for the learning of the recognition logic (Ma: [0050] In instances where an update is not determined to be needed, for example a “NO” determination is made at block 550, the method may return to block 510 to retrieve or receive a new image data 144. However, in instances wherein an update is determined, for example a “YES” determination is made at block 550, the process advances from block 550 to block 560 ... The annotated environment image data 250 (FIG. 2) generated at block 560 is input into the support vector machine to update the hyperplane (e.g., hyperplane 235, FIG. 2) at block 570. Processing of the annotated environment image data 250 may cause the support vector machine 230 to re-optimize the hyperplane 235. 
In some embodiments, at block 570 the electronic controller may receive annotated image data of a new environment thereby causing an automatic update of the hyperplane of the support vector machine based on the predicted drivable portions of the environment output by the neural network. That is the annotated image data of a new environment is input to the support vector machine to update the support vector machine online without retraining the neural network; [FIG. 2(230)(235)]), depending on a location of the vector in the intermediate stage (Ma: [0039] a support vector machine 230 may be configured to receive an output from neural network 200 and further predict or determine drivable and/or non-drivable portions of the environment based on a hyperplane 235 of the support vector machine ... as new terrain is encountered, the classifier system (also referred to as the traversability network) can generate new examples of drivable and non-drivable scenarios. These may be added as positive and negative examples to the support vector machine 230 to adjust the parameters of the hyperplane 235, thus adjusting the hyperplane 235 to a more optimal hyperplane accounting for the new and/or variations in previously analyzed datasets; [0050]) based on the hyperplane in the vector space (Ma: [0039-0040] The hyperplane 235 defines the separation in the dataset. The hyperplane is an n−1 dimensional subspace of an n-dimensional Euclidean space. For example, if the dataset is 1D, a single point represents the hyperplane; if the dataset is 2D, the hyperplane is a line; if the dataset is 3D, the hyperplane is a plane; and so on. In some embodiments, one or more hyperplanes may be defined that separate classes of data; [0050]); determine that the input data is necessary for the learning of the recognition logic (Ma: [0050]; [FIG. 2(230)(235)]) and determine that the input data is not necessary for the learning of the recognition logic (Ma: [0050]; [FIG. 
2(230)(235)]), for the benefit of classifying drivable and non-drivable vectors in a portion (examiner: predetermined) of the vehicle environment.
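For illustration only (hypothetical code, not drawn from any cited reference), the gamma behavior Ma describes at [0019] corresponds to the standard RBF kernel, k(x, x') = exp(-gamma * ||x - x'||^2): with a low gamma, a point far from the hyperplane still yields a non-negligible kernel value and so is "considered in calculation," while a high gamma drives the value of far points toward zero, so that only close points matter.

```python
import math

def rbf(x, x_prime, gamma):
    """Standard RBF kernel value: exp(-gamma * squared Euclidean distance)."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, x_prime))
    return math.exp(-gamma * sq_dist)

# Hypothetical far-away point: squared distance 3^2 + 4^2 = 25.
far = ([0.0, 0.0], [3.0, 4.0])
low_gamma = rbf(*far, gamma=0.01)   # exp(-0.25) ~ 0.78: far point still influences
high_gamma = rbf(*far, gamma=1.0)   # exp(-25) ~ 1.4e-11: influence effectively vanishes
```

The numeric contrast (about 0.78 versus about 1.4e-11 for the same pair of points) is what Ma's "low values mean 'far' and high values mean 'close'" language describes.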
In considering the disclosure of a reference, it is proper to take into account not only specific teachings of the reference but also the inferences which one skilled in the art would reasonably be expected to draw therefrom. The examiner respectfully submits that determining whether something is necessary is also determining whether it is not necessary.
Ma does not explicitly recite the terminology "criterion". However, Ma is replete with the use of hyperplanes for defining, updating, and transforming drivable vectors, which implies a "criterion".
In this case, a "predetermined hyperplane" is interpreted as a 2D plane in a 3D space, in a portion (examiner: predetermined) of the vehicle environment that separates data points (used for training, not used for training).
In this case, "intermediate" is interpreted as any part of the process after initiation, because of the perpetual updating, including after a final rendering, as described in the instant specification [0014, 0088-0089, 0120-0130].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Kim to include creating vectors using a neural network as taught by Ma. One of ordinary skill in the art would have been motivated to make this modification, with a reasonable expectation of success, in order to classify drivable and non-drivable vectors in a portion of the vehicle environment.
Kim, as modified, does not explicitly disclose, a determination based on “when the vector in the intermediate stage is located in a same direction with respect to the hyperplane in the vector space”; or “when the vector in the intermediate stage is located in a different direction with respect to the hyperplane in the vector space”.
However, in the same field of endeavor, Zadeh discloses, when the vector in the intermediate stage is located in a same direction with respect to the hyperplane in the vector space (Zadeh: [3008] We present a method and system for iterative preprocessing for training a support vector machine, e.g., for a large dataset, based on balancing the center of mass of input data, e.g., within a variable margin about the hyperplane. At each iteration, the input data is projected on the hyperplane (or on a vector parallel to the hyperplane), and the imbalance of the center of mass for different classes within a variable margin is used to update the direction of the hyperplane within the feature space, in addition to other factors including the estimate of slack error changes due data points entering and exiting the margin. In one embodiment, an estimate for the margin and the regularization constant is provided based on scanning/counting an ordered list of projected data points on a direction perpendicular to the hyperplane. In one embodiment, a fuzzy membership function for data points is used as an input (or estimated), for example, to determine center of mass and/or count data points which violate the margin; [3010] Support vector machines (SVMs) are powerful tools for classification of input data based on structural risk minimization. SVM uses a hyperplane (within the input space, in case of linear SVM, or in a feature space, in case of non-linear SVM) to separate the input data based on their classification while maximizing the margin from the input data; [3021-3025]); when the vector in the intermediate stage is located in a different direction with respect to the hyperplane in the vector space (Zadeh: [3008] We present a method and system for iterative preprocessing for training a support vector machine, e.g., for a large dataset, based on balancing the center of mass of input data, e.g., within a variable margin about the hyperplane.
At each iteration, the input data is projected on the hyperplane (or on a vector parallel to the hyperplane), and the imbalance of the center of mass for different classes within a variable margin is used to update the direction of the hyperplane within the feature space, in addition to other factors including the estimate of slack error changes due data points entering and exiting the margin. In one embodiment, an estimate for the margin and the regularization constant is provided based on scanning/counting an ordered list of projected data points on a direction perpendicular to the hyperplane. In one embodiment, a fuzzy membership function for data points is used as an input (or estimated), for example, to determine center of mass and/or count data points which violate the margin; [3010] Support vector machines (SVMs) are powerful tools for classification of input data based on structural risk minimization. SVM uses a hyperplane (within the input space, in case of linear SVM, or in a feature space, in case of non-linear SVM) to separate the input data based on their classification while maximizing the margin from the input data; [3021-3025]), for the benefit of separating and classifying collected data to estimate sample errors.
In considering the disclosure of a reference, it is proper to take into account not only specific teachings of the reference but also the inferences which one skilled in the art would reasonably be expected to draw therefrom. The examiner respectfully submits that updating only when a vector is parallel implies not updating when the vector is not parallel.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by modified Kim to include the updating conditions taught by Zadeh. One of ordinary skill in the art would have been motivated to make this modification, with a reasonable expectation of success, in order to separate and classify collected data to estimate sample errors.
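For illustration only (hypothetical code, not part of any cited reference or of the claims), the same-direction/different-direction determination discussed above reduces to a sign test against a linear hyperplane w·x + b = 0: a feature vector falling on the reference side of the hyperplane would be retained as "necessary" training data, and a vector on the opposite side would not.

```python
def side_of_hyperplane(w, b, x):
    """Sign of w.x + b: which side of the hyperplane the vector x lies on
    (+1 reference side, -1 opposite side, 0 on the hyperplane)."""
    s = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if s > 0 else -1 if s < 0 else 0

# Hypothetical 3D example: the hyperplane is the plane z = 0 (w = (0, 0, 1), b = 0).
w, b = (0.0, 0.0, 1.0), 0.0
necessary = side_of_hyperplane(w, b, (2.0, -1.0, 0.5)) == 1      # same direction: keep for training
not_necessary = side_of_hyperplane(w, b, (2.0, -1.0, -0.5)) == 1  # different direction: do not keep
```

This matches the interpretation above of a "predetermined hyperplane" as a 2D plane in a 3D space separating data points used for training from data points not used for training.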
REGARDING CLAIM 2, Kim, as modified, remains as applied above to claim 1, and further, Kim also discloses, the sensor includes: at least one of a camera that acquires an image of a surrounding object of the autonomous vehicle, a light detection and ranging (LiDAR) that detects a location of the surrounding object, a radio detection and ranging (radar), or an ultrasonic sensor (Kim: [0052]; [0091-0093, 0096]; [0102]; [0104]; [0111]).
REGARDING CLAIM 3, Kim, as modified, remains as applied above to claim 1, and further, Kim also discloses, a communication device configured to communicate with a server (Kim: [FIG. 3(arrows and element 200)]), wherein the processor is configured to: determine whether it is possible to update the learning model from the server, through the communication device; and update the learning model through the server when it is possible to update the learning model (Kim: [0197]; [0204]).
Kim does not explicitly recite the terminology "determine whether it is possible to update ...". However, Kim discloses updating AI while leveraging or considering stored training and test data, which is interpreted as determining "possible". Further, "updating" per se suggests that it has been determined "possible" (also see [0179] … only a much smaller amount of training data may be stored rather than all the sensor data being stored as the training data ... since the sensor data which is well determined by the artificial intelligence model is excluded from the training data …).
REGARDING CLAIM 6, Kim, as modified, remains as applied above to claim 1, and further, Kim also discloses, the recognition logic includes: logic that performs at least one of detection, recognition, classification, or segmentation for a surrounding object of the autonomous vehicle based on the acquired input data (Kim: [0158]; [0161]).
REGARDING CLAIM 7, Kim, as modified, remains as applied above to claim 1, and further, Kim also discloses, the processor is configured to: determine whether the acquired input data is necessary for the learning of the recognition logic, through the learning model based on the acquired input data and a result of applying the acquired input data to the recognition logic (Kim: [0054]; [0077]; [0197]; [0204]).
Kim does not explicitly recite the terminology "determine whether the acquired input data is necessary ...". However, Kim discloses updating AI while considering stored training and test data, and determining the value of new data, which is interpreted as determining "necessary" (also see [0179] … only a much smaller amount of training data may be stored rather than all the sensor data being stored as the training data ... since the sensor data which is well determined by the artificial intelligence model is excluded from the training data …).
REGARDING CLAIM 8, Kim, as modified, remains as applied above to claim 7, and further, Kim also discloses, the result of applying the acquired input data to the recognition logic includes (Kim: [ABS]): at least one of information about a two-dimensional (2D) location of a surrounding object of the autonomous vehicle, information about a three-dimensional (3D) location of the surrounding object (Kim: [0111-0112]), a type of the surrounding object (Kim: [0107]), or reliability (Kim: [ABS]; [0010]; [0159-0169]).
REGARDING CLAIM 10, Kim, as modified, remains as applied above to claim 8, and further, Kim also discloses, the information about the 3D location of the surrounding object includes: information about at least one of a location, a size, or an approach angle of the surrounding object (Kim: [0111-0112]).
The examiner respectfully submits that, to one of ordinary skill, 3D point cloud data and an area map (the disclosure is replete with references to SLAM) suggest or imply a location, a size, or an approach angle.
REGARDING CLAIM 12, Kim, as modified, remains as applied above to claim 1, and further, Kim also discloses, determine whether the acquired input data is necessary for the learning of the recognition logic, through the ANN-based learning model (Kim: [0054]; [0077]; [0179]; [0183]; [0197]; [0204]) including at least one of one or more convolutional neural networks, batch normalization, or an activation layer (Kim: [0026-0027]).
REGARDING CLAIM 13, Kim, as modified, remains as applied above to claim 1, and further, Kim also discloses, determine whether the acquired input data is necessary for the learning of the recognition logic, based on whether a result value output through the learning model exceeds a predetermined threshold value (Kim: [0054]; [0077]; [0179]).
The examiner respectfully submits that determining a performance improvement by evaluating the value of new input against a reference value is interpreted as determining the value of data based upon a reference value.
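For illustration only (hypothetical code, not drawn from Kim or the claims), the threshold determination recited in claim 13 reduces to comparing the learning model's scalar result value against a predetermined reference value:

```python
def data_is_necessary(model_score, threshold=0.8):
    """Retain the input for training only when the learning model's
    result value exceeds the predetermined threshold value."""
    return model_score > threshold

keep = data_is_necessary(0.93)     # exceeds the reference value
discard = data_is_necessary(0.42)  # does not exceed the reference value
```

The specific score values and the 0.8 threshold are assumptions for the sketch; the point is only that "exceeds a predetermined threshold value" is a strict comparison against a fixed reference.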
REGARDING CLAIM 14, Kim discloses, a sensor included in an autonomous vehicle and configured to acquire input data for a recognition logic for autonomous driving (Kim: [0093]; [0101]; [0104]); a processor configured to determine whether the acquired input data is necessary for the learning of the recognition logic (Kim: [0183]; [0204]), through a pre-learned artificial neural network (ANN)-based learning model (Kim: [0179]; [0183]; [0191]; [0204]); and a storage configured to store input data, which is determined to be necessary for learning of the recognition logic, from among the acquired input data (Kim: [0179]; [0197]; [0204]).
Kim does not explicitly recite the terminology "which is determined to be necessary ...". However, Kim discloses updating an AI model while leveraging or considering stored training and test data, selecting a small amount of data for updating the model rather than using all of the data, which is interpreted as determining what is "necessary" because not all data is considered.
The examiner respectfully submits that Kim discloses, determine whether the acquired input data is necessary for the learning of the recognition logic (Kim: [0183]; [0204]).
Kim does not explicitly disclose, wherein the determining, by the processor, of whether the acquired input data is necessary for the learning of the recognition logic, through the pre-learned ANN-based learning model includes: calculating, by the processor, a vector value through the learning model; calculating, by the processor, one vector in an intermediate stage of a process of calculating a result value through the learning model; evaluating, by the processor, the calculated one vector through a predetermined hyperplane that is a criterion for determination; and determining, by the processor, whether the acquired input data is necessary for the learning of the recognition logic, based on the calculated vector value and the predetermined hyperplane in a vector space including the vector value, and wherein the determining, by the processor, whether the acquired input data is necessary for the learning of the recognition logic, based on the calculated vector value and the predetermined hyperplane in the vector space including the vector value includes: determining, by the processor, whether the input data is necessary for the learning of the recognition logic, depending on a location of the vector in the intermediate stage based on the hyperplane in the vector space, and wherein the determining, by the processor, whether the input data is necessary for the learning of the recognition logic, depending on the location of the vector in the intermediate stage.
However, in the same field of endeavor, Ma discloses, wherein the determining, by the processor, of whether the acquired input data is necessary for the learning of the recognition logic, through the pre-learned ANN-based learning model includes: calculating, by the processor, a vector value through the learning model (Ma: [0047]; [FIG. 2(230)(235)]); calculating, by the processor, one vector in an intermediate stage of a process of calculating a result value through the learning model (Ma: [0015]; [0017]); evaluating, by the processor, the calculated one vector through a predetermined hyperplane that is a criterion for determination (Ma: [ABS]; [0017]; [0019]; [0033]; [0040]); and determining, by the processor, whether the acquired input data is necessary for the learning of the recognition logic (Ma: [0050]; [FIG. 2(230)(235)]), based on the calculated vector value and the predetermined hyperplane in a vector space including the vector value (Ma: [0047]; [0050]; [FIG. 2(230)(235)]), and wherein the determining, by the processor, whether the acquired input data is necessary for the learning of the recognition logic (Ma: [0050]; [FIG. 2(230)(235)]), based on the calculated vector value and the predetermined hyperplane in the vector space including the vector value includes: determining, by the processor, whether the input data is necessary for the learning of the recognition logic (Ma: [0050]; [FIG. 2(230)(235)]), depending on a location of the vector in the intermediate stage (Ma: [0039]; [0050]) based on the hyperplane in the vector space (Ma: [0039-0040]; [0050]), and wherein the determining, by the processor, whether the input data is necessary for the learning of the recognition logic, depending on the location of the vector in the intermediate stage (Ma: [0050]; [FIG. 2(230)(235)]), for the benefit of classifying drivable and non-drivable vectors in a portion of the vehicle environment.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Kim to include creating vectors using a neural network as taught by Ma. One of ordinary skill in the art would have been motivated to make this modification, with a reasonable expectation of success, in order to classify drivable and non-drivable vectors in a portion of the vehicle environment.
Kim, as modified, does not explicitly disclose, based on the hyperplane in the vector space includes: determining, by the processor, determine that the input data is necessary for the learning of the recognition logic when the vector in the intermediate stage is located in a same direction with respect to the hyperplane in the vector space; and determining, by the processor, determine that the input data is not necessary for the learning of the recognition logic when the vector in the intermediate stage is located in a different direction with respect to the hyperplane in the vector space.
However, in the same field of endeavor, Zadeh discloses, based on the hyperplane in the vector space includes: determining, by the processor, determine that the input data is necessary for the learning of the recognition logic when the vector in the intermediate stage is located in a same direction with respect to the hyperplane in the vector space (Zadeh: [3008]; [3010]; [3021-3025]); and determining, by the processor, determine that the input data is not necessary for the learning of the recognition logic when the vector in the intermediate stage is located in a different direction with respect to the hyperplane in the vector space (Zadeh: [3008]; [3010]; [3021-3025]), for the benefit of separating and classifying collected data to estimate sample errors.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by modified Kim to include the updating conditions taught by Zadeh. One of ordinary skill in the art would have been motivated to make this modification, with a reasonable expectation of success, in order to separate and classify collected data to estimate sample errors.
REGARDING CLAIM 16, Kim, as modified, remains as applied above to claim 14, and further, Kim also discloses, determining, by the processor, whether the acquired input data is necessary for the learning of the recognition logic, through the learning model based on the acquired input data and a result of applying the acquired input data to the recognition logic (Kim: [0054]; [0077]; [0179]; [0183]; [0197]; [0204]).
The examiner respectfully submits that determining values and improvements (used to update the learning model) is per se determining "whether the acquired input data is necessary for the learning of the recognition logic".
REGARDING CLAIM 17, Kim, as modified, remains as applied above to claim 16, and further, Kim also discloses, the result of applying the acquired input data to the recognition logic includes: at least one of information about a 2D location of a surrounding object of the autonomous vehicle, information about a 3D location of the surrounding object, a type of the surrounding object, or reliability (Kim: [0111-0112]; [ABS]; [0010]; [0159-0169]).
REGARDING CLAIM 19, Kim, as modified, remains as applied above to claim 14, and further, Kim also discloses, determining, by the processor, whether the acquired input data is necessary for the learning of the recognition logic, through the ANN-based learning model (Kim: [0054]; [0077]; [0179]; [0183]; [0204]) including at least one of one or more convolutional neural networks, batch normalization, or an activation layer (Kim: [0026-0027]).
REGARDING CLAIM 20, Kim, as modified, remains as applied above to claim 14, and further, Kim also discloses, determining, by the processor, whether the acquired input data is necessary for the learning of the recognition logic, based on whether a result value output through the learning model exceeds a predetermined threshold value (Kim: [0054]; [0077]; [0179]).
Claims 4 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Kim (US 20200042832 A1) in view of Ma (US 20210081724 A1) and Zadeh (US 20220121884 A1) as applied to claims 1 and 14 above, and further in view of Karpathy (US 20210271259 A1).
REGARDING CLAIM 4, Kim, as modified, remains as applied above to claim 1, and further, Kim also discloses, a communication device configured to communicate with a server (Kim: [0049]), wherein the processor is configured to: transmit the input data stored in the storage to the server through the communication device (Kim: [0049]; [0068]).
Kim, as modified, does not explicitly disclose, when a predetermined data transmission condition is satisfied.
However, in the same field of endeavor, Karpathy discloses, when a predetermined data transmission condition is satisfied (Karpathy: [0027-0028]), for the benefit of meeting a minimum requirement for determining a classifier score for classification training by the server (managing training data).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by the modified Kim to include data transmission based upon sample size, as taught by Karpathy. One of ordinary skill in the art would have been motivated to make this modification, with a reasonable expectation of success, in order to meet a minimum requirement for determining a classifier score for classification training by the server (managing training data).
REGARDING CLAIM 15, Kim, as modified, remains as applied above to claim 14, and further, Kim also discloses, transmitting, by the processor, the input data stored in the storage to a server through a communication device communicating with the server (Kim: [0049]; [0068]).
Kim, as modified, does not explicitly disclose, when a predetermined data transmission condition is satisfied.
However, in the same field of endeavor, Karpathy discloses, when a predetermined data transmission condition is satisfied (Karpathy: [0027-0028]), motivation addressed (supra).
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Kim (US 20200042832 A1) in view of Ma (US 20210081724 A1), Zadeh (US 20220121884 A1), and Karpathy (US 20210271259 A1) as applied to claim 4 above, and further in view of Nakagawa (US 20220105829 A1).
REGARDING CLAIM 5, Kim, as modified, remains as applied above to claim 4, and further, Kim, as modified, does not explicitly disclose, the data transmission condition includes: at least one of a condition that the autonomous vehicle is charged, or a condition that the autonomous vehicle is parked in a garage.
However, in the same field of endeavor, Nakagawa discloses, the data transmission condition includes: at least one of a condition that the autonomous vehicle is charged, or a condition that the autonomous vehicle is parked in a garage (Nakagawa: [0008]; [0129]), for the benefit of estimating a future degradation state (state of health (SOH)) of a driving battery from an estimation map from which the future SOH of a battery under a specific usage state can be estimated, and of receiving usage and charging instructions for vehicle control from the server.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by the modified Kim to include SOH for a server to determine usage instructions, as taught by Nakagawa. One of ordinary skill in the art would have been motivated to make this modification, with a reasonable expectation of success, in order to estimate a future degradation state (state of health (SOH)) of a driving battery from an estimation map from which the future SOH of a battery under a specific usage state can be estimated, and to receive usage and charging instructions for vehicle control from the server.
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Kim (US 20200042832 A1) in view of Ma (US 20210081724 A1) and Zadeh (US 20220121884 A1) as applied to claim 8 above, and further in view of Ishibashi (US 20190278080 A1).
REGARDING CLAIM 9, Kim, as modified, remains as applied above to claim 8, and further, Ma discloses, the information about the 2D location of the surrounding object includes: information about location coordinates of a bounding box of the surrounding object (Ma: [FIG. 3(320)(362)]).
However, should it be found that Ma does not disclose information about the 2D location, in the same field of endeavor, Ishibashi discloses, the information about the 2D location of the surrounding object includes: information about location coordinates of a bounding box of the surrounding object (Ishibashi: [0051]), for the benefit of mapping obstacles (positions and shapes) of respective portions of a road on which the vehicle is running, and determining respective running lanes and transitions of the road for a prescribed area ahead of the vehicle in its running direction.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by the modified Kim to include protected regions around an object/obstacle, as taught by Ishibashi. One of ordinary skill in the art would have been motivated to make this modification, with a reasonable expectation of success, in order to map obstacles (positions and shapes) of respective portions of a road on which the vehicle is running, and to determine respective running lanes and transitions of the road for a prescribed area ahead of the vehicle in its running direction.
Response to Arguments
Applicant’s arguments, received 08-29-2025 and beginning on page 9, with respect to the rejection of claim 1 under 35 USC §101, abstract idea, have been fully considered and are persuasive. The rejection of claim 1 under 35 USC §101 has been withdrawn.
Applicant's arguments filed 08-29-2025, beginning on page 13, have been fully considered but are not persuasive. The applicant contends, to the examiner’s best understanding, that Kim (US 20200042832 A1) in view of Ma (US 20210081724 A1) does not explicitly disclose determining whether the data is necessary for learning depending on whether the vector in the intermediate stage of the input data evaluation is located in a same direction or a different direction with respect to the hyperplane in the vector space. Specifically, Kim in view of Ma fails to disclose:
determine that the input data is necessary for the learning of the recognition logic when the vector in the intermediate stage is located in a same direction with respect to the hyperplane in the vector space; and
determine that the input data is not necessary for the learning of the recognition logic when the vector in the intermediate stage is located in a different direction with respect to the hyperplane in the vector space.
However, Kim (US 20200042832 A1) in view of Ma (US 20210081724 A1) is not relied upon for the contested limitations identified in the applicant’s arguments. Because the combination of Kim and Ma is not relied upon for the matter specifically challenged, the applicant’s arguments are considered moot with respect to the new ground of rejection.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AARRON SANTOS whose telephone number is (571)272-5288. The examiner can normally be reached Monday - Friday: 8:00am - 4:30pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, ANGELA ORTIZ can be reached at (571) 272-1206. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/A.S./Examiner, Art Unit 3663
/ANGELA Y ORTIZ/Supervisory Patent Examiner, Art Unit 3663