DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 01/29/2026 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Response to Arguments
Applicant's arguments filed with respect to the rejections of claims 1-20 under 35 U.S.C. §101 have been fully considered but are not persuasive. Examiner notes that, in order for an improvement to integrate a judicial exception into a practical application, the judicial exception alone cannot provide the improvement. See MPEP 2106.05(a). Applicant's cited improvements to the field of truck load identification are provided entirely by the judicial exception.
Applicant’s arguments with respect to the rejections of claims 1-20 under 35 U.S.C. §103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of further limiting amendments made to the claims, changing the scope of the claimed invention.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
101 Analysis – Step 1
Independent claims 1, 11, and 20 are directed to a method, device, and non-transitory storage medium, respectively, for truck load identification. Therefore, the independent claims are within at least one of the four statutory categories.
101 Analysis – Step 2A, Prong I
Regarding Prong I of the Step 2A analysis in the 2019 PEG, the claims are to be analyzed to determine whether they recite subject matter that falls within one of the following groups of abstract ideas: a) mathematical concepts, b) certain methods of organizing human activity, and/or c) mental processes.
Independent claim 1 includes limitations that recite an abstract idea (emphasized below) and will be used as a representative claim for the remainder of the 101 rejection. The other analogous independent claims, claims 11 and 20, are rejected for the same reasons as the representative claim 1 as discussed here. Claim 1 recites:
A truck load identification method, comprising:
obtaining a to-be-identified image;
in response to a truck being identified in the to-be-identified image, identifying a plurality of key points of a cargo area of the truck in the to-be-identified image;
obtaining a cargo area map comprising only the cargo area based on the plurality of key points;
obtaining a cargo attribute of the cargo area by identifying the cargo area map, wherein the cargo attribute of the cargo area comprises: Unloading, Striped texture, Regular block texture, and Irregular block texture;
and in response to determining that a current state of the truck is a preset state based on the cargo attribute of the cargo area, conducting a warning prompt.
The examiner submits that the foregoing bolded limitation(s) constitute a “mental process” because, under its broadest reasonable interpretation, the claim covers performance of the limitations in the human mind. For example, the steps of identifying a truck, identifying a plurality of key points, obtaining a cargo area map, obtaining a cargo attribute, and determining a current state of the truck, in the context of this claim, encompass a person looking at data collected (received, detected, etc.) and forming a simple judgment (determination, analysis, comparison, etc.) either mentally or using pen and paper. Accordingly, the claim recites at least one abstract idea. The Examiner notes that under MPEP 2106.04(a)(2)(III), the courts consider a mental process (thinking) that "can be performed in the human mind, or by a human using a pen and paper" to be an abstract idea. CyberSource Corp. v. Retail Decisions, Inc., 654 F.3d 1366, 1372, 99 USPQ2d 1690, 1695 (Fed. Cir. 2011). As the Federal Circuit explained, "methods which can be performed mentally, or which are the equivalent of human mental work, are unpatentable abstract ideas—the 'basic tools of scientific and technological work' that are open to all." 654 F.3d at 1371, 99 USPQ2d at 1694 (citing Gottschalk v. Benson, 409 U.S. 63, 175 USPQ 673 (1972)). See also Mayo Collaborative Servs. v. Prometheus Labs. Inc., 566 U.S. 66, 71, 101 USPQ2d 1961, 1965 (2012) ("‘[M]ental processes[] and abstract intellectual concepts are not patentable, as they are the basic tools of scientific and technological work’" (quoting Benson, 409 U.S. at 67, 175 USPQ at 675)); Parker v. Flook, 437 U.S. 584, 589, 198 USPQ 193, 197 (1978) (same).
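For illustration of the recited sequence of steps only, the claimed method can be sketched as the following minimal example. All function names, return values, and the stub logic are hypothetical placeholders supplied by the editor; they are not drawn from Applicant's disclosure or any cited reference.

```python
# Hypothetical sketch of the recited steps; every helper here is a stub.

def identify_truck(image):
    """Step: a truck is identified in the to-be-identified image."""
    return True

def identify_key_points(image):
    """Step: identify a plurality of key points of the cargo area."""
    return [(0, 0), (4, 0), (4, 2), (0, 2)]

def crop_cargo_area(image, key_points):
    """Step: obtain a cargo area map comprising only the cargo area."""
    xs = [p[0] for p in key_points]
    ys = [p[1] for p in key_points]
    return ("cargo_area_map", min(xs), min(ys), max(xs), max(ys))

def classify_cargo(cargo_map):
    """Step: obtain a cargo attribute (one of the four recited values)."""
    return "Unloading"

def run(image, preset_state="Unloading"):
    if not identify_truck(image):
        return None
    key_points = identify_key_points(image)
    cargo_map = crop_cargo_area(image, key_points)
    attribute = classify_cargo(cargo_map)
    # Step: if the current state matches a preset state, warn.
    if attribute == preset_state:
        return "warning prompt"
    return "no warning"

print(run(image=object()))  # → warning prompt
```

As the sketch shows, each recited step reduces to an observation or judgment about image content, which is the basis for the mental-process characterization above.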
101 Analysis – Step 2A, Prong II
Regarding Prong II of the Step 2A analysis in the 2019 PEG, the claims are to be analyzed to determine whether the claim, as a whole, integrates the abstract idea into a practical application. As noted in the 2019 PEG, it must be determined whether any additional elements in the claim beyond the abstract idea integrate the exception into a practical application in a manner that imposes a meaningful limit on the judicial exception. The courts have indicated that additional elements merely using a computer to implement an abstract idea, adding insignificant extra-solution activity, or generally linking use of a judicial exception to a particular technological environment or field of use do not integrate a judicial exception into a “practical application.”
In the present case, the additional limitations beyond the above-noted abstract idea are as follows (where the underlined portions are the “additional limitations” while the bolded portions continue to represent the “abstract idea”):
A truck load identification method, comprising:
obtaining a to-be-identified image;
in response to a truck being identified in the to-be-identified image, identifying a plurality of key points of a cargo area of the truck in the to-be-identified image;
obtaining a cargo area map comprising only the cargo area based on the plurality of key points;
obtaining a cargo attribute of the cargo area by identifying the cargo area map, wherein the cargo attribute of the cargo area comprises: Unloading, Striped texture, Regular block texture, and Irregular block texture;
and in response to determining that a current state of the truck is a preset state based on the cargo attribute of the cargo area, conducting a warning prompt.
For the following reason(s), the examiner submits that the above identified additional limitations do not integrate the above-noted abstract idea into a practical application.
Regarding the additional limitations above, the examiner submits that these limitations are insignificant extra-solution activities that merely use a computer (processor) to perform the process. In particular, the step of obtaining an image is recited at a high level of generality (i.e., as a general means of receiving information for use in the determining and other steps) and amounts to no more than mere data gathering necessary to perform the abstract idea, which is a form of insignificant extra-solution activity. The step of conducting a warning prompt is also recited at a high level of generality and amounts to no more than mere post-solution activity, which is a form of insignificant extra-solution activity. Lastly, claims 1, 11, and 20 further recite an electronic device, a processor, a memory, a communication circuit, and a non-transitory computer-readable storage medium. These limitations merely describe how to generally “apply” the otherwise mental judgments in a generic or general-purpose computing environment. See Alice Corp. Pty. Ltd. v. CLS Bank Int'l, 573 U.S. 208, 223 (2014) (“[T]he mere recitation of a generic computer cannot transform a patent-ineligible abstract idea into a patent-eligible invention.”). The device(s) and processor(s) are recited at a high level of generality and merely automate the steps.
Thus, taken alone, the additional elements do not integrate the abstract idea into a practical application. Further, looking at the additional limitation(s) as an ordered combination or as a whole, the limitation(s) add nothing that is not already present when looking at the elements taken individually. For instance, there is no indication that the additional elements, when considered as a whole, reflect an improvement in the functioning of a computer or an improvement to another technology or technical field, apply or use the above-noted judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition, implement or use the above-noted judicial exception with a particular machine or manufacture that is integral to the claim, effect a transformation or reduction of a particular article to a different state or thing, or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is not more than a drafting effort designed to monopolize the exception (MPEP § 2106.05). Accordingly, the additional limitation(s) do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
101 Analysis – Step 2B
Regarding Step 2B of the 2019 PEG, representative independent claim 1 does not include additional elements (considered both individually and as an ordered combination) that are sufficient to amount to significantly more than the judicial exception, for reasons similar to those discussed above with respect to determining that the claim does not integrate the abstract idea into a practical application. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using a processor to perform the steps amounts to nothing more than applying the exception using a generic computer component. Generally applying an exception using a generic computer component cannot provide an inventive concept. And, as discussed above, the remaining additional limitations are insignificant extra-solution activities.
The additional limitations of obtaining an image and conducting a warning prompt are well-understood, routine, and conventional activity because the specification does not provide any indication that the image is anything other than conventional image data, nor that the warning prompt is anything other than a conventional output means. MPEP 2106.05(d)(II), and the cases cited therein, including Intellectual Ventures I, LLC v. Symantec Corp., 838 F.3d 1307, 1321 (Fed. Cir. 2016), TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610 (Fed. Cir. 2016), and OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363 (Fed. Cir. 2015), indicate that mere collection or receipt of data over a network is a well-understood, routine, and conventional function when it is claimed in a merely generic manner. Hence, the claim is not patent eligible.
Dependent claims 2-10 and 12-19 do not recite any further limitations that cause the claim(s) to be patent eligible. Rather, the limitations of dependent claims are directed toward additional aspects of the abstract idea (mental processes or mathematical concepts) and/or additional elements that do not integrate the judicial exception into a practical application. Therefore, dependent claims 2-10 and 12-19 are not patent eligible under the same rationale as provided for in the rejection of claim 1.
Therefore, claims 1-20 are ineligible under 35 U.S.C. §101.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 6-7, 10-13, 16-17, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over US 20120114181 A1, filed 11/01/2011, hereinafter “Borthwick”, in view of US 20190286153 A1, filed 03/15/2019, hereinafter “Rankawat”, further in view of US 20160189360 A1, with an earliest priority date of 12/30/2014, hereinafter “Kang”.
Regarding claim 1, Borthwick teaches a truck load identification method. See at least [0056], [0072], and figures 2-3.
comprising: obtaining a to-be-identified image. See at least [0056] and figure 2, step 210, wherein a stereo image of a truck is obtained.
in response to a truck being identified in the to-be-identified image, identifying a plurality of key points of a cargo area of the truck in the to-be-identified image. See at least [0067]-[0070], [0072], figures 6A-6C, and figure 7, wherein a truck is identified in the stereo image. A number of key model points are identified via a PnP method based on the identified truck.
obtaining a cargo area map comprising only the cargo area based on the plurality of key points. See at least [0056], [0068]-[0070], [0072], [0091], and figures 6C and 9A-C, wherein the identified model points are used to converge a truck model with the image data, resulting in a cargo area map comprising only the cargo area.
obtaining a cargo attribute of the cargo area by identifying the cargo area map. See at least [0072]-[0073] and figures 10A-B, wherein a cargo attribute of the cargo area is obtained based on the determined model map.
wherein the cargo attribute of the cargo area comprises: Unloading. See at least [0091] and figure 8A, wherein the cargo area is identified as empty. Per [0028] of Applicant’s specification, a truck with no cargo on it is attributed as “Unloading”.
and in response to determining that a current state of the truck is a preset state based on the cargo attribute of the cargo area, conducting a prompt. See at least [0075]-[0077], wherein feedback prompts are provided based on the truck being in a preset state based on the identified cargo attribute (e.g., if a truck’s cargo is loaded off-center).
Borthwick remains silent on the cargo attributes including Striped texture, Regular block texture, and Irregular block texture, and conducting a warning prompt. As discussed above, Borthwick is directed towards providing monitoring and feedback to an operator of the vehicle, rather than a warning.
Rankawat teaches conducting a warning prompt. See at least [0219]-[0221], wherein a warning is output based on the results of image recognition.
One having ordinary skill in the art, before the effective filing date of the claimed invention, would have found it obvious to modify Borthwick with Rankawat’s technique of conducting a warning prompt. It would have been obvious to modify because doing so enables vehicles to perform safe navigation while taking into consideration different boundaries, as recognized by Rankawat (see at least [0006]-[0009]).
Kang teaches Striped texture, Regular block texture, and Irregular block texture. See at least [0052]-[0054] and figure 6, steps S64-S65, wherein cargo features are extracted from an image of the cargo region based on textural features of the image. A stack mode is predicted based on the features. See at least [0049] and figure 3, wherein the stack modes include a first stack mode (striped texture), a second stack mode (irregular texture), a third stack mode (regular block texture) and a fourth stack mode (regular block texture).
One having ordinary skill in the art, before the effective filing date of the claimed invention, would have found it obvious to further modify Borthwick with Kang’s technique of identifying cargo attributes based on a cargo having a striped texture, regular block texture, and irregular block texture. It would have been obvious to modify because doing so enables improved efficiency and accuracy for cargo inspection, as recognized by Kang (see at least [0003] and [0024]-[0025]).
Regarding claim 2, Borthwick, Rankawat, and Kang in combination teach all of the limitations of claim 2 as discussed above, and Borthwick additionally teaches wherein the cargo area comprises a plurality of level sub-areas, and the cargo area map comprises a plurality of level sub-area maps. See at least [0067] and figures 6A-C, 8A-C, and 9A-C, wherein the cargo area comprises a plurality of level sub-areas (planes 510, 512, 514, 516 and 518), and the cargo area map comprises a plurality of level sub-area maps corresponding to the same planes. Additionally, see at least [0072]-[0073] and figures 10A-C, wherein the cargo area and corresponding map is divided into a grid.
the obtaining the cargo area map comprising only the cargo area based on the plurality of key points comprises: obtaining the plurality of level sub-area maps each of which comprises only one corresponding level sub-area based on the plurality of key points. See at least [0067]-[0069] and figures 6A-C and 8A-C, wherein the different planes of the cargo area are used to create different portions of the model map. The sub-area maps shown in figures 6A-6C correspond to planes 510 and 518 of the truck. Additionally, see at least [0072]-[0073] and figures 10A-C, wherein the cargo area and corresponding map is divided into a grid.
Regarding claim 3, Borthwick, Rankawat, and Kang in combination teach all of the limitations of claim 2 as discussed above, and Borthwick additionally teaches wherein obtaining the cargo attribute of the cargo area by identifying the cargo area map comprises: obtaining cargo attributes of the plurality of level sub-areas by identifying each of the plurality of level sub-area maps; obtaining the cargo attribute of the cargo area based on the cargo attributes of the plurality of level sub-areas. See at least [0072]-[0073] and figures 10A-C, wherein the cargo area is divided into a grid, and a payload volume attribute is calculated for each sub-area. The total payload volume attribute for the whole cargo area is determined based on the individual attributes calculated for each sub-area.
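For illustration of the per-sub-area aggregation discussed above only, the following toy example shows per-cell attributes combined into a whole-area attribute, in the manner of a grid-based payload volume. The grid values and the assumed cell area are hypothetical figures supplied by the editor, not data from Borthwick.

```python
# Hypothetical grid of per-cell cargo heights over the cargo area (metres).
grid_heights = [
    [0.2, 0.5, 0.4],
    [0.3, 0.6, 0.5],
]
cell_area = 1.0  # assumed footprint of each grid cell (square metres)

# Per-sub-area attribute: volume of cargo above each grid cell.
cell_volumes = [h * cell_area for row in grid_heights for h in row]

# Whole-area attribute: total payload volume from the per-cell attributes.
total_volume = sum(cell_volumes)
print(round(total_volume, 2))  # → 2.5
```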
Regarding claim 6, Borthwick, Rankawat, and Kang in combination teach all of the limitations of claim 1 as discussed above, and Borthwick remains silent on wherein identifying the plurality of key points of the cargo area of the truck in the to-be-identified image comprises: obtaining the plurality of key points of the cargo area of the truck by identifying the to-be-identified image through an identification module of a first neural network; the obtaining the cargo attribute of the cargo area by identifying the cargo area map comprises: obtaining the cargo attribute of the cargo area by identifying the cargo area map through a classification module of the first neural network connected in series with the identification module.
Rankawat teaches wherein identifying the plurality of key points of the cargo area of the truck in the to-be-identified image comprises: obtaining the plurality of key points of the cargo area of the truck by identifying the to-be-identified image through an identification module of a first neural network. See at least [0100] and figure 1B, wherein the image identification is performed through an identification module (128) of a first neural network (104A). The identification module 128 takes the input data and outputs boundary points, or key points.
the obtaining the cargo attribute of the cargo area by identifying the cargo area map comprises: obtaining the cargo attribute of the cargo area by identifying the cargo area map through a classification module of the first neural network connected in series with the identification module. See at least [0100] and figure 1B, wherein classification is performed by classification module (130) of the first neural network 104A, which outputs classification labels based on the input data.
One having ordinary skill in the art, before the effective filing date of the claimed invention, would have found it obvious to modify Borthwick with Rankawat’s technique of using an identification module and a classification module, respectively, to obtain key points and classifications of input data. It would have been obvious to modify because doing so enables vehicles to perform safe navigation while taking into consideration different boundaries, as recognized by Rankawat (see at least [0006]-[0009]).
Regarding claim 7, Borthwick, Rankawat, and Kang in combination teach all of the limitations of claim 6 as discussed above, and Borthwick additionally teaches obtaining a sample image, wherein the sample image comprises a truck. See at least [0056] and figure 2, step 210, wherein a stereo image of a truck is obtained.
predicting a plurality of key points of a cargo area of the truck in the sample image by inputting the sample image to the identification module. See at least [0067]-[0070], [0072], figures 6A-6C, and figure 7, wherein a truck is identified in the stereo image. A number of key model points are identified via a PnP method based on the identified truck.
obtaining a cargo area map comprising only the cargo area based on the predicted plurality of key points. See at least [0056], [0068]-[0070], [0072], [0091], and figures 6C and 9A-C, wherein the identified model points are used to converge a truck model with the image data, resulting in a cargo area map comprising only the cargo area.
predicting a cargo attribute of the cargo area by inputting the cargo area map to the classification module. See at least [0072]-[0073] and figures 10A-B, wherein a cargo attribute of the cargo area is obtained based on the determined model map.
Borthwick remains silent on obtaining a first loss function value based on a prediction result of the identification module; obtaining a second loss function value based on a prediction result of the classification module; obtaining a total loss function value based on the first loss function value and the second loss function value; reducing the total loss function value by updating parameters of the identification module and the classification module; and repeatedly performing operations from the predicting the plurality of key points of the cargo area of the truck in the sample image by inputting the sample image to the identification module to the reducing the total loss function value by updating parameters of the identification module and the classification module, until a preset condition for stopping training is met.
Rankawat teaches obtaining a first loss function value based on a prediction result of the identification module. See at least [0125], wherein a first loss function value (L1 loss) is obtained from a first loss function (eq. 13) as a result of the identified key points.
obtaining a second loss function value based on a prediction result of the classification module. See at least [0126], wherein a second loss function value (cross entropy loss) is obtained from a second loss function (eq. 14) as a result of the classified class labels.
obtaining a total loss function value based on the first loss function value and the second loss function value. See at least [0127], wherein a weighted combination of the first loss and the second loss are used to obtain a final total loss.
reducing the total loss function value by updating parameters of the identification module and the classification module, and repeatedly performing operations from the predicting the plurality of key points of the cargo area of the truck in the sample image by inputting the sample image to the identification module to the reducing the total loss function value by updating parameters of the identification module and the classification module, until a preset condition for stopping training is met. See at least [0123]-[0124] and [0129], wherein the training process updates parameters (weights and biases) of the first neural network 104 to reduce the total loss, and reiterates this process until the trained parameters meet an optimum condition.
One having ordinary skill in the art, before the effective filing date of the claimed invention, would have found it obvious to modify Borthwick with Rankawat’s technique of obtaining a first loss function value, a second loss function value, a total loss function value, and reiteratively training the model by updating parameters to minimize loss until the model reaches an optimum condition. It would have been obvious to modify because doing so enables vehicles to perform safe navigation while taking into consideration different boundaries, as recognized by Rankawat (see at least [0006]-[0009]).
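For illustration of the two-loss training scheme discussed above only, the following toy example combines a key-point loss and a classification cross-entropy loss into a weighted total and reduces it by iteratively updating parameters. The linear "modules", dimensions, weights, and stopping condition are editor-supplied stand-ins, not the networks or equations of record in Rankawat.

```python
import math
import random

random.seed(0)
x = 1.0                    # stand-in image feature
kp_true = [0.5, -0.3]      # ground-truth key points
label = 1                  # ground-truth cargo-attribute class

# Toy linear "identification module" and "classification module".
w_id = [random.uniform(-0.1, 0.1) for _ in range(2)]
w_cls = [[random.uniform(-0.1, 0.1) for _ in range(2)] for _ in range(2)]
alpha, beta, lr = 1.0, 0.5, 0.05  # loss weights and learning rate (assumed)

history = []
for _ in range(300):  # stand-in for "until a preset condition is met"
    kp = [w * x for w in w_id]                                # predict key points
    logits = [sum(w_cls[c][j] * kp[j] for j in range(2)) for c in range(2)]
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    probs = [e / sum(exps) for e in exps]

    l1 = sum(abs(kp[j] - kp_true[j]) for j in range(2)) / 2   # first loss value
    ce = -math.log(probs[label])                              # second loss value
    history.append(alpha * l1 + beta * ce)                    # weighted total

    # Hand-derived gradients for this toy model.
    d_logits = [probs[c] - (1.0 if c == label else 0.0) for c in range(2)]
    d_kp = [alpha * math.copysign(1.0, kp[j] - kp_true[j]) / 2
            + beta * sum(w_cls[c][j] * d_logits[c] for c in range(2))
            for j in range(2)]
    for c in range(2):                    # update classification module
        for j in range(2):
            w_cls[c][j] -= lr * beta * d_logits[c] * kp[j]
    for j in range(2):                    # update identification module
        w_id[j] -= lr * d_kp[j] * x

print(f"total loss {history[0]:.3f} -> {history[-1]:.3f}")
```

The total loss falls across iterations, matching the claimed pattern of repeatedly predicting, computing the weighted total loss, and updating both modules' parameters.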
Regarding claim 10, Borthwick, Rankawat, and Kang in combination teach all of the limitations of claim 7 as discussed above, and Borthwick remains silent on wherein the obtaining the total loss function value based on the first loss function value and the second loss function value comprises: obtaining the total loss function value by weighted summing the first loss function value and the second loss function value.
Rankawat teaches wherein the obtaining the total loss function value based on the first loss function value and the second loss function value comprises: obtaining the total loss function value by weighted summing the first loss function value and the second loss function value. See at least [0127], wherein the total final loss is obtained by a weighted combination of the first loss function value and the second loss function value.
One having ordinary skill in the art, before the effective filing date of the claimed invention, would have found it obvious to modify Borthwick with Rankawat’s technique of obtaining a total loss value by a weighted combination of the first and second loss function values. It would have been obvious to modify because doing so enables vehicles to perform safe navigation while taking into consideration different boundaries, as recognized by Rankawat (see at least [0006]-[0009]).
Regarding claim 11, Borthwick teaches an electronic device, comprising: a processor, a memory and a communication circuit; wherein the processor is respectively coupled to the memory and the communication circuit; during operation, the processor controls the processor itself, the memory, and the communication circuit. See at least [0013], wherein the system is implemented via a computing system in communication with at least one stereo camera.
to implement: obtaining a to-be-identified image. See at least [0056] and figure 2, step 210, wherein a stereo image of a truck is obtained.
in response to a truck being identified in the to-be-identified image, identifying a plurality of key points of a cargo area of the truck in the to-be-identified image. See at least [0067]-[0070], [0072], figures 6A-6C, and figure 7, wherein a truck is identified in the stereo image. A number of key model points are identified via a PnP method based on the identified truck.
obtaining a cargo area map comprising only the cargo area based on the plurality of key points. See at least [0056], [0068]-[0070], [0072], [0091], and figures 6C and 9A-C, wherein the identified model points are used to converge a truck model with the image data, resulting in a cargo area map comprising only the cargo area.
obtaining a cargo attribute of the cargo area by identifying the cargo area map. See at least [0072]-[0073] and figures 10A-B, wherein a cargo attribute of the cargo area is obtained based on the determined model map.
wherein the cargo attribute of the cargo area comprises: Unloading. See at least [0091] and figure 8A, wherein the cargo area is identified as empty. Per [0028] of Applicant’s specification, a truck with no cargo on it is attributed as “Unloading”.
and in response to determining that a current state of the truck is a preset state based on the cargo attribute of the cargo area, conducting a prompt. See at least [0075]-[0077], wherein feedback prompts are provided based on the truck being in a preset state based on the identified cargo attribute (e.g., if a truck’s cargo is loaded off-center).
Borthwick remains silent on the cargo attributes including Striped texture, Regular block texture, and Irregular block texture, and conducting a warning prompt. As discussed above, Borthwick is directed towards providing monitoring and feedback to an operator of the vehicle, rather than a warning.
Rankawat teaches conducting a warning prompt. See at least [0219]-[0221], wherein a warning is output based on the results of image recognition.
One having ordinary skill in the art, before the effective filing date of the claimed invention, would have found it obvious to modify Borthwick with Rankawat’s technique of conducting a warning prompt. It would have been obvious to modify because doing so enables vehicles to perform safe navigation while taking into consideration different boundaries, as recognized by Rankawat (see at least [0006]-[0009]).
Kang teaches Striped texture, Regular block texture, and Irregular block texture. See at least [0052]-[0054] and figure 6, steps S64-S65, wherein cargo features are extracted from an image of the cargo region based on textural features of the image. A stack mode is predicted based on the features. See at least [0049] and figure 3, wherein the stack modes include a first stack mode (striped texture), a second stack mode (irregular texture), a third stack mode (regular block texture) and a fourth stack mode (regular block texture).
One having ordinary skill in the art, before the effective filing date of the claimed invention, would have found it obvious to further modify Borthwick with Kang’s technique of identifying cargo attributes based on a cargo having a striped texture, regular block texture, and irregular block texture. It would have been obvious to modify because doing so enables improved efficiency and accuracy for cargo inspection, as recognized by Kang (see at least [0003] and [0024]-[0025]).
Regarding claim 12, Borthwick, Rankawat, and Kang in combination teach all of the limitations of claim 11 as discussed above, and Borthwick additionally teaches wherein the cargo area comprises a plurality of level sub-areas, and the cargo area map comprises a plurality of level sub-area maps. See at least [0067] and figures 6A-C, 8A-C, and 9A-C, wherein the cargo area comprises a plurality of level sub-areas (planes 510, 512, 514, 516 and 518), and the cargo area map comprises a plurality of level sub-area maps corresponding to the same planes. Additionally, see at least [0072]-[0073] and figures 10A-C, wherein the cargo area and corresponding map is divided into a grid.
the obtaining the cargo area map comprising only the cargo area based on the plurality of key points comprises: obtaining the plurality of level sub-area maps each of which comprises only one corresponding level sub-area based on the plurality of key points. See at least [0067]-[0069] and figures 6A-C and 8A-C, wherein the different planes of the cargo area are used to create different portions of the model map. The sub-area maps shown in figures 6A-6C correspond to planes 510 and 518 of the truck. Additionally, see at least [0072]-[0073] and figures 10A-C, wherein the cargo area and corresponding map is divided into a grid.
Regarding claim 13, Borthwick, Rankawat, and Kang in combination teach all of the limitations of claim 12 as discussed above, and Borthwick additionally teaches wherein obtaining the cargo attribute of the cargo area by identifying the cargo area map comprises: obtaining cargo attributes of the plurality of level sub-areas by identifying each of the plurality of level sub-area maps; obtaining the cargo attribute of the cargo area based on the cargo attributes of the plurality of level sub-areas. See at least [0072]-[0073] and figures 10A-C, wherein the cargo area is divided into a grid, and a payload volume attribute is calculated for each sub-area. The total payload volume attribute for the whole cargo area is determined based on the individual attributes calculated for each sub-area.
Regarding claim 16, Borthwick, Rankawat, and Kang in combination teach all of the limitations of claim 11 as discussed above, and Borthwick remains silent on wherein identifying the plurality of key points of the cargo area of the truck in the to-be-identified image comprises: obtaining the plurality of key points of the cargo area of the truck by identifying the to-be-identified image through an identification module of a first neural network; the obtaining the cargo attribute of the cargo area by identifying the cargo area map comprises: obtaining the cargo attribute of the cargo area by identifying the cargo area map through a classification module of the first neural network connected in series with the identification module.
Rankawat teaches wherein identifying the plurality of key points of the cargo area of the truck in the to-be-identified image comprises: obtaining the plurality of key points of the cargo area of the truck by identifying the to-be-identified image through an identification module of a first neural network. See at least [0100] and figure 1B, wherein the image identification is performed through an identification module (128) of a first neural network (104A). The identification module 128 takes the input data and outputs boundary points, or key points.
the obtaining the cargo attribute of the cargo area by identifying the cargo area map comprises: obtaining the cargo attribute of the cargo area by identifying the cargo area map through a classification module of the first neural network connected in series with the identification module. See at least [0100] and figure 1B, wherein classification is performed by classification module (130) of the first neural network 104A, which outputs classification labels based on the input data.
One having ordinary skill in the art, before the effective filing date of the claimed invention, would have found it obvious to modify Borthwick with Rankawat’s technique of using an identification module and a classification module, respectively, to obtain key points and classifications of input data. It would have been obvious to modify because doing so enables vehicles to perform safe navigation while taking into consideration different boundaries, as recognized by Rankawat (see at least [0006]-[0009]).
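The series arrangement mapped above (an identification module whose key-point output feeds a downstream classification module) can be sketched structurally as follows. The module internals are placeholders for illustration only; they do not reproduce Rankawat's actual network layers.

```python
# Structural sketch of the series arrangement mapped from Rankawat
# fig. 1B: an identification module outputs key points from the input
# image, and a classification module connected in series consumes the
# map derived from those points. Module bodies are placeholders.

def identification_module(image):
    """Placeholder key-point head: returns corner points of the area."""
    h, w = len(image), len(image[0])
    return [(0, 0), (0, w - 1), (h - 1, 0), (h - 1, w - 1)]

def crop_to_area(image, points):
    """Derive a map comprising only the region spanned by the points."""
    rows = [p[0] for p in points]
    cols = [p[1] for p in points]
    return [row[min(cols):max(cols) + 1]
            for row in image[min(rows):max(rows) + 1]]

def classification_module(area_map):
    """Placeholder classifier: labels the area by mean intensity."""
    vals = [v for row in area_map for v in row]
    return "loaded" if sum(vals) / len(vals) > 0.5 else "unloading"

image = [[0.9, 0.8], [0.7, 0.6]]
points = identification_module(image)                       # first module
label = classification_module(crop_to_area(image, points))  # in series
print(label)  # mean intensity 0.75 > 0.5 -> "loaded"
```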
Regarding claim 17, Borthwick, Rankawat, and Kang in combination teach all of the limitations of claim 16 as discussed above, and Borthwick additionally teaches obtaining a sample image, wherein the sample image comprises a truck. See at least [0056] and figure 2, step 210, wherein a stereo image of a truck is obtained.
predicting a plurality of key points of a cargo area of the truck in the sample image by inputting the sample image to the identification module. See at least [0067]-[0070], [0072], figures 6A-6C, and figure 7, wherein a truck is identified in the stereo image. A number of key model points are identified via a perspective-n-point (PnP) method based on the identified truck.
obtaining a cargo area map comprising only the cargo area based on the predicted plurality of key points. See at least [0056], [0068]-[0070], [0072], [0091], and figures 6C and 9A-C, wherein the identified model points are used to converge a truck model with the image data, resulting in a cargo area map comprising only the cargo area.
predicting a cargo attribute of the cargo area by inputting the cargo area map to the classification module. See at least [0072]-[0073] and figures 10A-B, wherein a cargo attribute of the cargo area is obtained based on the determined model map.
Borthwick remains silent on obtaining a first loss function value based on a prediction result of the identification module; obtaining a second loss function value based on a prediction result of the classification module; obtaining a total loss function value based on the first loss function value and the second loss function value; reducing the total loss function value by updating parameters of the identification module and the classification module; and repeatedly performing operations from the predicting the plurality of key points of the cargo area of the truck in the sample image by inputting the sample image to the identification module to the reducing the total loss function value by updating parameters of the identification module and the classification module, until a preset condition for stopping training is met.
Rankawat teaches obtaining a first loss function value based on a prediction result of the identification module. See at least [0125], wherein a first loss function value (L1 loss) is obtained from a first loss function (eq. 13) as a result of the identified key points.
obtaining a second loss function value based on a prediction result of the classification module. See at least [0126], wherein a second loss function value (cross entropy loss) is obtained from a second loss function (eq. 14) as a result of the classified class labels.
obtaining a total loss function value based on the first loss function value and the second loss function value. See at least [0127], wherein a weighted combination of the first loss and the second loss are used to obtain a final total loss.
reducing the total loss function value by updating parameters of the identification module and the classification module, and repeatedly performing operations from the predicting the plurality of key points of the cargo area of the truck in the sample image by inputting the sample image to the identification module to the reducing the total loss function value by updating parameters of the identification module and the classification module, until a preset condition for stopping training is met. See at least [0123]-[0124] and [0129], wherein the training process updates parameters (weights and biases) of the first neural network 104 to reduce the total loss, and reiterates this process until the trained parameters meet an optimum condition.
One having ordinary skill in the art, before the effective filing date of the claimed invention, would have found it obvious to modify Borthwick with Rankawat’s technique of obtaining a first loss function value, a second loss function value, a total loss function value, and reiteratively training the model by updating parameters to minimize loss until the model reaches an optimum condition. It would have been obvious to modify because doing so enables vehicles to perform safe navigation while taking into consideration different boundaries, as recognized by Rankawat (see at least [0006]-[0009]).
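The training scheme mapped above (an L1 loss on the key-point predictions, a cross-entropy loss on the classification predictions, and a weighted total that is repeatedly reduced) can be sketched as follows. The loss weights and input values are illustrative assumptions, not values from Rankawat.

```python
# Hedged sketch of the training scheme mapped from Rankawat
# [0123]-[0129]: a first (L1) loss on predicted key points, a second
# (cross-entropy) loss on predicted class probabilities, and a weighted
# total that training repeatedly reduces until a stopping condition is
# met. Weights w1, w2 and all inputs below are illustrative only.
import math

def l1_loss(pred_points, true_points):
    """First loss function value: L1 distance over predicted key points."""
    return sum(abs(p - t) for p, t in zip(pred_points, true_points))

def cross_entropy_loss(pred_probs, true_class):
    """Second loss function value: negative log-probability of the truth."""
    return -math.log(pred_probs[true_class])

def total_loss(pred_points, true_points, pred_probs, true_class,
               w1=1.0, w2=1.0):
    """Weighted combination of the first and second loss values."""
    return (w1 * l1_loss(pred_points, true_points)
            + w2 * cross_entropy_loss(pred_probs, true_class))

# One evaluation of the total loss; training would repeat the cycle
# (predict, compute total loss, update parameters) until a preset
# stopping condition is met.
loss = total_loss(pred_points=[0.9, 2.1], true_points=[1.0, 2.0],
                  pred_probs=[0.7, 0.3], true_class=0)
print(round(loss, 4))  # 0.2 + (-ln 0.7) = 0.5567
```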
Regarding claim 20, Borthwick teaches a non-transitory computer-readable storage medium, storing a computer program, wherein the computer program is capable of being executed by a processor. See at least [0013], wherein the system is implemented via a computing system in communication with at least one stereo camera.
to implement: obtaining a to-be-identified image. See at least [0056] and figure 2, step 210, wherein a stereo image of a truck is obtained.
in response to a truck being identified in the to-be-identified image, identifying a plurality of key points of a cargo area of the truck in the to-be-identified image. See at least [0067]-[0070], [0072], figures 6A-6C, and figure 7, wherein a truck is identified in the stereo image. A number of key model points are identified via a PnP method based on the identified truck.
obtaining a cargo area map comprising only the cargo area based on the plurality of key points. See at least [0056], [0068]-[0070], [0072], [0091], and figures 6C and 9A-C, wherein the identified model points are used to converge a truck model with the image data, resulting in a cargo area map comprising only the cargo area.
obtaining a cargo attribute of the cargo area by identifying the cargo area map. See at least [0072]-[0073] and figures 10A-B, wherein a cargo attribute of the cargo area is obtained based on the determined model map.
wherein the cargo attribute of the cargo area comprises: Unloading. See at least [0091] and figure 8A, wherein the cargo area is identified as empty. Per [0028] of Applicant’s specification, a truck with no cargo on it is attributed as “Unloading”.
and in response to determining that a current state of the truck is a preset state based on the cargo attribute of the cargo area, conducting a prompt. See at least [0075]-[0077], wherein feedback prompts are provided based on the truck being in a preset state based on the identified cargo attribute (e.g., if a truck’s cargo is loaded off-center).
Borthwick remains silent on the cargo attributes including Striped texture, Regular block texture, and Irregular block texture, and conducting a warning prompt. As discussed above, Borthwick is directed towards providing monitoring and feedback to an operator of the vehicle, rather than a warning.
Rankawat teaches conducting a warning prompt. See at least [0219]-[0221], wherein a warning is output based on the results of image recognition.
One having ordinary skill in the art, before the effective filing date of the claimed invention, would have found it obvious to modify Borthwick with Rankawat’s technique of conducting a warning prompt. It would have been obvious to modify because doing so enables vehicles to perform safe navigation while taking into consideration different boundaries, as recognized by Rankawat (see at least [0006]-[0009]).
Kang teaches Striped texture, Regular block texture, and Irregular block texture. See at least [0052]-[0054] and figure 6, steps S64-S65, wherein cargo features are extracted from an image of the cargo region based on textural features of the image. A stack mode is predicted based on the features. See at least [0049] and figure 3, wherein the stack modes include a first stack mode (striped texture), a second stack mode (irregular texture), a third stack mode (regular block texture) and a fourth stack mode (regular block texture).
One having ordinary skill in the art, before the effective filing date of the claimed invention, would have found it obvious to further modify Borthwick with Kang’s technique of identifying cargo attributes based on a cargo having a striped texture, regular block texture, and irregular block texture. It would have been obvious to modify because doing so enables improved efficiency and accuracy for cargo inspection, as recognized by Kang (see at least [0003] and [0024]-[0025]).
Claims 4 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Borthwick, Rankawat, and Kang as applied to claims above, and further in view of US 20210383549 A1, with an earliest priority date of 06/04/2019, hereinafter “Wang”, and further in view of US 20080294401 A1, filed 05/19/2008, hereinafter “Tsin”.
Regarding claim 4, Borthwick, Rankawat, and Kang in combination teach all of the limitations of claim 2 as discussed above, and Borthwick remains silent on wherein the obtaining the plurality of level sub-area maps each of which comprises only one corresponding level sub-area based on the plurality of key points comprises: generating a key point mask map for each of the plurality of key points; obtaining a level sub-area mask map of a corresponding level sub-area by adding the key point mask maps of key points at the corresponding level sub-area; generating a mask map matrix by: traversing all pixels in the level sub-area mask map, configuring pixels inside a largest circumscribed polygon connected by the key points to 1; and configuring pixels outside the largest circumscribed polygon connected by the key points to 0; and obtaining the level sub-area map by performing matrix point multiplication on: the to-be-identified image or a feature map of the to-be-identified image; and the mask map matrix.
Wang teaches wherein the obtaining the plurality of level sub-area maps each of which comprises only one corresponding level sub-area based on the plurality of key points comprises: generating a key point mask map for each of the plurality of key points. See at least [0069] and figure 8, wherein a heat map is generated based on the identified key points.
obtaining a level sub-area mask map of a corresponding level sub-area by adding the key point mask maps of key points at the corresponding level sub-area. See at least [0041] and [0053], wherein a sub-area mask map is obtained by segmenting the map into a region formed by adding the key points A-D associated with the region. See at least [0049], wherein the segmented region is a mask image.
generating a mask map matrix by: traversing all pixels in the level sub-area mask map, configuring pixels inside a largest circumscribed polygon connected by the key points to 1; and configuring pixels outside the largest circumscribed polygon connected by the key points to 0. See at least [0044] and [0096], wherein a mask map matrix is generated based on traversing each pixel in the image segmentation region, labeling each pixel as 1 or 0 based on whether the pixel is within an inner portion of the polygon indicated by the key points. The mask map is generated in the format of the original image. See at least [0069]-[0070], wherein the input image is in the form of a matrix.
and obtaining the level sub-area map based on: the to-be-identified image or a feature map of the to-be-identified image; and the mask map matrix. See at least [0096] and figure 10, wherein the output image map is obtained by combining the mask map information with the input to-be-identified image. Additionally, see at least [0121] and figure 12, wherein the mask map is superimposed on the to-be-identified image to obtain the level sub-area map.
One having ordinary skill in the art, before the effective filing date of the claimed invention, would have found it obvious to further modify Borthwick with Wang’s technique of generating a key point mask map for each of the plurality of key points, adding the key point mask maps to obtain a level sub-area mask map, generating a mask map matrix by traversing all pixels in the level sub-area mask map and configuring pixels based on their position within a polygon, and obtaining the level sub-area map by the image data and the mask map. It would have been obvious to modify because doing so enables image recognition with more accurate image segmentation, as recognized by Wang (see at least [0009]-[0010]).
Tsin teaches performing matrix point multiplication. See at least [0031], wherein multiplication is performed between a base shape and a parameter. See at least [0073], wherein the base shape is represented as a matrix.
One having ordinary skill in the art, before the effective filing date of the claimed invention, would have found it obvious to further modify Borthwick with Tsin’s technique of performing matrix point multiplication. It would have been obvious to modify because doing so enables accuracy in applying 3D shape models to 2D image data of consumer vehicles, as recognized by Tsin (see at least [0005]-[0011]).
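The combined mapping above (a binary mask map matrix built by traversing every pixel, then element-wise multiplication against the image) can be sketched as follows. For brevity the inside/outside test below uses the key points' bounding rectangle rather than a general circumscribed polygon; a point-in-polygon test would replace it. All values are illustrative, not drawn from Wang or Tsin.

```python
# Hedged sketch of the claim-4 mapping: a binary mask map matrix is
# built by traversing all pixels (1 inside the region connected by the
# key points, 0 outside), and the level sub-area map is the
# element-wise ("point") product of the image and the mask. The
# inside/outside test here is simplified to a bounding rectangle.

def mask_matrix(height, width, key_points):
    """Traverse all pixels; 1 inside the region spanned by key points."""
    rows = [r for r, _ in key_points]
    cols = [c for _, c in key_points]
    return [[1 if min(rows) <= r <= max(rows) and min(cols) <= c <= max(cols)
             else 0
             for c in range(width)]
            for r in range(height)]

def point_multiply(image, mask):
    """Element-wise (matrix point) multiplication of image and mask."""
    return [[p * m for p, m in zip(img_row, mask_row)]
            for img_row, mask_row in zip(image, mask)]

image = [[5, 5, 5],
         [5, 5, 5],
         [5, 5, 5]]
mask = mask_matrix(3, 3, key_points=[(0, 0), (0, 1), (1, 0), (1, 1)])
print(point_multiply(image, mask))
# [[5, 5, 0], [5, 5, 0], [0, 0, 0]] -- only the sub-area survives
```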
Regarding claim 14, Borthwick, Rankawat, and Kang in combination teach all of the limitations of claim 12 as discussed above, and Borthwick remains silent on wherein the obtaining the plurality of level sub-area maps each of which comprises only one corresponding level sub-area based on the plurality of key points comprises: generating a key point mask map for each of the plurality of key points; obtaining a level sub-area mask map of a corresponding level sub-area by adding the key point mask maps of key points at the corresponding level sub-area; generating a mask map matrix by: traversing all pixels in the level sub-area mask map, configuring pixels inside a largest circumscribed polygon connected by the key points to 1; and configuring pixels outside the largest circumscribed polygon connected by the key points to 0; and obtaining the level sub-area map by performing matrix point multiplication on: the to-be-identified image or a feature map of the to-be-identified image; and the mask map matrix.
Wang teaches wherein the obtaining the plurality of level sub-area maps each of which comprises only one corresponding level sub-area based on the plurality of key points comprises: generating a key point mask map for each of the plurality of key points. See at least [0069] and figure 8, wherein a heat map is generated based on the identified key points.
obtaining a level sub-area mask map of a corresponding level sub-area by adding the key point mask maps of key points at the corresponding level sub-area. See at least [0041] and [0053], wherein a sub-area mask map is obtained by segmenting the map into a region formed by adding the key points A-D associated with the region. See at least [0049], wherein the segmented region is a mask image.
generating a mask map matrix by: traversing all pixels in the level sub-area mask map, configuring pixels inside a largest circumscribed polygon connected by the key points to 1; and configuring pixels outside the largest circumscribed polygon connected by the key points to 0. See at least [0044] and [0096], wherein a mask map matrix is generated based on traversing each pixel in the image segmentation region, labeling each pixel as 1 or 0 based on whether the pixel is within an inner portion of the polygon indicated by the key points. The mask map is generated in the format of the original image. See at least [0069]-[0070], wherein the input image is in the form of a matrix.
and obtaining the level sub-area map based on: the to-be-identified image or a feature map of the to-be-identified image; and the mask map matrix. See at least [0096] and figure 10, wherein the output image map is obtained by combining the mask map information with the input to-be-identified image. Additionally, see at least [0121] and figure 12, wherein the mask map is superimposed on the to-be-identified image to obtain the level sub-area map.
One having ordinary skill in the art, before the effective filing date of the claimed invention, would have found it obvious to further modify Borthwick with Wang’s technique of generating a key point mask map for each of the plurality of key points, adding the key point mask maps to obtain a level sub-area mask map, generating a mask map matrix by traversing all pixels in the level sub-area mask map and configuring pixels based on their position within a polygon, and obtaining the level sub-area map by the image data and the mask map. It would have been obvious to modify because doing so enables image recognition with more accurate image segmentation, as recognized by Wang (see at least [0009]-[0010]).
Tsin teaches performing matrix point multiplication. See at least [0031], wherein multiplication is performed between a base shape and a parameter. See at least [0073], wherein the base shape is represented as a matrix.
One having ordinary skill in the art, before the effective filing date of the claimed invention, would have found it obvious to further modify Borthwick with Tsin’s technique of performing matrix point multiplication. It would have been obvious to modify because doing so enables accuracy in applying 3D shape models to 2D image data of consumer vehicles, as recognized by Tsin (see at least [0005]-[0011]).
Claims 5 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Borthwick, Rankawat, and Kang as applied to claims above, and further in view of Tsin.
Regarding claim 5, Borthwick, Rankawat, and Kang in combination teach all of the limitations of claim 2 as discussed above, and Borthwick additionally teaches wherein the plurality of level sub-areas comprise at least a first-level sub-area, a second-level sub-area, and a third-level sub-area; the first-level sub-area is defined by geometric corner points of a bottom surface of an actual cargo area of the truck; the third-level sub-area is defined by geometric corner points of a top surface of the actual cargo area of the truck. See at least [0067], figure 5, and figures 8A-C, wherein the plurality of sub-areas includes a first sub-area 514, representing a bottom surface of the truck cargo area, a second sub-area 512/510/516, and a third sub-area 518 representing a top surface of the actual cargo area of the truck.
Borthwick remains silent on the second-level sub-area is defined by geometric corner points of an upper boundary of a fence of the truck.
Tsin teaches the second-level sub-area is defined by geometric corner points of an upper boundary of a fence of the truck. See at least [0062] and figure 1, wherein the sub-areas include a segment 8 representing the horizontal plane defined by the upper surface of a vehicle’s trunk.
One having ordinary skill in the art, before the effective filing date of the claimed invention, would have found it obvious to further modify Borthwick with Tsin’s technique of identifying the upper boundary of a vehicle’s cargo area. It would have been obvious to modify because doing so enables accuracy in applying 3D shape models to 2D image data of consumer vehicles, as recognized by Tsin (see at least [0005]-[0011]).
Regarding claim 15, Borthwick, Rankawat, and Kang in combination teach all of the limitations of claim 12 as discussed above, and Borthwick additionally teaches wherein the plurality of level sub-areas comprise at least a first-level sub-area, a second-level sub-area, and a third-level sub-area; the first-level sub-area is defined by geometric corner points of a bottom surface of an actual cargo area of the truck; the third-level sub-area is defined by geometric corner points of a top surface of the actual cargo area of the truck. See at least [0067], figure 5, and figures 8A-C, wherein the plurality of sub-areas includes a first sub-area 514, representing a bottom surface of the truck cargo area, a second sub-area 512/510/516, and a third sub-area 518 representing a top surface of the actual cargo area of the truck.
Borthwick remains silent on the second-level sub-area is defined by geometric corner points of an upper boundary of a fence of the truck.
Tsin teaches the second-level sub-area is defined by geometric corner points of an upper boundary of a fence of the truck. See at least [0062] and figure 1, wherein the sub-areas include a segment 8 representing the horizontal plane defined by the upper surface of a vehicle’s trunk.
One having ordinary skill in the art, before the effective filing date of the claimed invention, would have found it obvious to further modify Borthwick with Tsin’s technique of identifying the upper boundary of a vehicle’s cargo area. It would have been obvious to modify because doing so enables accuracy in applying 3D shape models to 2D image data of consumer vehicles, as recognized by Tsin (see at least [0005]-[0011]).
Claims 8-9 and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Borthwick, Rankawat, and Kang as applied to claims above, and further in view of US 20200265220 A1, with an earliest priority date of 02/19/2019, hereinafter “Zhang”.
Regarding claim 8, Borthwick, Rankawat, and Kang in combination teach all of the limitations of claim 7 as discussed above, and Borthwick additionally teaches wherein the cargo area comprises a plurality of level sub-areas. See at least [0067], figure 5, and figures 8A-C, wherein the plurality of sub-areas includes a first sub-area 514, representing a bottom surface of the truck cargo area, a second sub-area 512/510/516, and a third sub-area 518 representing a top surface of the actual cargo area of the truck.
the obtaining the cargo area map comprising only the cargo area based on the predicted plurality of key points comprises: obtaining the plurality of level sub-area maps each of which comprises only one level sub-area based on the predicted plurality of key points. See at least [0068], [0090], and figure 6, wherein the level sub-area maps are obtained based on the plurality of identified key points.
Borthwick remains silent on and the classification module comprises a plurality of classification units connected in parallel; the number of the plurality of classification units is the same as the number of the plurality of level sub-areas; the predicting the cargo attribute of the cargo area by inputting the cargo area map to the classification module comprises: inputting the plurality of level sub-area maps to the plurality of classification units respectively to predict cargo attributes of the level sub-areas; obtaining the second loss function value based on the prediction result of the classification module comprises: obtaining the second loss function value based on prediction results of the plurality of classification units.
Rankawat teaches obtaining the second loss function value based on the prediction result of the classification module comprises: obtaining the second loss function value based on prediction results of the plurality of classification units. See at least [0126], wherein the second loss function value is obtained based on the identified class labels. Additionally, see at least [0098]-[0099], wherein the identified class labels are provided by a plurality of classification units 130.
One having ordinary skill in the art, before the effective filing date of the claimed invention, would have found it obvious to modify Borthwick with Rankawat’s technique of obtaining a second loss function value based on prediction results of the plurality of classification units. It would have been obvious to modify because doing so enables vehicles to perform safe navigation while taking into consideration different boundaries, as recognized by Rankawat (see at least [0006]-[0009]).
Zhang teaches and the classification module comprises a plurality of classification units connected in parallel; the number of the plurality of classification units is the same as the number of the plurality of level sub-areas; the predicting the cargo attribute of the cargo area by inputting the cargo area map to the classification module comprises: inputting the plurality of level sub-area maps to the plurality of classification units respectively to predict cargo attributes of the level sub-areas. See at least [0032] and figure 2, wherein the feature extraction module comprises a number of units connected in parallel equal to the number of sub-areas, and classification is performed by inputting each sub-area of the input image to its corresponding feature extraction unit.
One having ordinary skill in the art, before the effective filing date of the claimed invention, would have found it obvious to further modify Borthwick with Zhang’s technique of a classification module comprising a number of classification units connected in parallel, equivalent to the number of sub-areas, which performs classification by inputting the corresponding sub-area maps to each of the classification units. It would have been obvious to modify because doing so enables training of classification models in view of sparse, incongruent, or poor quality data, as recognized by Zhang (see at least [0003]-[0005]).
Regarding claim 9, Borthwick, Rankawat, Kang, and Zhang in combination teach all of the limitations of claim 8 as discussed above, and Borthwick remains silent on wherein the obtaining the second loss function value based on the prediction results of the plurality of classification units comprises: obtaining a plurality of second sub-loss function values based on the prediction results of the plurality of classification units; and obtaining the second loss function value by performing a weighted summation on the plurality of second sub-loss function values.
Rankawat teaches wherein the obtaining the second loss function value based on the prediction results of the plurality of classification units comprises: obtaining a plurality of second sub-loss function values based on the prediction results of the plurality of classification units; and obtaining the second loss function value by performing a weighted summation on the plurality of second sub-loss function values. See at least [0126] and equation 14, wherein the second loss value is obtained by a summation of the plurality of obtained class labels. Additionally, see at least [0098]-[0099], wherein the identified class labels are provided by a plurality of classification units 130.
One having ordinary skill in the art, before the effective filing date of the claimed invention, would have found it obvious to modify Borthwick with Rankawat’s technique of obtaining a second loss function value based on prediction results of the plurality of classification units. It would have been obvious to modify because doing so enables vehicles to perform safe navigation while taking into consideration different boundaries, as recognized by Rankawat (see at least [0006]-[0009]).
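The claim-9 mapping above (each classification unit yielding a second sub-loss value, combined by weighted summation into the second loss function value) can be sketched as follows. The per-unit weights and predictions are illustrative assumptions, not values from Rankawat.

```python
# Hedged sketch of the claim-9 mapping: each parallel classification
# unit yields a second sub-loss value (here cross-entropy), and the
# second loss function value is their weighted summation. Weights and
# predictions below are illustrative only.
import math

def sub_loss(pred_probs, true_class):
    """Second sub-loss value for one classification unit's prediction."""
    return -math.log(pred_probs[true_class])

def second_loss(unit_predictions, weights):
    """Weighted summation over the plurality of second sub-loss values."""
    return sum(w * sub_loss(probs, cls)
               for w, (probs, cls) in zip(weights, unit_predictions))

# Three parallel classification units (one per level sub-area).
predictions = [([0.8, 0.2], 0),   # unit 1: correct class with p = 0.8
               ([0.5, 0.5], 1),   # unit 2: correct class with p = 0.5
               ([0.9, 0.1], 0)]   # unit 3: correct class with p = 0.9
loss = second_loss(predictions, weights=[0.5, 0.3, 0.2])
print(round(loss, 4))
```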
Regarding claim 18, Borthwick, Rankawat, and Kang in combination teach all of the limitations of claim 17 as discussed above, and Borthwick additionally teaches wherein the cargo area comprises a plurality of level sub-areas. See at least [0067], figure 5, and figures 8A-C, wherein the plurality of sub-areas includes a first sub-area 514, representing a bottom surface of the truck cargo area, a second sub-area 512/510/516, and a third sub-area 518 representing a top surface of the actual cargo area of the truck.
the obtaining the cargo area map comprising only the cargo area based on the predicted plurality of key points comprises: obtaining the plurality of level sub-area maps each of which comprises only one level sub-area based on the predicted plurality of key points. See at least [0068], [0090], and figure 6, wherein the level sub-area maps are obtained based on the plurality of identified key points.
Borthwick remains silent on and the classification module comprises a plurality of classification units connected in parallel; the number of the plurality of classification units is the same as the number of the plurality of level sub-areas; the predicting the cargo attribute of the cargo area by inputting the cargo area map to the classification module comprises: inputting the plurality of level sub-area maps to the plurality of classification units respectively to predict cargo attributes of the level sub-areas; obtaining the second loss function value based on the prediction result of the classification module comprises: obtaining the second loss function value based on prediction results of the plurality of classification units.
Rankawat teaches obtaining the second loss function value based on the prediction result of the classification module comprises: obtaining the second loss function value based on prediction results of the plurality of classification units. See at least [0126], wherein the second loss function value is obtained based on the identified class labels. Additionally, see at least [0098]-[0099], wherein the identified class labels are provided by a plurality of classification units 130.
One having ordinary skill in the art, before the effective filing date of the claimed invention, would have found it obvious to modify Borthwick with Rankawat’s technique of obtaining a second loss function value based on prediction results of the plurality of classification units. It would have been obvious to modify because doing so enables vehicles to perform safe navigation while taking into consideration different boundaries, as recognized by Rankawat (see at least [0006]-[0009]).
Zhang teaches and the classification module comprises a plurality of classification units connected in parallel; the number of the plurality of classification units is the same as the number of the plurality of level sub-areas; the predicting the cargo attribute of the cargo area by inputting the cargo area map to the classification module comprises: inputting the plurality of level sub-area maps to the plurality of classification units respectively to predict cargo attributes of the level sub-areas. See at least [0032] and figure 2, wherein the feature extraction module comprises a number of units connected in parallel equal to the number of sub-areas, and classification is performed by inputting each sub-area of the input image to its corresponding feature extraction unit.
One having ordinary skill in the art, before the effective filing date of the claimed invention, would have found it obvious to further modify Borthwick with Zhang’s technique of a classification module comprising a number of classification units connected in parallel, equivalent to the number of sub-areas, which performs classification by inputting the corresponding sub-area maps to each of the classification units. It would have been obvious to modify because doing so enables training of classification models in view of sparse, incongruent, or poor quality data, as recognized by Zhang (see at least [0003]-[0005]).
Regarding claim 19, Borthwick, Rankawat, Kang, and Zhang in combination teach all of the limitations of claim 18 as discussed above, and Borthwick remains silent on wherein the obtaining the second loss function value based on the prediction results of the plurality of classification units comprises: obtaining a plurality of second sub-loss function values based on the prediction results of the plurality of classification units; and obtaining the second loss function value by performing a weighted summation on the plurality of second sub-loss function values.
Rankawat teaches wherein the obtaining the second loss function value based on the prediction results of the plurality of classification units comprises: obtaining a plurality of second sub-loss function values based on the prediction results of the plurality of classification units; and obtaining the second loss function value by performing a weighted summation on the plurality of second sub-loss function values. See at least [0126] and equation 14, wherein the second loss value is obtained by a summation of the plurality of obtained class labels. Additionally, see at least [0098]-[0099], wherein the identified class labels are provided by a plurality of classification units 130.
One having ordinary skill in the art, before the effective filing date of the claimed invention, would have found it obvious to modify Borthwick with Rankawat’s technique of obtaining the second loss function value by performing a weighted summation on a plurality of second sub-loss function values obtained from the prediction results of the plurality of classification units. It would have been obvious to modify because doing so enables vehicles to perform safe navigation while taking into consideration different boundaries, as recognized by Rankawat (see at least [0006]-[0009]).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Selena M. Jin whose telephone number is (408)918-7588. The examiner can normally be reached Monday - Thursday and alternate Fridays, 7:30-4:30 PT.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Faris Almatrahi can be reached at (313) 446-4821. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/S.M.J./ Examiner, Art Unit 3667
/FARIS S ALMATRAHI/ Supervisory Patent Examiner, Art Unit 3667