Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) was submitted on 04/22/2024. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Drawings
The drawings are objected to as failing to comply with 37 CFR 1.84(p)(5) because they do not include the following reference sign(s) mentioned in the description:
#500 in ¶0075, line 1.
#600 in ¶0082, line 1.
#601 in ¶0082, line 1.
Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
The drawings are objected to as failing to comply with 37 CFR 1.84(p)(5) because they include the following reference character(s) not mentioned in the description: #602 in Figure 6. Corrected drawing sheets in compliance with 37 CFR 1.121(d), or amendment to the specification to add the reference character(s) in the description in compliance with 37 CFR 1.121(b) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Specification
The disclosure is objected to because of the following informalities:
Spec., ¶0022, lines 7-8, “additional models may be added the hierarchical structure” appears to be a typo. Examiner suggests “additional models may be added to the hierarchical structure”.
Spec., ¶0044, line 5, “determine with classification model” appears to be a typo. Examiner suggests “determine which classification model”.
Spec., ¶0045, line 6, “other objections” appears to be a typo. Examiner suggests “other objects”.
Appropriate correction is required.
Claim Objections
Claim 15 is objected to because of the following informalities:
Line 3, “routing the retrieve image” appears to be a typo. Examiner suggests “routing the retrieved image”.
Appropriate correction is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 11, and 20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Claims 1, 11, and 20 recite the following, with claim 1 being exemplary:
“(a) receive a plurality of images; and for each image of the plurality of images, the at least one processor is programmed to: (b) retrieve an image of the plurality of images; (c) execute a hierarchy of models with the retrieved image as input; (d) output classification information for the retrieved image based upon the execution; and (e) associate the classification information with the retrieved image.”
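For illustration only, limitations (a)-(e) amount to a generic per-image classification loop. The following Python sketch is not drawn from the application or from any cited reference; every name in it is hypothetical:

```python
def classify_images(images, hierarchy):
    """Hypothetical sketch of limitations (a)-(e); not from the application."""
    annotated = []
    for image in images:                 # (a) receive / (b) retrieve each image
        info = hierarchy(image)          # (c) execute hierarchy with image as input
        annotated.append((image, info))  # (d) output / (e) associate the result
    return annotated

# Example usage with a stand-in "hierarchy" that labels every image "roof":
print(classify_images(["img-1.jpg", "img-2.jpg"], lambda image: ["roof"]))
```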
According to the USPTO guidelines, a claim is directed to non-statutory subject matter if:
STEP 1: the claim does not fall within one of the four statutory categories of invention (process, machine, manufacture or composition of matter), or
STEP 2: the claim recites a judicial exception, e.g. an abstract idea, without reciting additional elements that amount to significantly more than the judicial exception, as determined using the following analysis:
STEP 2A (PRONG 1): Does the claim recite an abstract idea, law of nature, or natural phenomenon?
STEP 2A (PRONG 2): Does the claim recite additional elements that integrate the judicial exception into a practical application?
STEP 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?
Using the two-step inquiry, it is clear that independent claims 1, 11, and 20 are directed to an abstract idea, as shown below:
STEP 1: Do the claims fall within one of the statutory categories? YES. Independent claims 1, 11, and 20 are directed to a system, method, and non-transitory computer readable medium, respectively.
STEP 2A (PRONG 1): Is the claim directed to a law of nature, a natural phenomenon, or an abstract idea? YES. Independent claims 1, 11, and 20 are directed towards a mathematical concept (i.e., an abstract idea).
Regarding claims 1, 11, and 20, limitation (c) recites “execute a hierarchy of models”, which, under broadest reasonable interpretation, means a series of machine learning models. Machine learning models fall under mathematical algorithms and are thus mathematical concepts (see MPEP §2106.04(a)(2)(I)).
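As a generic illustration of this characterization (not an equation from the application), a typical neural classification model reduces to a composition of matrix operations:

```latex
\hat{y} = \arg\max_{k}\, \mathrm{softmax}\!\left( W_2\, \sigma(W_1 x + b_1) + b_2 \right)_k
```

where x is the image feature vector and W1, W2, b1, b2 are learned parameters; the computation is purely mathematical.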
STEP 2A (PRONG 2): Does the claim recite additional elements that integrate the judicial exception into a practical application? NO. Independent claims 1, 11, and 20 do not recite additional elements that integrate the judicial exception into a practical application.
Regarding claims 1, 11, and 20, limitations (a), (b), (d), and (e) are additional elements which, while not necessarily abstract ideas themselves, constitute insignificant extra-solution activity since they are merely data gathering and data output (see MPEP §2106.05(g)). Claims 1, 11, and 20 further recite the additional elements “processor” and “memory device”. Claim 20 further recites “non-transitory computer readable media”. These additional elements are not sufficient to integrate the abstract ideas recited in claims 1, 11, and 20 into a practical application: they amount to mere generic computer elements, and thus to no more than a recitation of the words “apply it” (or an equivalent) or mere instructions to implement an abstract idea or other exception on a computer (see MPEP §2106.05(f)).
STEP 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? NO. Independent claims 1, 11, and 20 do not recite additional elements that amount to significantly more than the judicial exception.
Regarding claims 1, 11, and 20, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because when considered separately and in combination, the above recited additional elements from claim 1 do not add significantly more (also known as an “inventive concept”) to the exception. Rather, the additional elements disclosed above perform well-understood, routine, conventional computer functions as recognized by the court decisions listed in MPEP § 2106.05(d).
Therefore, independent claims 1, 11, and 20 are directed towards an abstract idea without a practical application or significantly more.
Regarding claims 2 and 12, with claim 2 being exemplary: the additional limitations do not integrate the mathematical concept into a practical application or add significantly more to the mathematical concept. The limitation, wherein the at least one processor is further programmed to generate a report for the plurality of images based upon the plurality of associated classification information, falls under data output (see MPEP §2106.05(g)), as the report would be considered an output of data.
Regarding claims 3 and 13, with claim 3 being exemplary: the additional limitations do not integrate the mathematical concept into a practical application or add significantly more to the mathematical concept. The limitation, wherein the hierarchy of models includes a plurality of classification models, each trained to identify one or more items in an image, falls under a mathematical concept (see MPEP §2106.04(a)(2)(I)), as machine learning models and the training of models for automated object classification in images are mathematical algorithms.
Regarding claims 4 and 14, with claim 4 being exemplary: the additional limitations do not integrate the mathematical concept into a practical application or add significantly more to the mathematical concept. The limitation, wherein at least one of the plurality of classification models is trained to identify a material of an item in the image, falls under a mathematical concept (see MPEP §2106.04(a)(2)(I)), as machine learning models and the training of models for automated classification of images are mathematical algorithms.
Regarding claims 5-7 and 15-17, with claim 5 being exemplary: the additional limitations do not integrate the mathematical concept into a practical application or add significantly more to the mathematical concept. The claim recites the following limitations: (a) route the retrieved image to a first classification model of the plurality of classification models in the hierarchy of models; (b) execute the first classification model using the retrieved image as the input; and (c) receive one or more classifications from the first classification model based upon the retrieved image. Limitations (a) and (c) fall under data input and data output (see MPEP §2106.05(g)). Limitation (b) falls under a mathematical concept (see MPEP §2106.04(a)(2)(I)) as machine learning models are mathematical algorithms. Claims 6, 7, 16, and 17 further recite determining a second/third classification model of the plurality of classification models in the hierarchy of models based upon the one or more classifications from the first/second classification model, which falls under a mental process (see MPEP §2106.04(a)(2)(III)), as a human could select a further model based on the output of a previous model (see the illustrative sketch following this paragraph).
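For illustration only, the routing recited in claims 5-7 can be pictured with the following hedged Python sketch; none of these names come from the claims or the cited art:

```python
def run_hierarchy(image, first_model, model_for_label):
    """Illustrative sketch of claims 5-7: execute a first model, then select
    and execute further models based upon its classifications."""
    results = list(first_model(image))           # (b) execute / (c) receive classifications
    for label in list(results):
        next_model = model_for_label.get(label)  # claims 6/7: determine the next model
        if next_model is not None:
            results.extend(next_model(image))    # route the image to the selected model
    return results

# Example: a "vehicle" classification routes the image to a stand-in second model.
print(run_hierarchy("street.jpg", lambda image: ["vehicle"],
                    {"vehicle": lambda image: ["license_plate"]}))
```

The table lookup reflects the mental-process characterization above: selecting the next model from a prior output is a simple selection step.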
Regarding claims 8-10, 18, and 19: the additional limitations do not integrate the mathematical concept into a practical application or add significantly more to the mathematical concept. The limitations (wherein the plurality of images are of a property / include inside and outside images of at least one building on the property / are of an object to be insured) all fall under selecting a data type or source (see MPEP §2106.05(g)).
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-3, 5, 6, 11-13, 15, 16, and 20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Sandoval et al. (US Patent No. 12,288,399) (hereafter, “Sandoval”).
Regarding claim 1, Sandoval discloses a computer system comprising at least one processor in communication with at least one memory device (Col. 4, lines 24-35, processors 52 and data storage 56), wherein the at least one processor is programmed (Col. 15, lines 59-63, processors suitable for the execution of a computer program include, by way of example, special purpose microprocessors. Generally, a processor will receive instructions and data from a read-only memory or a random access memory or both) to: receive a plurality of images (Col. 4, lines 50-56, the camera 110, …, can collect images of the environment in the vicinity of the rig 105. The camera 110 can capture images including, in this example, a distressed child 105A, a suspicious party 105B near the distressed child 105A, and the license plate on a vehicle 107; Col. 6, lines 18-20, Returning to the example 100 illustrated in FIG. 1, image recognition models 120 in the edge platform 101 can process the image data produced by the camera 110. Examiner interprets the images being processed to indicate that they were “received”); and for each image of the plurality of images, the at least one processor is programmed to: retrieve an image of the plurality of images (Col. 6 and Col. 7, provided data, received from edge platform 101; Col. 8, lines 59-61, Sensor data classification models 220 can also provide additional information about the sensor data and include that data in the classification output data 222; processing of individual images from a plurality of images is an inherent feature of the system described in Sandoval); execute a hierarchy of models with the retrieved image as input (Col. 7, line 67, examples of sensors 205 can include cameras; Col. 8, lines 22-26, In addition, the model management engine 210 can use the output of models to determine which sensor data classification models 220 are to be activated. For example, if one model detects a license plate, a face detection model might be activated; Col. 8, lines 35-37, The model management engine 210 can accept sensor data 207 from sensors 205 and direct the sensor data 207 to the appropriate, active sensor data classification models 220; Examiner interprets data from cameras as “image” and the activation of one model depending on the outcome of a first model as “a hierarchy of models”); output classification information for the retrieved image based upon the execution (Col. 8, lines 53-55, each sensor data classification model 220 can be configured to produce a particular type classification output data 222); and associate the classification information with the retrieved image (Col. 8, line 67 – Col. 9, lines 1-3, the classification output data 222 can further include an indication of the sensor data 207, or a subset of the sensor data 207, used to produce the predicted classification. Examiner interprets indicating sensor data used to produce the classification as “associate”).
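The examiner's reading of Sandoval's conditional model activation (Col. 8, lines 22-26) can be pictured with the following hedged sketch. Sandoval discloses no source code; all names and structure below are hypothetical:

```python
def model_management_engine(sensor_data, plate_model, face_model):
    """Hypothetical rendering of Sandoval, Col. 8, lines 22-26: the output of
    one model determines whether a second model is activated."""
    classifications = plate_model(sensor_data)      # first model processes the data
    if "license_plate" in classifications:          # a detection activates a second
        classifications += face_model(sensor_data)  # model, i.e., a hierarchy of models
    return classifications

# Example usage with stand-in detectors:
print(model_management_engine("frame-001",
                              lambda data: ["license_plate"],
                              lambda data: ["face"]))
```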
Regarding claim 2, Sandoval discloses the computer system of Claim 1, wherein the at least one processor is further programmed to generate a report (Col. 4, lines 27-28, process the data to make it recordable, reportable) for the plurality of images based upon the plurality of associated classification information (Col. 10, lines 34-37, the interaction engine 280 can provide results 285, … composite data (e.g., data aggregated from multiple detection data 232)).
Regarding claim 3, Sandoval discloses the computer system of Claim 1, wherein the hierarchy of models (Col. 8, lines 22-26, in addition, the model management engine 210 can use the output of models to determine which sensor data classification models 220 are to be activated. For example, if one model detects a license plate, a face detection model might be activated; Examiner interprets the activation of one model depending on the outcome of a first model as “a hierarchy of models”) includes a plurality of classification models (Col. 8, lines 14-15, such as sensor data classification models 220A, 220B, 220C), each trained to identify one or more items in an image (Col. 6, lines 21-25, the classification output data 125 can include indications of detected elements of the image data. The classification output data 125 can include an indication that a license plate, tail number or face was detected. Examiner considers license plates, tail numbers, and faces as “items”).
Regarding claim 5, Sandoval discloses the computer system of Claim 3, wherein the at least one processor is further programmed to: route the retrieved image (Col. 7, line 67, examples of sensors 205 can include cameras; Col. 8, lines 38-40, the model management engine 210 can direct the sensor data 207 to the active sensor data classification model. Examiner considers sensor data as “the retrieved image”) to a first classification model (Col. 8, lines 25-26, In addition, the model management engine 210 can use the output of models to determine which sensor data classification models 220 are to be activated. For example, if one model detects a license plate, a face detection model might be activated. Examiner considers the model that detects the license plate as the “first classification model”) of the plurality of classification models (Col. 8, lines 14-15, such as sensor data classification models 220A, 220B, 220C) in the hierarchy of models (Col. 8, lines 22-26, in addition, the model management engine 210 can use the output of models to determine which sensor data classification models 220 are to be activated. For example, if one model detects a license plate, a face detection model might be activated; Examiner interprets the activation of one model depending on the outcome of a first model as “a hierarchy of models”); execute the first classification model using the retrieved image as the input (Col. 8, lines 25-26, for example, if one model detects a license plate. For the model management engine to be aware that a license plate was detected, the license plate model must have been executed); and receive one or more classifications from the first classification model based upon the retrieved image (Col. 8, lines 22-26, in addition, the model management engine 210 can use the output of models to determine which sensor data classification models 220 are to be activated. For example, if one model detects a license plate, a face detection model might be activated. Examiner considers the license plate model as the “first classification model”. For the model management engine to be aware that a license plate was detected, the license plate model must have been executed and must have output the classification).
Regarding claim 6, Sandoval discloses the computer system of Claim 5, wherein the at least one processor is further programmed to: determine a second classification model of the plurality of classification models in the hierarchy of models based upon the one or more classifications from the first classification model (Col. 8, lines 25-26, for example, if one model detects a license plate, a face detection model might be activated. Examiner interprets the face detection model as the “second classification model” and the license plate model as the “first classification model”); route the retrieved image to the second classification model (Col. 8, lines 38-40, the model management engine 210 can direct the sensor data 207 to the active sensor data classification model); execute the second classification model using the retrieved image as the input (Col. 8, lines 25-26, for example, if one model detects a license plate, a face detection model might be activated); and receive one or more additional classifications from the second classification model based upon the retrieved image (Col. 9, lines 45-46, the augmentation engine 230 can accept sensor data 207, classification output data 222).
Regarding claim 11, Sandoval discloses a computer-implemented method performed by a hierarchical model image analysis (HMIA) (Examiner is considering “hierarchical model image analysis (HMIA)” as equivalent to a hierarchy of models as stated in ¶0044, lines 2-6, of the specification of the instant application) computer device including at least one processor in communication with at least one memory device (Col. 4, lines 24-35, processors 52 and data storage 56), the method comprising: receiving a plurality of images (Col. 4, lines 50-56, the camera 110, …, can collect images of the environment in the vicinity of the rig 105. The camera 110 can capture images including, in this example, a distressed child 105A, a suspicious party 105B near the distressed child 105A, and the license plate on a vehicle 107; Col. 6, lines 18-20, Returning to the example 100 illustrated in FIG. 1, image recognition models 120 in the edge platform 101 can process the image data produced by the camera 110. Examiner interprets the images being processed to indicate that they were “received”); and for each image of the plurality of images, the method further comprises: retrieving an image of the plurality of images (Col. 6 and Col. 7, provided data, received from edge platform 101; Col. 8, lines 59-61, Sensor data classification models 220 can also provide additional information about the sensor data and include that data in the classification output data 222; processing of individual images from a plurality of images is an inherent feature of the system described in Sandoval); executing a hierarchy of models with the retrieved image as input (Col. 7, line 67, examples of sensors 205 can include cameras; Col. 8, lines 22-26, In addition, the model management engine 210 can use the output of models to determine which sensor data classification models 220 are to be activated. For example, if one model detects a license plate, a face detection model might be activated; Col. 8, lines 35-37, The model management engine 210 can accept sensor data 207 from sensors 205 and direct the sensor data 207 to the appropriate, active sensor data classification models 220; Examiner interprets data from cameras as “image” and the activation of one model depending on the outcome of a first model as “a hierarchy of models”); outputting classification information for the retrieved image based upon the execution (Col. 8, lines 53-55, each sensor data classification model 220 can be configured to produce a particular type classification output data 222); and associating the classification information with the retrieved image (Col. 8, line 67 – Col. 9, lines 1-3, the classification output data 222 can further include an indication of the sensor data 207, or a subset of the sensor data 207, used to produce the predicted classification. Examiner interprets indicating sensor data used to produce the classification as “associate”).
Regarding claim 12 (drawn to a computer-implemented method): the proposed reference of Sandoval explained in the rejection of claim 2 anticipates the steps of claim 12, because these steps occur in the operation of the system claim discussed above. Thus, arguments similar to those presented above for claim 2 are equally applicable to claim 12.
Regarding claim 13 (drawn to a computer-implemented method): the proposed reference of Sandoval explained in the rejection of claim 3 anticipates the steps of claim 13, because these steps occur in the operation of the system claim discussed above. Thus, arguments similar to those presented above for claim 3 are equally applicable to claim 13.
Regarding claim 15 (drawn to a computer-implemented method): the proposed reference of Sandoval explained in the rejection of claim 5 anticipates the steps of claim 15, because these steps occur in the operation of the system claim discussed above. Thus, arguments similar to those presented above for claim 5 are equally applicable to claim 15.
Regarding claim 16 (drawn to a computer-implemented method): the proposed reference of Sandoval explained in the rejection of claim 5 anticipates the steps of claim 16, because these steps occur in the operation of the system claim discussed above. Thus, arguments similar to those presented above for claim 5 are equally applicable to claim 16.
Regarding claim 20, Sandoval discloses at least one non-transitory computer-readable media having computer-executable instructions embodied thereon (Col. 10, lines 57-59, operations of the process 300 can also be implemented as instructions stored on one or more computer readable media which may be non-transitory), wherein when executed by a computing device including at least one processor in communication with at least one memory device (Col. 4, lines 24-35, processors 52 and data storage 56), the computer-executable instructions cause the at least one processor to: receive a plurality of images (Col. 4, lines 50-56, the camera 110, …, can collect images of the environment in the vicinity of the rig 105. The camera 110 can capture images including, in this example, a distressed child 105A, a suspicious party 105B near the distressed child 105A, and the license plate on a vehicle 107; Col. 6, lines 18-20, Returning to the example 100 illustrated in FIG. 1, image recognition models 120 in the edge platform 101 can process the image data produced by the camera 110. Examiner interprets the images being processed to indicate that they were “received”); and for each image of the plurality of images, the at least one processor is programmed to: retrieve an image of the plurality of images (Col. 6 and Col. 7, provided data, received from edge platform 101; Col. 8, lines 59-61, Sensor data classification models 220 can also provide additional information about the sensor data and include that data in the classification output data 222; processing of individual images from a plurality of images is an inherent feature of the system described in Sandoval); execute a hierarchy of models with the retrieved image as input (Col. 7, line 67, examples of sensors 205 can include cameras; Col. 8, lines 22-26, In addition, the model management engine 210 can use the output of models to determine which sensor data classification models 220 are to be activated. For example, if one model detects a license plate, a face detection model might be activated; Col. 8, lines 35-37, The model management engine 210 can accept sensor data 207 from sensors 205 and direct the sensor data 207 to the appropriate, active sensor data classification models 220; Examiner interprets data from cameras as “image” and the activation of one model depending on the outcome of a first model as “a hierarchy of models”); output classification information for the retrieved image based upon the execution (Col. 8, lines 53-55, each sensor data classification model 220 can be configured to produce a particular type classification output data 222); and associate the classification information with the retrieved image (Col. 8, line 67 – Col. 9, lines 1-3, the classification output data 222 can further include an indication of the sensor data 207, or a subset of the sensor data 207, used to produce the predicted classification. Examiner interprets indicating sensor data used to produce the classification as “associate”).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 4 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Sandoval et al. (US Patent No. 12,288,399) (hereafter, “Sandoval”) in view of Frei et al. (US Patent Application Publication No. 2023/0306539) (hereafter, “Frei”).
Regarding claim 4, Sandoval discloses the computer system of Claim 3, wherein at least one of the plurality of classification models (Col. 8, lines 14-15, such as sensor data classification models 220A, 220B, 220C) is [trained to identify a material of an item in the image].
Sandoval fails to disclose trained to identify a material of an item in the image.
However, Frei teaches trained to identify a material of an item in the image (¶0005, the present disclosure relates to computer vision systems… and material recognition (e.g., wood, ceramic, laminate, or the like)).
Both Sandoval and Frei are analogous to the claimed invention because they are both in the field of using machine learning for image classification. It would have been obvious to a person of ordinary skill before the effective filing date of the claimed invention to incorporate the material classification model of Frei into the hierarchical model of Sandoval. The suggestion/motivation for doing so would have been the simple substitution of a materials classifier for an item classifier, as suggested by the variety of engines or classifiers listed by Frei at ¶0021 (a computer vision feature segmentation and material detection engine 18b, a computer vision content feature detection engine 18c, a computer vision hazard detection engine 18d, and a computer vision damage detection engine).
This method of improving Sandoval was within the ordinary ability of one of ordinary skill in the art based on the teachings of Frei.
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to modify Sandoval with the teachings of Frei to obtain the invention as specified in claim 4.
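For illustration, the proposed combination amounts to substituting a material classifier for one classifier node in Sandoval's hierarchy. A minimal sketch under hypothetical names follows (neither Sandoval nor Frei discloses code):

```python
def material_model(image_features):
    """Stand-in for a model trained to identify an item's material; Frei ¶0005
    lists wood, ceramic, and laminate as example material classes."""
    return image_features.get("surface", "unknown")  # a real model would be learned

# Substituted as one node of a Sandoval-style hierarchy keyed by a prior output:
hierarchy = {"item_detected": material_model}
print(hierarchy["item_detected"]({"surface": "wood"}))  # -> wood
```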
Regarding claim 14, Sandoval discloses the computer-implemented method of Claim 13, wherein at least one of the plurality of classification models (Col. 8, lines 14-15, such as sensor data classification models 220A, 220B, 220C) is [trained to identify a material of an item in the image].
Sandoval fails to disclose trained to identify a material of an item in the image.
However, Frei teaches trained to identify a material of an item in the image (¶0005, the present disclosure relates to computer vision systems… and material recognition (e.g., wood, ceramic, laminate, or the like)).
Both Sandoval and Frei are analogous to the claimed invention because they are both in the field of using machine learning for image classification. It would have been obvious to a person of ordinary skill before the effective filing date of the claimed invention to incorporate the material classification model of Frei into the hierarchical model of Sandoval. The suggestion/motivation for doing so would have been the simple substitution of a materials classifier for an item classifier, as suggested by the variety of engines or classifiers listed by Frei at ¶0021 (a computer vision feature segmentation and material detection engine 18b, a computer vision content feature detection engine 18c, a computer vision hazard detection engine 18d, and a computer vision damage detection engine).
This method of improving Sandoval was within the ordinary ability of one of ordinary skill in the art based on the teachings of Frei.
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to modify Sandoval with the teachings of Frei to obtain the invention as specified in claim 14.
Claims 7 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Sandoval et al. (US Patent No. 12,288,399) (hereafter, “Sandoval”) in view of Bufi (US Patent Application Publication No. 2024/0087303).
Regarding claim 7, Sandoval discloses the computer system of Claim 6, wherein the at least one processor is further programmed to: [determine a third classification model] of the plurality of classification models (Col. 8, lines 14-15, such as sensor data classification models 220A, 220B, 220C) in the hierarchy of models (Col. 8, lines 22-26, in addition, the model management engine 210 can use the output of models to determine which sensor data classification models 220 are to be activated. For example, if one model detects a license plate, a face detection model might be activated; Examiner interprets the activation of one model depending on the outcome of a first model as “a hierarchy of models”) [based upon the one or more additional classifications from the second classification model]; route the retrieved image to the [third] classification model (Col. 8, lines 38-40, the model management engine 210 can direct the sensor data 207 to the active sensor data classification model); execute the [third] classification model using the retrieved image as the input (Col. 8, lines 22-24, in addition, the model management engine 210 can use the output of models to determine which sensor data classification models 220 are to be activated); and receive one or more further classifications from the [third] classification model based upon the retrieved image (Col. 9, lines 45-46, the augmentation engine 230 can accept sensor data 207, classification output data 222; Examiner notes that while Sandoval does not explicitly disclose a third model, the actions of routing data, executing, and receiving output are understood to be generally applicable to a third model by a person of ordinary skill in the art).
Sandoval does not disclose determine a third classification model based upon the one or more additional classifications from the second classification model.
However, Bufi teaches determine a third classification model based upon the one or more additional classifications from the second classification model (¶0125, according to second model output data 322b, the model trigger determination module 316 may determine that the inspection image data 320 should subsequently be provided to third model 312c).
Both Sandoval and Bufi are analogous to the claimed invention because they are both in the field of using hierarchical machine learning models. It would have been obvious to a person of ordinary skill before the effective filing date of the claimed invention to incorporate the third model of Bufi into the hierarchical model of Sandoval. The suggestion/motivation for doing so would have been to prevent model drift by implementing specialized models, as suggested by Bufi at ¶0119: this approach advantageously ensures that each of the models 312 is able to perform specialized analysis without “drifting” from that functionality by accommodating further tasks.
This method of improving Sandoval was within the ordinary ability of one of ordinary skill in the art based on the teachings of Bufi.
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to modify Sandoval with the teachings of Bufi to obtain the invention as specified in claim 7.
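The chained determination Bufi describes at ¶0125 (a second model's output selecting a third model) can be sketched as follows. This is a hedged illustration with hypothetical names, not Bufi's implementation:

```python
def chain_models(inspection_image, models, next_index):
    """Hedged sketch of Bufi ¶0125: each model's output determines which
    model, if any, runs next. All names here are hypothetical."""
    outputs = []
    index = 0
    while index is not None:
        output = models[index](inspection_image)
        outputs.append(output)
        index = next_index(index, output)  # e.g., second output triggers third model
    return outputs

# Example: three stand-in models run in sequence, then the chain stops.
stages = [lambda im: "item", lambda im: "damage", lambda im: "severity"]
print(chain_models("roof.jpg", stages, lambda i, out: i + 1 if i < 2 else None))
```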
Regarding claim 17, Sandoval discloses the computer-implemented method of Claim 16, wherein the at least one processor is further programmed to: [determine a third classification model] of the plurality of classification models (Col. 8, lines 14-15, such as sensor data classification models 220A, 220B, 220C) in the hierarchy of models (Col. 8, lines 22-26, in addition, the model management engine 210 can use the output of models to determine which sensor data classification models 220 are to be activated. For example, if one model detects a license plate, a face detection model might be activated; Examiner interprets the activation of one model depending on the outcome of a first model as “a hierarchy of models”) [based upon the one or more additional classifications from the second classification model]; route the retrieved image to the [third] classification model (Col. 8, lines 38-40, the model management engine 210 can direct the sensor data 207 to the active sensor data classification model); execute the [third] classification model using the retrieved image as the input (Col. 8, lines 22-24, in addition, the model management engine 210 can use the output of models to determine which sensor data classification models 220 are to be activated); and receive one or more further classifications from the [third] classification model based upon the retrieved image (Col. 9, lines 45-46, the augmentation engine 230 can accept sensor data 207, classification output data 222; Examiner notes that while Sandoval does not explicitly disclose a third model, the actions of routing data, executing, and receiving output are understood to be generally applicable to a third model by a person of ordinary skill in the art).
Sandoval does not disclose determine a third classification model based upon the one or more additional classifications from the second classification model.
However, Bufi teaches determine a third classification model based upon the one or more additional classifications from the second classification model (¶0125, according to second model output data 322b, the model trigger determination module 316 may determine that the inspection image data 320 should subsequently be provided to third model 312c).
Both Sandoval and Bufi are analogous to the claimed invention because they are both in the field of using hierarchical machine learning models. It would have been obvious to a person of ordinary skill before the effective filing date of the claimed invention to incorporate the third model of Bufi into the hierarchical model of Sandoval. The suggestion/motivation for doing so would have been to prevent model drift by implementing specialized models, as suggested by Bufi at ¶0119: this approach advantageously ensures that each of the models 312 is able to perform specialized analysis without “drifting” from that functionality by accommodating further tasks.
This method of improving Sandoval was within the ordinary ability of one of ordinary skill in the art based on the teachings of Bufi.
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to modify Sandoval with the teachings of Bufi to obtain the invention as specified in claim 17.
Claims 8, 9, 18 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Sandoval et al. (US Patent No. 12,288,399) (hereafter, “Sandoval”) in view of Wen et al. (US Patent No. 11,308,365) (hereafter, “Wen”).
Regarding claim 8, Sandoval discloses the computer system of Claim 1.
Sandoval fails to disclose wherein the plurality of images are of a property.
However, Wen discloses wherein the plurality of images are of a property (Col. 3, lines 36-38, for example, remote system 106 may be configured to manage listings and images of items, such as real [estate] property offered for sale or rent. Col. 9, lines 11-13, illustratively, the image may be an image of a hotel room to be classified by the image classification system 118).
Both Sandoval and Wen are analogous to the claimed invention because Sandoval is in the field of hierarchical image classifiers and Wen applies image classification to images of property. It would have been obvious to a person of ordinary skill before the effective filing date of the claimed invention to incorporate the images of property from Wen into the hierarchical model of Sandoval. The suggestion/motivation for doing so would have been simple substitution (see MPEP §2143(I)(A)) of a set of images of property for a set of police camera images. One of ordinary skill in the art could have performed the substitution with predictable results.
This method of improving Sandoval was within the ordinary ability of one of ordinary skill in the art based on the teachings of Wen.
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to modify Sandoval with the teachings of Wen to obtain the invention as specified in claim 8.
Regarding claim 9, Sandoval in view of Wen discloses the computer system of Claim 8.
Sandoval fails to disclose wherein the plurality of images include inside and outside images of at least one building on the property.
However, Wen discloses wherein the plurality of images include inside (Col. 9, lines 11-13, illustratively, the image may be an image of a hotel room to be classified by the image classification system 118. Examiner interprets the hotel room to be “inside”) and outside images (Col. 10, lines 4-7, for example, subcategory labels 654 may include … outdoor pool, etc. Since the model can classify outdoor pools, it is implied that images of outdoor pools are provided, and such images are thus interpreted as “outside”) of at least one building on the property (Examiner is interpreting the hotel as the “building on the property”).
Both Sandoval and Wen are analogous to the claimed invention because Sandoval is in the field of hierarchical image classifiers and Wen applies image classification to images of property. It would have been obvious to a person of ordinary skill before the effective filing date of the claimed invention to incorporate the images of the inside and outside of property from Wen into the hierarchical model of Sandoval. The suggestion/motivation for doing so would have been simple substitution of a set of images of the inside and outside of a building property for a set of police camera images. One of ordinary skill in the art could have performed the substitution with predictable results.
This method of improving Sandoval was within the ordinary ability of one of ordinary skill in the art based on the teachings of Wen.
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to modify Sandoval with the teachings of Wen to obtain the invention as specified in claim 9.
Regarding claim 18, Sandoval discloses the computer-implemented method of Claim 11.
Sandoval fails to disclose wherein the plurality of images are of a property.
However, Wen discloses wherein the plurality of images are of a property (Col. 3, lines 36-38, for example, remote system 106 may be configured to manage listings and images of items, such as real [estate] property offered for sale or rent. Col. 9, lines 11-13, illustratively, the image may be an image of a hotel room to be classified by the image classification system 118).
Both Sandoval and Wen are analogous to the claimed invention because Sandoval is in the field of hierarchical image classifiers and Wen applies image classification to images of property. It would have been obvious to a person of ordinary skill before the effective filing date of the claimed invention to incorporate the images of property from Wen into the hierarchical model of Sandoval. The suggestion/motivation for doing so would have been simple substitution of a set of images of property for a set of police camera images. One of ordinary skill in the art could have performed the substitution with predictable results.
This method of improving Sandoval was within the ordinary ability of one of ordinary skill in the art based on the teachings of Wen.
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to modify Sandoval with the teachings of Wen to obtain the invention as specified in claim 18.
Regarding claim 19, Sandoval in view of Wen discloses the computer-implemented method of Claim 18.
Sandoval fails to disclose wherein the plurality of images include inside and outside images of at least one building on the property.
However, Wen discloses wherein the plurality of images include inside (Col. 9, lines 11-13, illustratively, the image may be an image of a hotel room to be classified by the image classification system 118. Examiner interprets the hotel room to be “inside”) and outside images (Col. 10, lines 4-7, for example, subcategory labels 654 may include … outdoor pool, etc. Since the model can classify outdoor pools, it is implied that images of outdoor pools are provided, and such images are thus interpreted as “outside”) of at least one building on the property (Examiner is interpreting the hotel as the “building on the property”).
Both Sandoval and Wen are analogous to the claimed invention because Sandoval is in the field of hierarchical image classifiers and Wen applies image classification to images of property. It would have been obvious to a person of ordinary skill before the effective filing date of the claimed invention to incorporate the images of the inside and outside of property from Wen into the hierarchical model of Sandoval. The suggestion/motivation for doing so would have been simple substitution of a set of images of the inside and outside of a building property for a set of police camera images. One of ordinary skill in the art could have performed the substitution with predictable results.
This method of improving Sandoval was within the ordinary ability of one of ordinary skill in the art based on the teachings of Wen.
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to modify Sandoval with the teachings of Wen to obtain the invention as specified in claim 19.
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Sandoval et al. (US Patent No. 12,288,399) (hereafter, “Sandoval”) in view of Bokshi-Drotar et al. (US Patent No. 12,026,786) (hereafter, “Bokshi-Drotar”).
Regarding claim 10, Sandoval discloses the computer system of Claim 1.
Sandoval fails to disclose wherein the plurality of images are of an object to be insured.
However, Bokshi-Drotar discloses wherein the plurality of images are of an object to be insured (Col. 1, lines 61-62, accessing digital image data depicting a roof of the property. Examiner is interpreting a roof to be an “object to be insured”).
Both Sandoval and Bokshi-Drotar are analogous to the claimed invention because Sandoval is in the field of hierarchical models and Bokshi-Drotar applies image classification to objects to be insured. It would have been obvious to a person of ordinary skill before the effective filing date of the claimed invention to incorporate the images of roofs from Bokshi-Drotar into the hierarchical model of Sandoval. The suggestion/motivation for doing so would have been simple substitution of a set of images of roofs for a set of police camera images. One of ordinary skill in the art could have performed the substitution with predictable results.
This method of improving Sandoval was within the ordinary ability of one of ordinary skill in the art based on the teachings of Bokshi-Drotar.
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to modify Sandoval with the teachings of Bokshi-Drotar to obtain the invention as specified in claim 10.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Tomotaki et al. (US Patent No. 11,842,254) discloses a hierarchical image classifier (Col. 4, lines 15-17, Object recognition system 115 includes a hierarchical domain-based inference model architecture).
Sivakumar et al. (European Patent No. EP3477549B1) discloses a multi-hierarchy classification system (¶0048, In an example embodiment, a divide and conquer strategy is used to build a large number of smaller classification models (model lake) and use them via a smart model invoked in a complex hierarchical setup. This model approach first analyzes the extracted shape to decide the product category that the item falls into (e.g., bottles, cans, packets, tetra pack cuboids, etc.). This data is then used to interact with the correct models in parallel to identify products at a Stock Keeping Unit (SKU) level; Claim 1, passing the candidate image and the one or more tags to a multi-hierarchy classification system).
Tasdizen and Seyedhosseini (US Patent Application Publication No. 2017/0228616) discloses a multilevel image classifier (¶0051, The bottom-up classification module 220 may comprise a set of L classifiers 222[1]-222[L], each corresponding to a respective level of an image resolution hierarchy. The first-level classifier 222[1] within the hierarchy may be configured to process full-resolution images, the second level classifier 222[2] within the hierarchy may be configured to process lower-resolution images (e.g., downscaled image data), and so on. The Lth classifier 222[L] may be configured to process lowest-resolution images within the hierarchy).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to XIAOMAO DING whose telephone number is (571)272-7237. The examiner can normally be reached Mon-Fri 8:00-4:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Henok Shiferaw can be reached at (571) 272-4637. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/X.D./ Examiner, Art Unit 2676
/Henok Shiferaw/ Supervisory Patent Examiner, Art Unit 2676