DETAILED ACTION
This is in response to the applicant’s communication filed on 8/21/25 wherein:
Claims 1-9, 11-19, 21, and 22 are currently pending; and
claims 10 and 20 are cancelled.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-9, 11-19, 21, and 22 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1: Claim 1 recites a device and therefore falls into a statutory category. Similarly, independent claim 11 recites a method and therefore also falls into a statutory category.
Step 2A – Prong 1 (Is a Judicial Exception Recited?): The underlined limitations of
a processor;
memory storing instructions that, when executed by the processor, cause the device to:
receive a first image uploaded by a user via a communication interface, the first image comprising a captured image of a part of a vehicle;
pre-process, based on features of the part of the vehicle, the first image to output a second image;
provide, as input to a trained neural network model, the second image and at least one parameter associated with the pre-processing;
receive, as output from the trained neural network model, output data indicating information associated with a recognized part of the vehicle, wherein the trained neural network model is updated by extracting features from the second image using an encoder of the neural network model, and generating, based on the extracted features and the at least one parameter, the information associated with the recognized part of the vehicle using a decoder of the neural network model, the neural network model having a skip connection between the encoder and the decoder;
store the information associated with the recognized part of the vehicle as vehicle part information;
receive accident data representing an occurrence of an accident as an input from an accident maintenance data source;
based on the accident data, match the information associated with the recognized part of the vehicle to a required part of the vehicle; and
output a signal indicating information of the required part of the vehicle
are processes that, under their broadest reasonable interpretation, are considered certain methods of organizing human activity – commercial or legal interactions (including agreements in the form of contracts and marketing or sales activities or behaviors) and/or managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions). The Specification states that the damage information may be “used to estimate the repair and/or replacement cost.” Specification ¶12. Accordingly, the claim recites an abstract idea.
Step 2A-Prong 2 (Is the Exception Integrated into a Practical Application?): This judicial exception is not integrated into a practical application. In particular, the claim recites the additional elements of a processor, memory storing instructions, and an accident maintenance data source. The processor, memory storing instructions, and accident maintenance data source are recited at a high level of generality (i.e., as a generic processing device performing generic computer functions), such that they amount to no more than mere instructions to apply the exception using a generic computer component. As to the trained neural network model, the encoder of the neural network, and the decoder of the neural network, MPEP 2106.05(f) provides the following considerations for determining whether a claim simply recites a judicial exception with the words “apply it” (or an equivalent), such as mere instructions to implement an abstract idea on a computer: (1) whether the claim recites only the idea of a solution or outcome, i.e., the claim fails to recite details of how a solution to a problem is accomplished; (2) whether the claim invokes computers or other machinery merely as a tool to perform an existing process; and (3) the particularity or generality of the application of the judicial exception. Here, the additional elements are invoked merely as a tool to perform existing processes (“receive, as output from the trained neural network model, output data indicating information associated with a recognized part of the vehicle, wherein the trained neural network model is updated by extracting features from the second image using an encoder of the neural network model, and generating, based on the extracted features and the at least one parameter, the information associated with the recognized part of the vehicle using a decoder of the neural network model, the neural network model having a skip connection between the encoder and the decoder”). See MPEP 2106.05(f).
Additionally, the receiving and storing limitations may be considered insignificant extra-solution activity (see MPEP 2106.05(g)). Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, whether considered individually or as a whole.
Even when viewed in combination, these additional elements do not integrate the recited judicial exception into a practical application, and the claim is directed to the judicial exception.
Step 2B (Does the claim recite additional elements that amount to Significantly More than the Judicial Exception?): The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using a computer to perform the claimed steps amounts to no more than mere instructions to apply the exception using a generic computer component. Further, the claims simply append well-understood, routine, and conventional (WURC) activities, previously known to the industry and specified at a high level of generality, to the judicial exception in the form of extra-solution activity. The courts have recognized the computer functions claimed (the receiving and storing limitations) as WURC (see MPEP 2106.05(d), which identifies receiving or transmitting data over a network as WURC, as recognized by Symantec, and storing information in memory as WURC, as recognized by Versata). Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible because, whether the limitations are considered individually or as an ordered combination, nothing in the claim adds significantly more to the abstract idea.
Dependent claims 2-9, 12-19, 21, and 22 merely recite further embellishments of the abstract idea of independent claims 1 and 11, as discussed above with respect to integration of the abstract idea into a practical application. These features only serve to further limit the abstract idea of independent claims 1 and 11; none of the dependent claims recites an improvement to a technology or technical field or provides any meaningful limits on practicing the abstract idea.
In light of the detailed explanation and evidence provided above, the Examiner asserts that the claimed invention, when the limitations are considered individually and as a whole, is directed to an abstract idea.
Notice
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 2, 4-9, 11, 12, 14-19, 21, and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Taliwal et al. (US 20170293894), in view of Nelson et al. (US 10949814), and further in view of Li et al. (US 20180260793).
Referring to claim 1:
Taliwal discloses a device comprising: a processor; memory storing instructions that, when executed by the processor {Taliwal [0006][0045][0046] [0056][0060]; Server 102 includes one or more processors 302, memory 304, network interface(s) 306, storage device(s) 308, and software modules [0056]}, cause the device to:
receive a first image uploaded by a user via a communication interface, the first image comprising a captured image of a part of a vehicle {Taliwal [0061][0129]; a server, such as server 102, receives one or more images of a damaged vehicle from a client device [0061]};
pre-process, based on features of the part of the vehicle, the first image to output a second image {Taliwal [0035][0062] [0067][0096]; the server performs image processing on the one or more images to detect external damage of the vehicle. As described in greater detail in FIG. 5, performing image processing on the one or more images includes: image cleaning to remove artifacts such as background and specular reflection, image alignment to an undamaged version of the vehicle, image segmentation into vehicle parts, and damage assessment, including edge distribution, texture comparison, and spatial correlation detection [0062]};
provide, as input to a trained neural network model, the second image and at least one parameter associated with the pre-processing {Taliwal [0037] [0096]-[0098][0121][0141]-[0143][0190][0197]; At step 506 (i.e., image segmentation), the cleaned image of the damaged vehicle is segmented into vehicle parts, i.e., the boundaries of the vehicle parts are determined and drawn [0096] and train a CNN to output a complete list of damaged parts when presented with the set of images associated to an auto claim. This includes both internal and external parts [0121] and machine learning system uses a machine learning method called Convolutional Neural Network (CNN) to detect external damage [0141] where the boundaries of the vehicle parts are the at least one parameter};
receive, as output from the trained neural network model, output data indicating information associated with a recognized part of the vehicle {Taliwal [0118]-[0121][0139][0145]; train a CNN to output a complete list of damaged parts when presented with the set of images associated to an auto claim. This includes both internal and external parts . . . The performance of the CNN can be made more robust when it is presented with the output of the external damage detection system described above [0121]}, and generating, based on the extracted features and the at least one parameter, the information associated with the recognized part of the vehicle {Taliwal [0118]-[0121] [0141]-[0143]; once external damage is detected at step 404, internal damage can be inferred at step 406 [0120]};
store the information associated with the recognized part of the vehicle as vehicle part information {Taliwal [0034][0036][0045][0047][0121][0122]; where, in determining and providing the list of damaged parts, the computer necessarily stores the list in at least short-term memory};
receive accident data representing an occurrence of an accident as an input from an accident maintenance data source {Taliwal [0031][0033] [0121]; Some embodiments take a large number (e.g., on the order of thousands) of auto claims that contains images of the damaged vehicles and the corresponding appraisals of damaged parts, as found by auto repair shops for repair purposes. Taken together, these historical claims provide enough evidence to establish a high degree of correlation between damage visible in the images and the entire list of damaged parts, both internal and external. In one embodiment, a Convolutional Neural Network (CNN) is trained to learn this correlation [0121]};
based on the accident data, match the information associated with the recognized part of the vehicle to a required part of the vehicle {Taliwal [0037][0118]-[0121]; A comprehensive damaged parts list is then generated to prepare an estimate of the cost required to repair the vehicle by looking up in a parts database for parts and labor cost [0037] and these historical claims provide enough evidence to establish a high degree of correlation between damage visible in the images and the entire list of damaged parts, both internal and external [0121]}; and
output a signal indicating information of the required part of the vehicle {Taliwal [0029][0037][0044][0065][0122]; The server 102 processes the images to estimate damage and repair costs. The estimates are transmitted over network connection 112 to the adjust computer device 106 for approval or adjustment [0044] and some embodiments provide the damaged parts list to a database of parts and labor costs [0122]}.
Taliwal discloses using images of a damaged vehicle to detect damaged parts of the vehicle (abstract). Taliwal does not disclose wherein the trained neural network model is updated by extracting features from the second image.
However, Nelson discloses a similar system for intelligent vehicle repair estimation (abstract). Nelson discloses wherein the trained neural network model is updated by extracting features from the second image {Nelson 10:52-11:18; In some embodiments, the parts prediction model 30 is re-trained or updated to account for additional data (e.g., additional image attribute data and other data related to repairing damaged vehicles) that has been added to the historical data store 32 [11:5-18]}.
It would have been obvious to a person of ordinary skill in the art (PHOSITA), before the effective filing date of the claimed invention, to modify the system disclosed in Taliwal to incorporate updating the trained neural network model, as taught by Nelson, because this would provide a manner of accounting for additional data added to the data store (Nelson 10:52-11:18), thus aiding the user by including new parts/vehicles.
Taliwal, as modified by Nelson, discloses using images of a damaged vehicle to detect damaged parts of the vehicle (abstract). Taliwal, as modified by Nelson, does not disclose using an encoder of the neural network model, using a decoder of the neural network model, the neural network model having a skip connection between the encoder and the decoder.
However, Li discloses a similar system for automatic assessment of damage and repair costs for vehicles (abstract). Li discloses using an encoder of the neural network model, using a decoder of the neural network model, the neural network model having a skip connection between the encoder and the decoder {Li [0320]; Several skip connection layers are bridged between the encoder network and the decoder network, to merge the spatially rich information from low-level features in the encoder network, with the high-level object knowledge in the decoder network [0320]}.
It would have been obvious to a person of ordinary skill in the art (PHOSITA), before the effective filing date of the claimed invention, to modify the system disclosed in Taliwal and Nelson to incorporate an encoder and a decoder having a skip connection, as taught by Li, because this would provide a manner of merging the spatially rich information in the encoder with the object knowledge in the decoder (Li [0320]), thus aiding the user by improving the output.
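Solely to illustrate the encoder-decoder arrangement with a skip connection discussed in Li [0320], the following Python/PyTorch sketch is provided. It is the Examiner's illustrative example only; the layer sizes, channel counts, module names, and the assumed eight part classes are not taken from Li, Taliwal, or Nelson.

    import torch
    import torch.nn as nn

    class TinyEncoderDecoder(nn.Module):
        """Illustrative encoder-decoder with one skip connection (U-Net style)."""
        def __init__(self):
            super().__init__()
            # Encoder: extracts low-level, spatially rich features.
            self.enc = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
            self.down = nn.MaxPool2d(2)
            # Bottleneck: higher-level object knowledge at reduced resolution.
            self.bottleneck = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
            # Decoder: restores resolution and outputs per-pixel part scores.
            self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
            self.dec = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
                                     nn.Conv2d(16, 8, 1))  # 8 assumed part classes

        def forward(self, x):
            e = self.enc(x)                    # encoder features
            b = self.bottleneck(self.down(e))  # downsample, then bottleneck
            d = self.up(b)                     # upsample back to input resolution
            d = torch.cat([d, e], dim=1)       # skip connection: merge encoder features
            return self.dec(d)

    # Example use: one 3-channel, 480x640 image tensor.
    logits = TinyEncoderDecoder()(torch.zeros(1, 3, 480, 640))

The skip connection in this sketch corresponds to bridging the encoder's spatially rich, low-level features into the decoder, consistent with the rationale quoted from Li above.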
Referring to claim 2:
Taliwal, as modified by Nelson and Li, discloses recognize a position of the recognized part in the first image {Taliwal [0098]; The initial position of the part is located by simply overlaying the reference image onto the damaged image and projecting the boundary of the part on to the damaged image [0098]}, and crop, based on the position of the recognized part, the first image to include an entirety of the recognized part {Taliwal [0062][0096]-[0099][0118]; image segmentation into vehicle parts [0062]}.
Referring to claim 4:
Taliwal, as modified by Nelson and Li, discloses wherein the instructions, when executed by the processor, cause the device to: change a pixel value of the first image to a new value {Taliwal [0071][0089]; The binary label can be taken to be 1 for the foreground and −1 for the background. Once all of the pixels in the image have been assigned a binary label properly, the pixels labeled as background can be removed achieving segmentation of the background [0071] and Pixels whose intensity values have reached a maximum in either of the three color channels (i.e., red (R), green (G), and blue (B)) are assumed to be “saturated” due to strong incident light, and are re-assigned color values of nearby pixels that are of the same color, but unsaturated [0089]}.
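For illustration of the "change a pixel value of the first image to a new value" limitation, the following Python/NumPy sketch re-assigns saturated pixel values in the general manner described in Taliwal [0089]. The threshold, the use of the unsaturated channel mean as the replacement value, and the function name are the Examiner's assumptions, offered only as a simplified stand-in for Taliwal's nearby-pixel reassignment.

    import numpy as np

    def reassign_saturated(image, threshold=255):
        """Change saturated pixel values (at the channel maximum) to a new value,
        here the mean of the channel's unsaturated pixels (illustrative only)."""
        out = image.copy()
        for c in range(3):                       # R, G, B channels
            channel = out[:, :, c]
            saturated = channel >= threshold     # pixels at the channel maximum
            if saturated.any() and (~saturated).any():
                channel[saturated] = int(channel[~saturated].mean())
        return out

    # Example use on a small uint8 RGB image.
    cleaned = reassign_saturated(np.random.randint(0, 256, (4, 4, 3), dtype=np.uint8))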
Referring to claim 5:
Taliwal, as modified by Nelson and Li, discloses change the first image to have three color channels or one color channel {Taliwal [0089][0110]; first image pairs are transformed to grayscale image [0110]}.
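For further illustration of the channel-conversion limitation, a minimal Python/NumPy sketch of converting between three color channels and one channel is set out below. The luminance weights are the common Rec. 601 coefficients, which are the Examiner's illustrative choice and are not taken from Taliwal.

    import numpy as np

    def to_one_channel(rgb):
        """Collapse an (H, W, 3) image to one channel using fixed luminance weights."""
        weights = np.array([0.299, 0.587, 0.114])
        return (rgb.astype(float) @ weights).astype(np.uint8)

    def to_three_channels(gray):
        """Expand an (H, W) single-channel image to three identical channels."""
        return np.repeat(gray[:, :, None], 3, axis=2)

    gray = to_one_channel(np.zeros((480, 640, 3), dtype=np.uint8))
    rgb_again = to_three_channels(gray)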
Referring to claim 6:
Taliwal, as modified by Nelson and Li, discloses change an array of the first image based on a form of input associated with the neural network model {Taliwal [0078] [0088][0089]; where RGB values are corrected of the first image, thereby changing the RGB array described in [0078]}.
Referring to claim 7:
Taliwal, as modified by Nelson and Li, discloses determine, based on the neural network model, a number of vehicle parts in the second image {Taliwal [0160]; labeled images are input to a CNN and the damaged parts list as the desired output [0160] where a number of parts are determined in the image using the trained neural network model}; and
determine, based on the number of vehicle parts, types of the vehicle parts {Taliwal [0139][0144]; Another desired output is the determination of the loss type, namely, total loss, medium loss, or small loss, for example [0139] and some embodiments can classify a claim into categories as a total, medium, or small loss claim by taking the damaged parts list, repair cost estimation, and current age and monetary value of the vehicle as input to a classifier whose output is the loss type [0144]}.
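For illustration of determining a number of vehicle parts and a loss type, the following Python/NumPy sketch counts the distinct part labels in a segmentation mask and applies cost-ratio thresholds. The label encoding (0 as background), the thresholds, and the category names are the Examiner's assumptions; Taliwal describes a trained classifier rather than fixed thresholds.

    import numpy as np

    def summarize_parts(mask, repair_cost, vehicle_value):
        """Count distinct part labels in a segmentation mask and classify the
        claim as small, medium, or total loss using assumed cost thresholds."""
        part_ids = np.unique(mask[mask > 0])   # label 0 assumed to be background
        ratio = repair_cost / vehicle_value
        loss_type = "total" if ratio > 0.75 else "medium" if ratio > 0.25 else "small"
        return part_ids.size, loss_type

    # Example: a 2x2 mask containing two distinct part labels (2 and 5).
    num_parts, loss_type = summarize_parts(np.array([[0, 2], [2, 5]]), 3000.0, 12000.0)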
Referring to claim 8:
Taliwal, as modified by Nelson and Li, discloses determine at least one of a compatible vehicle model of the recognized part or a color of the recognized part {Taliwal [0139]; The database also stored auto claim images and other pieces of information that come with a claim, such as vehicle make, model, color, age, and current market value, for example [0139]}.
Referring to claim 9:
Taliwal, as modified by Nelson and Li, discloses wherein: the neural network model comprises a U-net model {Taliwal [0121]; In one embodiment, a Convolutional Neural Network (CNN) is trained to learn this correlation [0121] where a U-net model is a type of convolutional neural network}.
Referring to claims 11, 12, and 14-19:
Claims 11, 12, and 14-19 are rejected on a similar basis to claims 1, 2, and 4-9.
Referring to claim 21:
Taliwal, as modified by Nelson and Li, discloses providing, as additional input to the trained neural network model, the accident data, and wherein the trained neural network model is updated further based on the accident data {Nelson 10:52-11:18; In some embodiments, the parts prediction model 30 is re-trained or updated to account for additional data (e.g., additional image attribute data and other data related to repairing damaged vehicles) that has been added to the historical data store 32 [11:5-18]}.
Referring to claim 22:
Claim 22 is rejected on a similar basis to claim 21.
Claims 3 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Taliwal et al. (US 20170293894), in view of Nelson et al. (US 10949814), in view of Li et al. (US 20180260793), and further in view of Townsend et al. (US 20190273837).
Referring to claim 3:
Taliwal, as modified by Nelson and Li, discloses using images of a damaged vehicle to detect damaged parts of the vehicle (abstract). Taliwal, as modified by Nelson and Li, does not disclose crop by adjusting dimensions of the first image so that a ratio of a width of the first image to a height of the first image is 4 to 3.
However, Townsend discloses a related system for annotating video data (abstract). Townsend discloses crop by adjusting dimensions of the first image so that a ratio of a width of the first image to a height of the first image is 4 to 3 {Townsend [0039][0065][0066]; the cropped image 12 may have a resolution of 1920 pixels by 1080 pixels (e.g., aspect ratio of 16:9), a resolution of 1440 pixels by 1080 pixels (e.g., aspect ratio of 4:3) or the like [0066]}.
It would have been obvious to a person of ordinary skill in the art (PHOSITA), before the effective filing date of the claimed invention, to modify the system disclosed in Taliwal, Nelson, and Li to incorporate cropped images as taught by Townsend because this would provide a manner of varying the aspect ratio of a cropped image based on user preferences (Townsend [0066]), thus aiding the user by cropping the image as desired.
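For illustration of the 4:3 aspect-ratio limitation, the following Python/NumPy sketch center-crops an image so that its width-to-height ratio is 4 to 3. The choice of a center crop and the function name are the Examiner's assumptions and are not taken from Townsend.

    import numpy as np

    def crop_to_4_3(image):
        """Center-crop an (H, W, C) image so that width:height equals 4:3."""
        h, w = image.shape[:2]
        if w * 3 > h * 4:                  # too wide: trim width
            new_w = (h * 4) // 3
            x0 = (w - new_w) // 2
            return image[:, x0:x0 + new_w]
        new_h = (w * 3) // 4               # too tall (or already 4:3): trim height
        y0 = (h - new_h) // 2
        return image[y0:y0 + new_h, :]

    # Example: a 1920x1080 frame is cropped to 1440x1080, a 4:3 frame.
    cropped = crop_to_4_3(np.zeros((1080, 1920, 3), dtype=np.uint8))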
Referring to claim 13:
Claim 13 is rejected on a similar basis to claim 3.
Response to Arguments
Rejections under 35 USC 101
Applicant first argues that the claims cannot be considered under the mental process grouping. Remarks 7. Examiner notes that the claims are not rejected under this grouping.
Applicant then argues that the claims “do not recite an abstract idea of mental processes as discussed above and do not require further eligibility analysis even assuming arguendo that some of the present claims might involve an exception.” Remarks 9. Applicant argues that the claims are similar to USPTO Example 39, in that they both involve machine learning models trained to perform steps “and thus are subject matter eligible for similar reasons.” Remarks 9. Examiner respectfully disagrees that the claims are similar to Example 39. Example 39 is directed to training a neural network for facial detection in digital images; it is not directed to, and does not recite, any of the judicial exceptions. In contrast, while the instant claims do include machine learning models trained to perform steps, the claims as a whole are directed to certain methods of organizing human activity, as explained above.
Step 2A(1): The Claims are Directed to an Abstract Idea
Applicant again argues that the claims cannot practically be performed by the human mind. Remarks 10. Examiner notes that the claims are not rejected under the mental process grouping.
Step 2A(2): The Claims Do Not Integrate the Abstract Idea into a Practical Application
Applicant argues that the claims, like McRO, do not merely use a computer as a tool to perform an existing process, but the claims “improve the functioning of the device associated with the specific neural network model using specific processes that are not usual for generic computers.” Remarks 12. Examiner respectfully disagrees. Applicant has not shown how the claims improve the functioning of the device. MPEP 2106.05(a) states,
If it is asserted that the invention improves upon conventional functioning of a computer, or upon conventional technology or technological processes, a technical explanation as to how to implement the invention should be present in the specification. That is, the disclosure must provide sufficient details such that one of ordinary skill in the art would recognize the claimed invention as providing an improvement. The specification need not explicitly set forth the improvement, but it must describe the invention such that the improvement would be apparent to one of ordinary skill in the art. Conversely, if the specification explicitly sets forth an improvement but in a conclusory manner (i.e., a bare assertion of an improvement without the detail necessary to be apparent to a person of ordinary skill in the art), the examiner should not determine the claim improves technology. An indication that the claimed invention provides an improvement can include a discussion in the specification that identifies a technical problem and explains the details of an unconventional technical solution expressed in the claim, or identifies technical improvements realized by the claim over the prior art.
In this case, there is no technical explanation as to how to implement the invention in the specification, nor does the disclosure provide sufficient details such that one of ordinary skill in the art would recognize the claimed invention as providing an improvement. Rather, the specification sets forth the alleged “improvement” in a conclusory manner, using terms of art which are already established. In such a case, the Examiner must conclude that the claim does not improve technology. The Specification does not provide a discussion that identifies a technical problem and explains the details of an unconventional technical solution which is expressed in the claims and does not identify technical improvements realized by the claims over the prior art.
Step 2B: The Claims Do Not Add Significantly More Than the Abstract Idea
Applicant argues that the claims, when considered in combination, provide significantly more than the abstract idea. Examiner respectfully disagrees, as this is mere attorney argument, without supporting rationale. For all of the reasons above, the claims do not amount to significantly more than the abstract idea when considered either individually or in combination.
Dependent Claims
The dependent claims have been considered as required to present a prima facie case.
Rejections under 35 USC 103
Examiner has provided new rejections with new art, responsive to the newly amended claim limitations (see above).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CARRIE S GILKEY whose telephone number is (571)270-7119. The examiner can normally be reached Monday-Thursday 7:30-4:30 CT and Friday 7:30-12 CT.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jessica Lemieux can be reached on 571-270-3445. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CARRIE S GILKEY/Primary Examiner, Art Unit 3626