DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
This is a final office action in response to the amendment filed 29 October 2025. Claims 1, 8, 15, and 17-18 have been amended. Claims 1-20 are pending and have been examined.
Response to Amendment
Applicant’s amendment to claims 1, 8, 15, and 17-18 has been entered.
Applicant’s amendment is sufficient to overcome the pending 35 U.S.C. 112(a) rejection. The rejection is respectfully withdrawn.
Applicant’s amendment is insufficient to overcome the pending 35 U.S.C. 101 rejection. The rejection remains pending and is updated below, as necessitated by amendment.
Applicant’s amendment is insufficient to overcome the pending 35 U.S.C. 103 rejection. The rejection remains pending and is updated below, as necessitated by amendment.
Response to Arguments
Applicant’s arguments regarding the 35 U.S.C. 103 rejection have been fully considered, but are not persuasive. Applicant asserts that Chen fails to teach or otherwise disclose a first neural network machine classifier trained to identify rail car components from the plurality of images, and a second neural network machine classifier trained to determine damage severity levels for the rail car components identified by the first neural network machine classifier. Examiner respectfully disagrees. Chen et al. discloses an image processing routine, along with training routines that create the correlation filters and neural networks needed by the image processing routine for the particular object to be analyzed. Chen explicitly discloses the use of a set of trained neural networks to perform the damage assessment. See at least Chen et al. [col. 7, lines 15-60]. Further, Chen et al. [col. 5, lines 8-45] discloses that the server may store one or more image processing training routines that use the base object models, changed image files, and image information files to detect damages. See also Chen et al. [col. 20, lines 25-50]. Therefore, Chen et al. discloses the amended limitation.
Applicant’s arguments regarding the pending 35 U.S.C. 101 rejection have been fully considered, but are not persuasive. Applicant asserts that the claim limitations are not directed to an abstract idea because the human mind is not equipped to perform the claim limitations; that, under Step 2A, Prong Two, when considered as a whole, claim 1 integrates any alleged judicial exception into a practical application by applying machine learning techniques and interactive interface functionality that meaningfully limits the judicial exception; and that, under Step 2B, the additional elements go beyond a generic computer implementation of the alleged abstract idea and amount to significantly more. Examiner respectfully disagrees.
While the amended claims include limitations for training a neural network machine classifier to identify rail car components from images and to determine damage severity, training a learning model constitutes a mathematical concept, such as the concept of using known data to set and adjust coefficients and mathematical relationships of variables that represent some modeled characteristic or phenomenon. The MPEP expressly recognizes mathematical concepts, including mathematical relationships, as constituting an abstract idea. MPEP § 2106.04(a). The recitation of a trained neural network machine classifier does not negate the mental nature of these limitations because the claim here merely uses the trained neural network as a tool to perform the otherwise mental process of classifying image data and making a damage severity determination. See MPEP 2106.04(a)(2), subsection III.C. The amended claims therefore recite a mental process. The claims are analogous to ineligible Claim 2 of Example 47, which, under Step 2A, Prong One, recites abstract ideas including, for example, mental processes (e.g., rounding data values) that can be performed in the human mind, as well as mathematical concepts (e.g., a backpropagation algorithm and a gradient descent algorithm for training of the ANN). Further, even when considered as a whole under Step 2A, Prong Two, claim 2 of Example 47 fails to include a “practical application” because it recites generic computer hardware that merely recites the abstract ideas with the words “apply it” (or an equivalent), amounting to nothing more than mere instructions to implement an abstract idea on a computer without placing any limits on how such steps are performed. For example, even though Claim 2 includes AI-related elements such as “detecting one or more anomalies in a data set using the trained ANN” and “using a trained ANN,” such elements merely recite the outcome and fail to describe any details about how the elements are accomplished.
Finally, Claim 2 is also ineligible under Step 2B because its elements amount to nothing more than well-understood, routine, and conventional activity in the field of computing. As a result, the claims herein, like Claim 2 of Example 47, are ineligible under 35 U.S.C. 101. The 35 U.S.C. 101 rejection is proper and maintained.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea of collecting data, analyzing it, and presenting certain results of the collection and analysis, without significantly more. Independent claim 1 is directed to a process, independent claim 8 is directed to a system, and independent claim 15 is directed to a product for task distribution and tracking.
Independent claim 1 recites at least the following limitations:
training, using a plurality of images of components of rail cars, one or more indications of features within the plurality of images, and one or more labels indicating a severity associated with the one or more indications of features;
a first neural network machine classifier trained to identify rail car components from the plurality of images, and a second neural network machine classifier trained to determine damage severity levels for the rail car components identified by the first neural network machine classifier;
presenting, using a user interface associated with a task tracking server system, an upload dialogue for damage data;
receiving, via the user interface, the damage data indicating damage to a component of a rail car;
determining, based on the second neural network machine classifier determining that a threshold likelihood of determining damage from the damage data is not satisfied, to capture additional damage data;
determining, based on the additional damage data, data indicating the damage and the component, wherein the data indicates a type of repair to the component to correct the indicated damage;
providing an interactive user interface comprising search option elements listed in a user-selectable drop down; and
providing, in response to a user selection of a search option element listed in the user-selectable drop down, and based on the type of repair, a plurality of tasks for a repair of the rail car.
Independent claim 8 recites at least the following limitations:
train, using damage data associated with rail cars, one or more indications of features, and one or more labels associated with the one or more indications of features;
a first neural network machine classifier trained to identify rail car components from damage data, and a second neural network machine classifier trained to determine damage severity levels for the rail car components identified by the first neural network machine classifier;
present a data interface comprising an upload portal for damage data;
determine a service center that is proximate with a current geographic location of a rail car;
determine, based on the second neural network machine classifier determining that a threshold likelihood of determining damage from the damage data is not satisfied, to capture additional damage data;
determine the damage data indicates a type of repair to a component to correct indicated damage;
provide a data interface; and
provide, in response to the data interface, and based on the type of repair, a plurality of tasks for a repair of the rail car.
Independent claim 15 recites at least the following limitations:
training, using severity data associated with components of rail cars, one or more indications of features and one or more labels indicating a severity associated with the one or more indications of features;
a first neural network machine classifier trained to identify rail car components from the severity data, and a second neural network machine classifier trained to determine damage severity levels for the rail car components identified by the first neural network machine classifier;
determining, based on the second neural network machine classifier determining that a threshold likelihood of determining damage is not satisfied, to capture additional data;
presenting, using a user interface, an upload dialogue for the additional data;
determining data associated with a component;
providing an interactive user interface comprising search option elements listed in a user-selectable drop down; and
providing, in response to a user selection of a search option element listed in the user-selectable drop down, and based on a type of repair, a plurality of tasks for the component.
Under Step 1, independent claims 1, 8, and 15 each recite at least one step or act, including training one or more indications of features. Thus, the claims fall within one of the statutory categories of invention.
Under Step 2A, Prong One, the limitations for training one or more indications of features within the plurality of images and one or more labels indicating a severity associated with the one or more indications of features; presenting an upload dialogue for damage data; receiving damage data indicating damage to a component; determining to capture additional data; determining data indicating the damage and the component; providing an interactive user interface; providing a data interface; providing a plurality of tasks for a repair of the rail car; determining a service center that is proximate with a current geographic location of a rail car; determining a status; and determining data associated with a component, as drafted, illustrate a process that, under its broadest reasonable interpretation, covers performance of the limitations in the mind (observations, evaluations, judgments, and opinions), but for the recitation of generic computer components (a server, processor, memory, interface). That is, other than reciting that the recited method, system, or product includes a processor and memories to perform the steps, nothing in the claim elements precludes the steps from practically being performed in the human mind, or by a human using pen and paper. For example, an insurance adjuster or repair technician could mentally and manually perform the recited steps of detecting and identifying damage to a component and detailing the steps to repair the damage. Therefore, the limitations fall into the mental processes grouping, and accordingly the claims recite an abstract idea.
The limitations for receiving and capturing damage data are recited broadly and amount to data gathering steps because the data obtained is merely used as input for the recited data processing steps, and are considered insignificant extra-solution activity (see MPEP 2106.05(g)).
Under Step 2A Prong Two, the judicial exception is not integrated into a practical application. In particular, the claims only recite a processor and storage device for performing the recited steps. These elements are recited at a high level of generality (i.e., as a generic processor performing a generic computer function) and amount to no more than mere instructions to apply the exception using generic computer components. See MPEP 2106.05(f). For example, Applicant’s specification at paragraph [0041] states: “… the systems and methods described herein can be performed utilizing both general-purpose computing hardware and by single-purpose devices.” Adding generic computer components to perform generic functions, such as data gathering, performing calculations, and outputting a result would not transform the claim into eligible subject matter. See MPEP 2106.05(d). Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
The presence of a machine learning algorithm or a computer implementation does not necessarily preclude the claim from reciting an abstract idea. While the claim limitations include limitations for training … a first neural network machine classifier and a second neural network machine classifier, the claims fail to go beyond processing data characteristics to make a determination and merely “apply” the neural network machine learning classifiers to classify or label the obtained damage data. The machine learning and computer limitations simply process the data by receiving inputs and producing outputs.
While the amended claims include limitations for training a neural network machine classifier to identify rail car components from images and to determine damage severity, training a learning model constitutes a mathematical concept, such as the concept of using known data to set and adjust coefficients and mathematical relationships of variables that represent some modeled characteristic or phenomenon. The MPEP expressly recognizes mathematical concepts, including mathematical relationships, as constituting an abstract idea. MPEP § 2106.04(a). The recitation of a trained neural network machine classifier does not negate the mental nature of these limitations because the claim here merely uses the trained neural network as a tool to perform the otherwise mental process of classifying image data and making a damage severity determination. See MPEP 2106.04(a)(2), subsection III.C. The amended claims therefore recite a mental process. The claims are analogous to ineligible Claim 2 of Example 47, which, under Step 2A, Prong One, recites abstract ideas including, for example, mental processes (e.g., rounding data values) that can be performed in the human mind, as well as mathematical concepts (e.g., a backpropagation algorithm and a gradient descent algorithm for training of the ANN). Further, even when considered as a whole under Step 2A, Prong Two, claim 2 of Example 47 fails to include a “practical application” because it recites generic computer hardware that merely recites the abstract ideas with the words “apply it” (or an equivalent), amounting to nothing more than mere instructions to implement an abstract idea on a computer without placing any limits on how such steps are performed. For example, even though Claim 2 includes AI-related elements such as “detecting one or more anomalies in a data set using the trained ANN” and “using a trained ANN,” such elements merely recite the outcome and fail to describe any details about how the elements are accomplished.
Finally, Claim 2 is also ineligible under Step 2B because its elements amount to nothing more than well-understood, routine, and conventional activity in the field of computing.
Examiner further directs Applicant to Example 48 of the Patent Subject Matter Eligibility Guidance, wherein claim 1 was deemed ineligible and the recited deep neural network (DNN) was insufficient to confer patent eligibility because the claim did not include any details regarding how the DNN operated. Claim 1 was determined to fall within the mathematical concepts grouping of abstract ideas, and the claimed DNN was determined to be generically claimed and merely used as a tool for implementing the recited mathematical calculation rather than improving the technology or a computer. The additional elements claimed herein are analogous to Example 48, claim 1.
The claim limitations additionally recite “providing an interactive user interface comprising search option elements listed in a user-selectable drop down; and providing, in response to a user selection of a search option element listed in the user-selectable drop down, and based on the type of repair, a plurality of tasks for a repair of the rail car.” User interaction with an interface to receive, transmit, display, or otherwise input or filter data is insufficient to confer patent subject matter eligibility. User interaction with a user-selectable drop down is merely generic use of interface technology to receive user input used for data processing, without significantly more. That the interface includes search option elements performing filtering functions used to process data for display or human decision making is also insufficient to confer subject matter eligibility, because the user interaction with the interface does not improve the functioning of the interface or improve graphical user interface technology. The interface-related limitations recited herein are analogous to those of Claim 2 of Example 40, wherein the claims are directed to mere data gathering steps that automate the comparison of data, without significantly more than the recited insignificant extra-solution activity and mere instructions to apply the exception using generic computer components.
Under Step 2B, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract idea into a practical application, the additional elements of a processor and storage device amount to no more than mere instructions to apply the exception using a generic computer component, which cannot provide an inventive concept.
Dependent claims 2 through 7, 9 through 14, and 16 through 20 include the abstract ideas of the independent claims. The dependent claims merely narrow the mental process by describing the type of data used to provide the plurality of tasks and how the data is manipulated to generate the output of the data processing steps. The limitations of the dependent claims are not integrated into a practical application because no additional elements set forth any limitations that meaningfully limit the implementation of the abstract idea; therefore, the claims are directed to an abstract idea. There are no additional elements that transform the claims into a patent-eligible application by amounting to significantly more. The analysis above applies to all statutory categories of invention. Therefore, claims 1-20 are ineligible under 35 U.S.C. 101.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.
Claims 1-7 and 15-20 are rejected under 35 U.S.C. 103 as being unpatentable over Chen et al. (US 10,740,891) in view of McQuown et al. (US 2005/0144183).
Regarding Amended Claim 1, Chen et al. discloses a method, comprising: training, using a plurality of images of components of rail cars, one or more indications of features within the plurality of images, and one or more labels indicating a severity associated with the one or more indications of features; a first neural network machine classifier trained to identify rail car components from the plurality of images, and a second neural network machine classifier trained to determine damage severity levels for the rail car components identified by the first neural network machine classifier; (… a computer-implemented method in a server device of analyzing images. Chen et al. [col. 2, lines 5-50]. … an exemplary image processing system that can be used to detect changes in objects, such as to detect damage to automobiles, buildings, and the like, and to use these detected changes to determine secondary features. Chen et al. [col. 2, lines 51-67; Fig. 1]. … The server 112, which may include a microprocessor 128 and a computer readable memory 129, may store one or more image processing training routines 130. … one or more of the training routines 130 may determine a set of correlation filters 132 (also referred to as difference filters) for each of the target objects, and one or more of the training routines 130 may determine a set of convolutional neural networks (CNNs) 134 for the objects represented by each of the base object models 120. … (such as to detect damage to automobiles or other vehicles). Chen et al. [col. 5, lines 7-45; col. 7, lines 15-60 (training routines and CNNs); col. 20, lines 35-40 (neural network classifiers)]. 
… to detect or classify damage or changes to particular target object components … an indication of one or more types of damage detected (e.g., creases, dents, missing parts, cracks, scuffs, scrapes, scratches, etc.), an indication of damage severity (e.g., different damage levels based on, for example, the amount of labor hours required to repair or replace the component), an indication of a probability of hidden damage. Chen et al. [col. 27, lines 10-45]);
presenting, using a user interface associated with a task tracking server system, an upload dialogue for damage data; receiving, via the user interface, the damage data indicating damage to a component of a rail car; (During operation, a user may log onto or access the system 100 via one of the user interfaces 102, may upload or store a new set of images 142 of a target object in the database 109, and may additionally provide or store information in the database 109 related to the new set of images 142, such as an identification of the target object within the new set of images. Chen et al. [col. 5, lines 65-67; col. 6, lines 1-51]);
determining, based on the second neural network machine classifier determining that a threshold likelihood of determining damage from the damage data is not satisfied, to capture additional damage data; ( … the server may determine the need for additional images depicting the target vehicle or additional information associated with the target vehicle. In particular, the server may determine whether any additional perspective(s) is needed, whether any components are not depicted in the portion of the set of images, whether the general image quality is too low, and/or whether the portion of the set of images is deficient in another regard(s). … the threshold criteria may account some combination of a sufficient amount and type of image perspective(s), a sufficient amount and type of identified vehicle components, a sufficient image quality, and/or other characteristics. … the damage that may be depicted in any additionally-captured images may confirm (or not confirm) the damage indicated by the telematics data. Chen et al. [col. 39, lines 44-67; col. 40, lines 1-67; col. 41, lines 1-23; Fig. 18A-18B]);
determining, based on the additional damage data, data indicating the damage and the component, wherein the data indicates a type of repair to the component to correct the indicated damage; (… to detect or classify damage or changes to particular target object components … an indication of one or more types of damage detected (e.g., creases, dents, missing parts, cracks, scuffs, scrapes, scratches, etc.), an indication of damage severity (e.g., different damage levels based on, for example, the amount of labor hours required to repair or replace the component), an indication of a probability of hidden damage. Chen et al. [col. 27, lines 10-45]. … block 320 may use the damage determination to determine each of the parts of the target vehicle that needs to be replaced, each of the parts that need to be repaired (and potentially the type of repair), etc. The block 320 may identify particular types of work that need to be performed with respect to various ones of the body panels, such as repainting, replacing, filling creases or gaps, etc. Chen et al. [col. 29, lines 1-67; col. 30, lines 1-2; Fig. 3]);
While Chen et al. discloses providing a user interface (see at least Chen et al. [col. 3, lines 45-50; Fig. 19A-19D]), Chen et al. fails to explicitly disclose providing an interactive user interface comprising search option elements listed in a user-selectable drop down; and providing, in response to a user selection of a search option element listed in the user-selectable drop down, and based on the type of repair, a plurality of tasks for a repair of the rail car. McQuown et al. discloses these limitations. (… The locomotive 12, such as may be parked at a railroad service yard 13, may be serviced by a technician or other service personnel holding a portable unit 14. … Special software tools related to the repair task are also available at the portable unit 14, as transmitted from the diagnostic service center 20. McQuown et al. [para. 0025-0030, 0038-0039 (rail car inspection)]. … The portable unit 14 includes a graphical user interface. McQuown et al. [para. 0056, 0067-0068]. … The technical documentation subsystem 186 maintains the technical documentation repository and supports the selection and retrieval of technical documentation into a repair specific set of relevant documents by the repair expert. McQuown et al. [para. 0074-0076]. … The technician can navigate or search through the technical documentation by using wizard applications or visual drill downs. McQuown et al. [para. 0087]. … The home page file includes a list of the currently active recommendations. … A technician selects a specific recommendation from the home page file. McQuown et al. [para. 0084]. … The information displayed on the portable unit 14 directs the step-by-step activities of the technician through the repair process including providing documentation and information from the various databases and modules discussed in conjunction with FIG. 2. McQuown et al. [para. 0090-0092; Fig. 10]).
It would have been obvious to one of ordinary skill in the art of information search and retrieval and damage appraisal and repair management before the effective filing date of the claimed invention to modify the data processing steps of Chen et al. to include providing an interactive user interface comprising search option elements listed in a user-selectable drop down; and providing, in response to a user selection of a search option element listed in the user-selectable drop down, and based on the type of repair, a plurality of tasks for a repair of the rail car as disclosed by McQuown et al. to allow easy and seamless integration of the repair recommendation with the railroad's work order system (McQuown et al. [para. 0028]), in a manner that yields predictable results. Examiner notes that the claim provides search option elements listed in a user-selectable drop down element, and McQuown et al. provides a list of user-selectable search options in a portable interface. The substitution of the selectable list in the portable interface of McQuown et al. for the selectable drop down in the claim would have been obvious because the substitution of one known element for another would have yielded predictable results to one of ordinary skill in the art before the effective filing date of the claimed invention, because the selectable list of McQuown et al. performs the same function as the claimed selectable drop down.
Regarding Claim 2, Chen et al. and McQuown et al. combined disclose the method, wherein the damage data and the data are in a standardized format. McQuown et al. discloses this limitation. (By using a web format (or other standardized format) the information can be displayed on the portable unit 14 in a standard format with which the technician will eventually become familiar. McQuown et al. [para. 0064]). It would have been obvious to one of ordinary skill in the art of information search and retrieval and damage appraisal and repair management before the effective filing date of the claimed invention to modify the data processing steps of Chen et al. to include the damage data and the data are in a standardized format as disclosed by McQuown et al. to allow easy and seamless integration of the repair recommendation with the railroad's work order system (McQuown et al. [para. 0028]), in a manner that yields predictable results.
Regarding Claim 3, Chen et al. and McQuown et al. combined disclose the method, further comprising: determining severity data based on the damage data, wherein the severity data indicates a severity of the damage to the component; and generating the data based on the severity data. (… an indication of one or more types of damage detected (e.g., creases, dents, missing parts, cracks, scuffs, scrapes, scratches, etc.), an indication of damage severity (e.g., different damage levels based on, for example, the amount of labor hours required to repair or replace the component), an indication of a probability of hidden damage. Chen et al. [col. 27, lines 10-45]. … block 320 may use the damage determination to determine each of the parts of the target vehicle that needs to be replaced, each of the parts that need to be repaired (and potentially the type of repair), etc. The block 320 may identify particular types of work that need to be performed with respect to various ones of the body panels, such as repainting, replacing, filling creases or gaps, etc. Chen et al. [col. 29, lines 1-67; col. 30, lines 1-2; Fig. 3]).
Regarding Claim 4, Chen et al. and McQuown et al. combined disclose the method, wherein: the damage data comprises image data; and wherein the data is determined by processing the image data using the second neural network machine classifier to generate an indication of a type of damage represented in the image data and a probabilistic likelihood that the damage to the component corresponds to the indicated type of damage. (The server 112, which may include a microprocessor 128 and a computer readable memory 129, may store one or more image processing training routines 130. … one or more of the training routines 130 may determine a set of correlation filters 132 (also referred to as difference filters) for each of the target objects, and one or more of the training routines 130 may determine a set of convolutional neural networks (CNNs) 134 for the objects represented by each of the base object models 120. … (such as to detect damage to automobiles or other vehicles). Chen et al. [col. 5, lines 7-45; col. 20, lines 35-40]. … the CNNs 134 or other deep learning tools may provide other possible outputs including, for example, … a confidence level with respect to prediction accuracy, etc. Chen et al. [col. 27, lines 10-45; col. 28, lines 1-45 (heatmap color indicating likelihood of damage of each component)]).
Regarding Claim 5, Chen et al. and McQuown et al. combined disclose the method, further comprising: determining that the probabilistic likelihood is below a threshold value; and based on the probabilistic likelihood being below the threshold value, obtaining additional damage data for the component. ( … the server may determine the need for additional images depicting the target vehicle or additional information associated with the target vehicle. In particular, the server may determine whether any additional perspective(s) is needed, whether any components are not depicted in the portion of the set of images, whether the general image quality is too low, and/or whether the portion of the set of images is deficient in another regard(s). … the threshold criteria may account some combination of a sufficient amount and type of image perspective(s), a sufficient amount and type of identified vehicle components, a sufficient image quality, and/or other characteristics. … the damage that may be depicted in any additionally-captured images may confirm (or not confirm) the damage indicated by the telematics data. Chen et al. [col. 39, lines 44-67; col. 40, lines 1-67; col. 41, lines 1-23; Fig. 18A-18B]).
Regarding Claim 6, Chen et al. and McQuown et al. combined disclose the method, further comprising: determining a detail source corresponding to the damage data; and obtaining the damage data from a third-party database based on the detail source. (… photos may be collected by, for example, owners of the automobiles depicted in the photos, an automobile insurer against whom an insurance claim was made for repairing or replacing the damaged automobile, etc. Chen et al. [col. 4, lines 13-67]. … The image collector block 202 may, as indicated in FIG. 2, receive raw images of the target object (also referred to interchangeably herein as “source images”) on which change detection is to occur, e.g., from a user, a database, or any other source. Such images may include still image or video images, and/or may include two-dimensional and/or three-dimensional images. Chen et al. [col. 8, lines 28-50]. … FIG. 4 illustrates a point cloud object model 402 of a vehicle, which in this case is a Toyota Camry® of a particular year, that may be used as a base object model for damaged or target vehicle images depicting a damaged Toyota Camry of the same year. The point cloud object model 402 may be stored in, for example, the vehicle image database 108 of FIG. 1, and may be obtained from, for example, the automobile manufacturer, or from any other source. Chen et al. [col. 16, lines 14-45]).
Regarding Claim 7, Chen et al. and McQuown et al. combined disclose the method, further comprising determining ownership data from the third-party database based on the detail source. (… the block 202 may receive indications of where damage occurred to the automobile or other object as inputs from a user, such as the owner of the automobile, from insurance claims information, etc. In one example, the block 202 may receive first notice of loss (FNOL) information, which is generally information collected by an insurance carrier at the first notice of loss, e.g., when the owner first notifies the carrier of a claim or a potential claim. Chen et al. [col. 9, lines 13-35; Fig. 1-2]).
Regarding Amended Claim 15, Chen et al. discloses a non-transitory machine-readable medium storing instructions that, when executed by one or more processors, cause the one or more processors to perform steps comprising: (… certain embodiments … may constitute either software modules (e.g., code stored on a non-transitory machine-readable medium) or hardware modules. … one or more computer systems (e.g., a standalone, client or server computer system) or one or more hardware modules of a computer system (e.g., a processor or a group of processors) may be configured by software. Chen et al. [col. 42, lines 48-67]):
training, using severity data associated with components of rail cars, one or more indications of features, and one or more labels indicating a severity associated with the one or more indications of features; a first neural network machine classifier trained to identify rail car components from the severity data, and a second neural network machine classifier trained to determine damage severity levels for the rail car components identified by the first neural network machine classifier; (The server 112, which may include a microprocessor 128 and a computer readable memory 129, may store one or more image processing training routines 130. … a computer-implemented method in a server device of analyzing images. Chen et al. [col. 2, lines 5-50]. … an exemplary image processing system that can be used to detect changes in objects, such as to detect damage to automobiles, buildings, and the like, and to use these detected changes to determine secondary features. Chen et al. [col. 2, lines 51-67; Fig. 1]. … one or more of the training routines 130 may determine a set of correlation filters 132 (also referred to as difference filters) for each of the target objects, and one or more of the training routines 130 may determine a set of convolutional neural networks (CNNs) 134 for the objects represented by each of the base object models 120. … (such as to detect damage to automobiles or other vehicles). Chen et al. [col. 5, lines 7-45; col. 7, lines 15-60 (training routines and CNNs); col. 20, lines 35-40 (neural network classifiers)]. … to detect or classify damage or changes to particular target object components … an indication of one or more types of damage detected (e.g., creases, dents, missing parts, cracks, scuffs, scrapes, scratches, etc.), an indication of damage severity (e.g., different damage levels based on, for example, the amount of labor hours required to repair or replace the component), an indication of a probability of hidden damage. Chen et al. [col. 27, lines 10-45]);
determining, based on the second neural network machine classifier determining that a threshold likelihood of determining damage is not satisfied, to capture additional data; ( … the server may determine the need for additional images depicting the target vehicle or additional information associated with the target vehicle. In particular, the server may determine whether any additional perspective(s) is needed, whether any components are not depicted in the portion of the set of images, whether the general image quality is too low, and/or whether the portion of the set of images is deficient in another regard(s). … the threshold criteria may account some combination of a sufficient amount and type of image perspective(s), a sufficient amount and type of identified vehicle components, a sufficient image quality, and/or other characteristics. … the damage that may be depicted in any additionally-captured images may confirm (or not confirm) the damage indicated by the telematics data. Chen et al. [col. 39, lines 44-67; col. 40, lines 1-67]. … In response to the individual selecting the transmit selection 1954, the electronic device may display an interface 1955 as depicted in FIG. 19B. The interface 1955 may include a status 1956 indicating that the electronic device is transmitting the captured image(s). After receiving the image(s) from the electronic device, the server may analyze the image(s) to identify the target vehicle depicted in the image(s) and determine whether the quality and/or characteristics of the image(s) at least meet the threshold criteria for the corresponding base image model, as discussed herein. Chen et al. [col. 41, lines 1-23; Fig. 18A-18B]);
presenting, using a user interface, an upload dialogue for the additional data; determining data associated with a component; (During operation, a user may log onto or access the system 100 via one of the user interfaces 102, may upload or store a new set of images 142 of a target object in the database 109, and may additionally provide or store information in the database 109 related to the new set of images 142, such as an identification of the target object within the new set of images. Chen et al. [col. 5, lines 65-67; col. 6, lines 1-51]);
While Chen et al. discloses providing a user interface (see at least Chen et al. [col. 3, lines 45-50; Fig. 19A-19D]), Chen et al. fails to explicitly disclose providing an interactive user interface comprising search option elements listed in a user-selectable drop down; and providing, in response to a user selection of a search option element listed in the user-selectable drop down, and based on a type of repair, a plurality of tasks for the component. McQuown et al. discloses these limitations. (… The locomotive 12, such as may be parked at a railroad service yard 13, may be serviced by a technician or other service personnel holding a portable unit 14. … Special software tools related to the repair task are also available at the portable unit 14, as transmitted from the diagnostic service center 20. McQuown et al. [para. 0025-0030, 0038-0039 (rail car inspection)]. … The portable unit 14 includes a graphical user interface. McQuown et al. [para. 0056, 0067-0068]. … The technical documentation subsystem 186 maintains the technical documentation repository and supports the selection and retrieval of technical documentation into a repair specific set of relevant documents by the repair expert. McQuown et al. [para. 0074-0076]. … The technician can navigate or search through the technical documentation by using wizard applications or visual drill downs. McQuown et al. [para. 0087]. … The home page file includes a list of the currently active recommendations. … A technician selects a specific recommendation from the home page file. McQuown et al. [para. 0084]. … The information displayed on the portable unit 14 directs the step-by-step activities of the technician through the repair process including providing documentation and information from the various databases and modules discussed in conjunction with FIG. 2. McQuown et al. [para. 0090-0092; Fig. 10]).
It would have been obvious to one of ordinary skill in the art of information search and retrieval and damage appraisal and repair management before the effective filing date of the claimed invention to modify the data processing steps of Chen et al. to include user interface interaction for data search and retrieval providing an interactive user interface comprising search option elements listed in a user-selectable drop down; and providing, in response to a user selection of a search option element listed in the user-selectable drop down, and based on a type of repair, a plurality of tasks for the component as disclosed by McQuown et al. to allow easy and seamless integration of the repair recommendation with the railroad's work order system (McQuown et al. [para. 0028]), in a manner that yields predictable results. Examiner notes that the claim provides search option elements listed in a user-selectable drop down element, and McQuown et al. provides a list of user-selectable search options in a portable interface. Substituting the claimed selectable drop down for the selectable list in the portable interface of McQuown et al. would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention because the substitution of one known element for another would have yielded predictable results; the selectable list of McQuown et al. performs the same function as the claimed selectable drop down.
Regarding Claim 16, Chen et al. and McQuown et al. combined disclose the non-transitory machine-readable medium, wherein the additional data is in a standardized format. McQuown et al. discloses this limitation. (By using a web format (or other standardized format) the information can be displayed on the portable unit 14 in a standard format with which the technician will eventually become familiar. McQuown et al. [para. 0064]). It would have been obvious to one of ordinary skill in the art of information search and retrieval and damage appraisal and repair management before the effective filing date of the claimed invention to modify the data processing steps of Chen et al. so that the additional data is in a standardized format as disclosed by McQuown et al. to allow easy and seamless integration of the repair recommendation with the railroad's work order system (McQuown et al. [para. 0028]), in a manner that yields predictable results.
Regarding Amended Claim 17, Chen et al. and McQuown et al. combined disclose the non-transitory machine-readable medium, wherein the instructions further cause the one or more processors to perform steps comprising: determining severity data based on the additional data, wherein the severity data indicates a severity of the damage; and generating the data based on the severity data. (… an indication of one or more types of damage detected (e.g., creases, dents, missing parts, cracks, scuffs, scrapes, scratches, etc.), an indication of damage severity (e.g., different damage levels based on, for example, the amount of labor hours required to repair or replace the component), an indication of a probability of hidden damage. Chen et al. [col. 27, lines 10-45]. … block 320 may use the damage determination to determine each of the parts of the target vehicle that needs to be replaced, each of the parts that need to be repaired (and potentially the type of repair), etc. The block 320 may identify particular types of work that need to be performed with respect to various ones of the body panels, such as repainting, replacing, filling creases or gaps, etc. Chen et al. [col. 29, lines 1-67; col. 30, lines 1-2; Fig. 3]).
Regarding Amended Claim 18, Chen et al. and McQuown et al. combined disclose the non-transitory machine-readable medium, wherein: the additional data comprises image data; and wherein the data is determined by processing the image data using the second neural network machine classifier to generate an indication of damage represented in the image data and a probabilistic likelihood of the damage. (The server 112, which may include a microprocessor 128 and a computer readable memory 129, may store one or more image processing training routines 130. … one or more of the training routines 130 may determine a set of correlation filters 132 (also referred to as difference filters) for each of the target objects, and one or more of the training routines 130 may determine a set of convolutional neural networks (CNNs) 134 for the objects represented by each of the base object models 120. … (such as to detect damage to automobiles or other vehicles). Chen et al. [col. 5, lines 7-45; col. 20, lines 35-40]. … the CNNs 134 or other deep learning tools may provide other possible outputs including, for example, … a confidence level with respect to prediction accuracy, etc. Chen et al. [col. 27, lines 10-45; col. 28, lines 1-45 (heatmap color indicating likelihood of damage of each component)]).
Regarding Claim 19, Chen et al. and McQuown et al. combined disclose the non-transitory machine-readable medium, wherein the instructions further cause the one or more processors to perform steps comprising: determining that the probabilistic likelihood is below a threshold value; and based on the probabilistic likelihood being below the threshold value, obtaining the additional data. ( … the server may determine the need for additional images depicting the target vehicle or additional information associated with the target vehicle. In particular, the server may determine whether any additional perspective(s) is needed, whether any components are not depicted in the portion of the set of images, whether the general image quality is too low, and/or whether the portion of the set of images is deficient in another regard(s). … the threshold criteria may account some combination of a sufficient amount and type of image perspective(s), a sufficient amount and type of identified vehicle components, a sufficient image quality, and/or other characteristics. … the damage that may be depicted in any additionally-captured images may confirm (or not confirm) the damage indicated by the telematics data. Chen et al. [col. 39, lines 44-67; col. 40, lines 1-67; col. 41, lines 1-23; Fig. 18A-18B]).
Regarding Claim 20, Chen et al. and McQuown et al. combined disclose the non-transitory machine-readable medium, wherein the instructions further cause the one or more processors to perform steps comprising: determining a detail source corresponding to the additional data; and obtaining the additional data from a third-party database based on the detail source. (… photos may be collected by, for example, owners of the automobiles depicted in the photos, an automobile insurer against whom an insurance claim was made for repairing or replacing the damaged automobile, etc. Chen et al. [col. 4, lines 13-67]. … The image collector block 202 may, as indicated in FIG. 2, receive raw images of the target object (also referred to interchangeably herein as “source images”) on which change detection is to occur, e.g., from a user, a database, or any other source. Such images may include still image or video images, and/or may include two-dimensional and/or three-dimensional images. Chen et al. [col. 8, lines 28-50]. … FIG. 4 illustrates a point cloud object model 402 of a vehicle, which in this case is a Toyota Camry® of a particular year, that may be used as a base object model for damaged or target vehicle images depicting a damaged Toyota Camry of the same year. The point cloud object model 402 may be stored in, for example, the vehicle image database 108 of FIG. 1, and may be obtained from, for example, the automobile manufacturer, or from any other source. Chen et al. [col. 16, lines 14-45]).
Claims 8-14 are rejected under 35 U.S.C. 103 as being unpatentable over Chen et al. (US 10,740,891) in view of McQuown et al. (US 2005/0144183), in further view of Roddy et al. (US 2003/0055666).
Regarding Amended Claim 8, Chen et al. discloses a task tracking server system, comprising: at least one processor; and memory storing instructions that, when read by the at least one processor, cause the task tracking server system to: (… a system for analyzing images is provided … The processor may be configured to execute a set of instructions to cause the processor to receive a set of images from an electronic device via the communication module, analyze at least a portion of the set of images. Chen et al. [col. 2, lines 24-50]. … as illustrated in FIG. 1, the server 114, may include a microprocessor 138 and a memory 139 that stores an image processing routine 140 that may perform image processing on a set of target images. Chen et al. [col. 5, lines 45-65]):
train, using damage data associated with rail cars, one or more indications of features, and one or more labels associated with the one or more indications of features; a first neural network machine classifier trained to identify rail car components from damage data, and a second neural network machine classifier trained to determine damage severity levels for the rail car components identified by the first neural network machine classifier; (… a computer-implemented method in a server device of analyzing images. Chen et al. [col. 2, lines 5-50]. … an exemplary image processing system that can be used to detect changes in objects, such as to detect damage to automobiles, buildings, and the like, and to use these detected changes to determine secondary features. Chen et al. [col. 2, lines 51-67; Fig. 1]. … The server 112, which may include a microprocessor 128 and a computer readable memory 129, may store one or more image processing training routines 130. … one or more of the training routines 130 may determine a set of correlation filters 132 (also referred to as difference filters) for each of the target objects, and one or more of the training routines 130 may determine a set of convolutional neural networks (CNNs) 134 for the objects represented by each of the base object models 120. … (such as to detect damage to automobiles or other vehicles). Chen et al. [col. 5, lines 7-45; col. 7, lines 15-60 (training routines and CNNs); col. 20, lines 35-40 (neural network classifiers)]. … to detect or classify damage or changes to particular target object components … an indication of one or more types of damage detected (e.g., creases, dents, missing parts, cracks, scuffs, scrapes, scratches, etc.), an indication of damage severity (e.g., different damage levels based on, for example, the amount of labor hours required to repair or replace the component), an indication of a probability of hidden damage. Chen et al. [col. 27, lines 10-45]);
present a data interface comprising an upload portal for damage data; (During operation, a user may log onto or access the system 100 via one of the user interfaces 102, may upload or store a new set of images 142 of a target object in the database 109, and may additionally provide or store information in the database 109 related to the new set of images 142, such as an identification of the target object within the new set of images. Chen et al. [col. 5, lines 65-67; col. 6, lines 1-51]);
Chen et al. fails to explicitly disclose the step to determine a service center that is proximate with a current geographic location of a rail car. Roddy et al. discloses this limitation. (… the invention includes the aspects of real-time data collection from each of the mobile assets... The planning of maintenance activities may include the selection of an optimal time and/or location for performing the work. Roddy et al. [para. 0007]. … the data center or service personnel may evaluate the most logical repair location in terms of various criteria, such as train proximity, parts, repair equipment availability, manpower availability, etc. The service recommendation automatically triggers the creation of an electronic work order 172 within a service shop management system. Roddy et al. [para. 0083-0087]). It would have been obvious to one of ordinary skill in the art of maintenance task analysis before the effective filing date of the claimed invention to modify the task tracking system of Chen et al. to include a step to determine a service center that is proximate with a current geographic location of a rail car as taught by Roddy et al. in order to effectively manage a fleet of mobile assets (Roddy et al. [para. 0025]), in a manner that yields predictable results.
determine, based on the second neural network machine classifier determining that a threshold likelihood of determining damage from the damage data is not satisfied, to capture additional damage data; ( … the server may determine the need for additional images depicting the target vehicle or additional information associated with the target vehicle. In particular, the server may determine whether any additional perspective(s) is needed, whether any components are not depicted in the portion of the set of images, whether the general image quality is too low, and/or whether the portion of the set of images is deficient in another regard(s). … the threshold criteria may account some combination of a sufficient amount and type of image perspective(s), a sufficient amount and type of identified vehicle components, a sufficient image quality, and/or other characteristics. … the damage that may be depicted in any additionally-captured images may confirm (or not confirm) the damage indicated by the telematics data. Chen et al. [col. 39, lines 44-67; col. 40, lines 1-67; col. 41, lines 1-23; Fig. 18A-18B]);
determine the damage data indicates a type of repair to a component to correct indicated damage; (… to detect or classify damage or changes to particular target object components … an indication of one or more types of damage detected (e.g., creases, dents, missing parts, cracks, scuffs, scrapes, scratches, etc.), an indication of damage severity (e.g., different damage levels based on, for example, the amount of labor hours required to repair or replace the component), an indication of a probability of hidden damage. Chen et al. [col. 27, lines 10-45]. … block 320 may use the damage determination to determine each of the parts of the target vehicle that needs to be replaced, each of the parts that need to be repaired (and potentially the type of repair), etc. The block 320 may identify particular types of work that need to be performed with respect to various ones of the body panels, such as repainting, replacing, filling creases or gaps, etc. Chen et al. [col. 29, lines 1-67; col. 30, lines 1-2; Fig. 3]);
provide a data interface; (FIGS. 19A-19D depict exemplary user interfaces associated with capturing images of a target vehicle. Chen et al. [col. 3, lines 45-50; Fig. 19A-19D]);
Chen et al. fails to explicitly disclose the step to provide, in response to the data interface, and based on the type of repair, a plurality of tasks for a repair of the rail car. McQuown et al. discloses these limitations. (… The locomotive 12, such as may be parked at a railroad service yard 13, may be serviced by a technician or other service personnel holding a portable unit 14. … Special software tools related to the repair task are also available at the portable unit 14, as transmitted from the diagnostic service center 20. McQuown et al. [para. 0025-0030, 0038-0039 (rail car inspection)]. … The portable unit 14 includes a graphical user interface. McQuown et al. [para. 0056, 0067-0068]. … The technical documentation subsystem 186 maintains the technical documentation repository and supports the selection and retrieval of technical documentation into a repair specific set of relevant documents by the repair expert. McQuown et al. [para. 0074-0076, 0084, 0087]. … The information displayed on the portable unit 14 directs the step-by-step activities of the technician through the repair process including providing documentation and information from the various databases and modules discussed in conjunction with FIG. 2. McQuown et al. [para. 0090-0092; Fig. 10]). It would have been obvious to one of ordinary skill in the art of information search and retrieval and damage appraisal and repair management before the effective filing date of the claimed invention to modify the data processing steps of Chen et al. and Roddy et al. combined to include the step to provide, in response to the data interface, and based on the type of repair, a plurality of tasks for a repair of the rail car as disclosed by McQuown et al. to allow easy and seamless integration of the repair recommendation with the railroad's work order system (McQuown et al. [para. 0028]), in a manner that yields predictable results.
Regarding Claim 9, Chen et al., Roddy et al., and McQuown et al. combined disclose the task tracking server system, wherein the damage data is in a standardized format. McQuown et al. discloses this limitation. (By using a web format (or other standardized format) the information can be displayed on the portable unit 14 in a standard format with which the technician will eventually become familiar. McQuown et al. [para. 0064]). It would have been obvious to one of ordinary skill in the art of information search and retrieval and damage appraisal and repair management before the effective filing date of the claimed invention to modify the data processing steps of Chen et al. and Roddy et al. combined to include the damage data is in a standardized format as disclosed by McQuown et al. to allow easy and seamless integration of the repair recommendation with the railroad's work order system (McQuown et al. [para. 0028]), in a manner that yields predictable results.
Regarding Claim 10, Chen et al., Roddy et al., and McQuown et al. combined disclose the task tracking server system, wherein the instructions, when read by the at least one processor, further cause the task tracking server system to: determine severity data based on the damage data, wherein the severity data indicates a severity of the damage to the component; and generate the damage data based on the severity data. (… an indication of one or more types of damage detected (e.g., creases, dents, missing parts, cracks, scuffs, scrapes, scratches, etc.), an indication of damage severity (e.g., different damage levels based on, for example, the amount of labor hours required to repair or replace the component), an indication of a probability of hidden damage. Chen et al. [col. 27, lines 10-45]. … block 320 may use the damage determination to determine each of the parts of the target vehicle that needs to be replaced, each of the parts that need to be repaired (and potentially the type of repair), etc. The block 320 may identify particular types of work that need to be performed with respect to various ones of the body panels, such as repainting, replacing, filling creases or gaps, etc. Chen et al. [col. 29, lines 1-67; col. 30, lines 1-2; Fig. 3]).
Regarding Claim 11, Chen et al., Roddy et al., and McQuown et al. combined disclose the task tracking server system, wherein: the damage data comprises image data; and wherein the image data is processed using the second neural network machine classifier to generate an indication of a type of damage represented in the image data and a probabilistic likelihood that the damage to the component corresponds to the indicated type of damage. (The server 112, which may include a microprocessor 128 and a computer readable memory 129, may store one or more image processing training routines 130. … one or more of the training routines 130 may determine a set of correlation filters 132 (also referred to as difference filters) for each of the target objects, and one or more of the training routines 130 may determine a set of convolutional neural networks (CNNs) 134 for the objects represented by each of the base object models 120. … (such as to detect damage to automobiles or other vehicles). Chen et al. [col. 5, lines 7-45; col. 20, lines 35-40]. … the CNNs 134 or other deep learning tools may provide other possible outputs including, for example, … a confidence level with respect to prediction accuracy, etc. Chen et al. [col. 27, lines 10-45; col. 28, lines 1-45 (heatmap color indicating likelihood of damage of each component)]).
Regarding Claim 12, Chen et al., Roddy et al., and McQuown et al. combined disclose the task tracking server system, wherein the instructions, when read by the at least one processor, further cause the task tracking server system to: determine that the probabilistic likelihood is below a threshold value; and based on the probabilistic likelihood being below the threshold value, obtain additional damage data for the component. ( … the server may determine the need for additional images depicting the target vehicle or additional information associated with the target vehicle. In particular, the server may determine whether any additional perspective(s) is needed, whether any components are not depicted in the portion of the set of images, whether the general image quality is too low, and/or whether the portion of the set of images is deficient in another regard(s). … the threshold criteria may account some combination of a sufficient amount and type of image perspective(s), a sufficient amount and type of identified vehicle components, a sufficient image quality, and/or other characteristics. … the damage that may be depicted in any additionally-captured images may confirm (or not confirm) the damage indicated by the telematics data. Chen et al. [col. 39, lines 44-67; col. 40, lines 1-67; col. 41, lines 1-23; Fig. 18A-18B]).
Regarding Claim 13, Chen et al., Roddy et al., and McQuown et al. combined disclose the task tracking server system, wherein the instructions, when read by the at least one processor, further cause the task tracking server system to: determine a detail source corresponding to the damage data; and obtain the damage data from a third-party database based on the detail source. (… photos may be collected by, for example, owners of the automobiles depicted in the photos, an automobile insurer against whom an insurance claim was made for repairing or replacing the damaged automobile, etc. Chen et al. [col. 4, lines 13-67]. … The image collector block 202 may, as indicated in FIG. 2, receive raw images of the target object (also referred to interchangeably herein as “source images”) on which change detection is to occur, e.g., from a user, a database, or any other source. Such images may include still image or video images, and/or may include two-dimensional and/or three-dimensional images. Chen et al. [col. 8, lines 28-50]. … FIG. 4 illustrates a point cloud object model 402 of a vehicle, which in this case is a Toyota Camry® of a particular year, that may be used as a base object model for damaged or target vehicle images depicting a damaged Toyota Camry of the same year. The point cloud object model 402 may be stored in, for example, the vehicle image database 108 of FIG. 1, and may be obtained from, for example, the automobile manufacturer, or from any other source. Chen et al. [col. 16, lines 14-45]).
Regarding Claim 14, Chen et al., Roddy et al., and McQuown et al. combined disclose the task tracking server system, wherein the instructions, when read by the at least one processor, further cause the task tracking server system to determine ownership data from the third-party database based on the detail source. (… the block 202 may receive indications of where damage occurred to the automobile or other object as inputs from a user, such as the owner of the automobile, from insurance claims information, etc. In one example, the block 202 may receive first notice of loss (FNOL) information, which is generally information collected by an insurance carrier at the first notice of loss, e.g., when the owner first notifies the carrier of a claim or a potential claim. Chen et al. [col. 9, lines 13-35; Fig. 1-2]).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Cunningham (US 12,026,602) - a computer-implemented method of determining a claim value corresponding to a damaged vehicle includes receiving exception data corresponding to the damaged vehicle, receiving one or more images corresponding to the damaged vehicle, generating a set of image parameters by analyzing the one or more images corresponding to the damaged vehicle using a first trained artificial neural network, generating the claim value corresponding to the damaged vehicle by analyzing the set of image parameters and the exception data using a second trained artificial neural network, and transmitting the claim value.
Tang et al. (US 2019/0279292) - A system for processing an image including a vehicle using machine learning can include a processor in communication with a client device, and a storage medium storing instructions that, when executed, cause the processor to perform operations including: receiving an image of a vehicle from the client device; extracting one or more features from the image; based on the extracted features and using a machine learning algorithm, determining a make and a model of the vehicle; obtaining user information relating to a financing request for the vehicle; determining a real-time quote for the vehicle based on the make, the model, and the user information; and transmitting the real-time quote for display on the client device.
Applicant’s amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to LETORIA G KNIGHT whose telephone number is (571)270-0485. The examiner can normally be reached M-F 9am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Rutao WU can be reached at 571-272-6045. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/L.G.K/Examiner, Art Unit 3623 /RUTAO WU/Supervisory Patent Examiner, Art Unit 3623